>> Yuval Peres: Okay. So let's start. Michael will tell us about the averages and projections of fractal [inaudible].

>> Michael Hochman: Okay. So thank you for the invitation. It's nice to be here again.

So this is joint work with Pablo Shmerkin, who is currently at Manchester, although he was in Seattle for a long time as a grad student.

So I'm going to tell you about some work which is -- there are two sides to, one is some maybe more theoretical work about projections of fractals, and then some applications to problems and dynamics. And I'm not sure which one will interest you more, so I'll try to tell you about both of them.

So the -- so if the abstract setting in fractal geometry is the following thing, you want to understand how nice maps, like linear maps, smooth maps, affect the dimension of sets under projections.

So the picture you should have in mind is you have a set in the plane and you have a linear projection, say, onto a line. You have a set X, and you project it; you get pi of X.

And you'd like to understand how the Hausdorff dimension changes.

So everyone here knows what a Hausdorff dimension is?

>> No.

>> Michael Hochman: No. Ah. I was hoping everybody would. So I will define what Hausdorff dimension means a little bit later.

Right now you should think of Hausdorff dimension like this: if you have a set in the plane, you can associate to it a tree. You divide a square into four equal squares and look at how many of them the set intersects; then you divide each of those into four and see how many of those intersect, and so on, and you count the exponential growth rate of the number of small squares that the set intersects.

That's essentially the Hausdorff dimension. So I'll give a more precise definition later.

So here's an example of what -- this is an example of a different theorem, a classical theorem, which says that if you specify for me which projection you want to get for different directions, then you can build a set which projects in that way to those directions essentially.

So I'm not going to make a precise statement. But what this is a picture of, this is a digital sundial. So this is not a digital screen. I mean, there's no computer involved.

This is just a solid object. And the sun shines onto it from the back. And of course the digits that you're seeing are the complement of the shadow, so the shadow is the black area.

And as the sun moves around, it will show you a different digital shadow, at some resolution.

I think this one has a resolution of five minutes, and presumably -- I'm not sure about this -- it adjusts for the season also. The sun of course doesn't follow the same arc in the sky every day.

So the point is if you tell me for every direction -- going back to this picture, if you tell me for every line what the set is that you want to get, then, given some measurability condition which I won't state, you can construct a set X so that for almost every line the image that you get is, up to measure zero, equal to the image that you wanted.

That's the theorem of Falconer from I think '70s or early '80s. And this is an implementation of that.

Okay. So now what can you say mathematically? So some basic notation. D and K are going to be integers. D is the dimension of the ambient space, so you can think of 2. And K will be the dimension of what you're projecting into. So if you're projecting onto lines from Euclidean space, then D is 2 and K is 1. And Pi D,K is the space of orthogonal projections from R^D to K-dimensional subspaces.

And then there are two things that you can say very generally. One is that if you take any projection at any set, then of course the image of the set is contained in the range of the projection, which is a linear subspace of dimension K. And so the dimension of the image is at most the dimension of the range which is K.

And the second thing you can say is that pi is a contraction. And it's a general fact that contractions don't increase dimension and so the dimension of the image is at most the dimension of X. And so a general upper bound that you can get says that the dimension of the image is bounded by the minimum of K and the dimension of X. And this is true for any projection.

And there's a theorem that goes way back. I think Besicovitch had a version of it, and there are lots of different versions, but this is maybe the simplest. If you take a closed subset of RD, then for almost every projection the -- you actually have equality here.

You will not have a strict inequality. And "almost every" means with respect to any reasonable parameterization of the projections.

So if you want to be more concrete in the plane, if you're looking at pi 2, 1, you're looking at projections onto lines, then you can look at, say, Lebesgue measure on the angle that it forms with the X axis or something like that.

So this is an almost everywhere statement. And it certainly is not true that you expect to have equality everywhere. So here's a very simple example. Take a product set. Take two sets of positive dimension. Take a product set, project the product set onto the X axis, you get the first set back, project it onto the second axis, you get the second set back. And you're getting a drop in dimension because the dimension of the product set is going to be at least the sum of the dimensions, and the images have dimension equal to one of them.

Okay. So here's the picture. You project on the axis. You get a lower dimensional set.

>> Are these [inaudible] define [inaudible] you could just [inaudible] --

>> Michael Hochman: Oh, there's a [inaudible] okay.

>> -- [inaudible] how can you measure [inaudible].

>> Michael Hochman: So rather than have -- let me give box dimensions, because that's just easier. You want real dimension? Okay. So all right. Let me jump ahead. I actually have a slide on this.

>> Could I just something about [inaudible]?

>> Michael Hochman: Sure.

>> [inaudible]

>> Michael Hochman: [inaudible] dimension of a measure, because that's what I'm going to need really. So if you have a probability measure on a set, you look at the rate of decay of balls. So the way I'm defining it -- there are several ways, but you want the largest number alpha so that at almost every point X, if you look at the ball of radius R around X, the mass of that ball is approximately R to the alpha. You could have a constant that depends on X. You don't actually -- so you could require that you have the same decay at every point. I'm looking at the fast -- at the slowest decay.

>> And just for those who don't know, you could define the dimension of a set just to be the largest [inaudible] measure on the set satisfies section [inaudible].

>> Michael Hochman: Yes. Yes.

>> [inaudible]

>> Michael Hochman: That's fine. So, anyway, just to be complete, this is called lower Hausdorff dimension. As I said, you could require that the decay be uniform, or you could look at the fastest decay. But this is what we're looking at: the slowest decay which you get, except maybe on a set of measure zero.

Okay. Where are we?

>> So could I ask a stupid thing? I mean, this doesn't seem to be an exceptional projection, right, because the -- you know, that's projecting onto the line, so K is 1.

>> Michael Hochman: So K is 1. But the -- what you get -- okay. This is not a fractal set, right? Well, so if you look at this, then yeah.

>> [inaudible]

>> Michael Hochman: Imagine this is a Cantor set. I'm being -- I'm not being --

>> [inaudible] product [inaudible].

>> Michael Hochman: Yes. Yes. You're right. In this picture you actually get intervals, so the dimension is 1. But, I mean, imagine this is a Cantor set.

>> So the dimension of the product and two Cantor sets --

>> Michael Hochman: Is at least the sum of the dimensions. For nice, well-behaved sets, it's the sum. So if you take two sets of dimension half, you'll get a one-dimensional set -- whoops, wrong pointer -- a one-dimensional set here, and it will project onto something of dimension half.

Okay. So there are very simple examples where you -- it's really hard to determine what the exceptional set is. Let me give you an example of that.

So here's a very simple fractal set. It's the middle 1/2 Cantor set. You can define it as the points in the unit interval whose base-4 expansion doesn't contain the digits 1 and 2.

Geometrically what it is is you take the interval 0, 1, you remove -- you divide it into quarters and you remove the middle two. So these are gone. And then you take -- you divide each remaining interval into quarters, and you remove the middle two and so on.

And you repeat and you get a Cantor set.
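Since the construction is so concrete, here is a short Python sketch of it (illustrative only; the representation and level count are my choices). It builds the level-n intervals and recovers the box-counting dimension log 2 / log 4 = 1/2.

```python
from math import log

def cantor_intervals(level):
    """Level-n approximation of the middle-1/2 Cantor set:
    keep the first and last quarter of each interval."""
    intervals = [(0.0, 1.0)]
    for _ in range(level):
        nxt = []
        for a, b in intervals:
            q = (b - a) / 4.0
            nxt.append((a, a + q))   # first quarter survives
            nxt.append((b - q, b))   # last quarter survives
        intervals = nxt
    return intervals

# Box counting: at level n there are 2^n intervals of length 4^-n,
# so the dimension estimate is log(2^n) / log(4^n) = 1/2.
n = 8
ivs = cantor_intervals(n)
est = log(len(ivs)) / log(4**n)
print(len(ivs), round(est, 4))   # 256 0.5
```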

And now -- as I'll show you on the next slide, so I won't prove it -- the dimension of these sets A and B is 1/2, and the product set has dimension 1, but there are infinitely many directions where, when you project onto the line of slope T, you get a drop in dimension.

So here's the picture. This is the set I was describing. On each axis you've removed the middle half of each interval, and of the two remaining intervals of length a quarter, you remove the middle half and so on.

And the product set looks like this. And this is the line of slope 1. And what you can see here is that the slope has been chosen in such a way that when you look at these smaller scale copies of the original set, they're matching up precisely when you project.

And what that means is that when you look at the image set, you're seeing in this case nine subintervals of order -- you know, the width of each one is approximately square root 2 over 16. And you can do an easy calculation and see that this -- in any case, this is going to be a set which is a proper fractal with strictly smaller dimension than dimension of the line.

So when you project this set, which has dimension 1, in this direction, you get a drop. And you can choose other directions with rational slopes so that you get a matching up of these small subblocks. In each such direction you're going to see a drop in the dimension.
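To make the dimension drop visible numerically, here is a hedged Python sketch (the scaling conventions are my own). Up to an affine map, projecting the product set onto the line of slope 1 amounts to forming the sumset {x + y}; counting the covering intervals at scale 4^-n gives roughly log 3 / log 4, strictly below 1.

```python
from itertools import product
from math import log

def level_points(n):
    """Left endpoints of the level-n Cantor intervals, scaled by 4^n:
    base-4 strings over the digits {0, 3}."""
    return [sum(d * 4**(n - 1 - i) for i, d in enumerate(digits))
            for digits in product((0, 3), repeat=n)]

n = 7
xs = level_points(n)
# Up to an affine change of coordinates, the slope-1 projection of
# A x A is the sumset {x + y}; count distinct values at scale 4^-n.
sums = {x + y for x in xs for y in xs}
est = log(len(sums)) / log(4**n)
print(len(sums), round(est, 3))   # 3^7 = 2187 distinct sums, ~0.792 < 1
```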

And there's a conjecture which is due to Furstenberg that the only exceptional slopes are actually the rational ones. But that's completely open. There's a paper by Kenyon about a related problem where he analyzes those projections onto lines with rational slopes. But about the irrational slopes, except for the fact that for almost every slope the image set has large dimension, very little else is known.

>> [inaudible]

>> Michael Hochman: Well, not all the rationals are exceptional. There's a classification depending on the congruence properties of numerator and denominator. But you can understand those. The irrational ones, it's much harder.

>> So the conjecture is for this example [inaudible] or something [inaudible]?

>> Michael Hochman: Well, you can generalize, but it's for this type of -- this type of -- okay. So let me tell you a little bit about dynamics on the unit interval.

So -- right. So I'm going to be looking at the map. There's a parameter B, which is an integer. You look at the map from the unit interval to itself, which multiplies by B mod 1.

The way to think about this is to look at the number X in base B. Multiplying by B just means shifting the point one place to the right, and taking mod 1 just means chopping off the integer part. So you're just shifting the base-B representation.
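For concreteness, here is a small Python check of this digit-shift description (using exact rationals to avoid floating-point drift; the sample point 3/10 is an arbitrary choice of mine):

```python
from fractions import Fraction

B = 4

def T(x):
    """The map T_B: x -> B*x mod 1 on the unit interval."""
    return (B * x) % 1

def base_digits(x, base, k):
    """First k base-`base` digits of x in [0, 1)."""
    digits = []
    for _ in range(k):
        x *= base
        d = int(x)
        digits.append(d)
        x -= d
    return digits

x = Fraction(3, 10)
print(base_digits(x, B, 6))      # [1, 0, 3, 0, 3, 0]
print(base_digits(T(x), B, 5))   # [0, 3, 0, 3, 0]: same digits, shifted
```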

This is a -- you know, it's piecewise linear. It has B minus 1 discontinuities.

And when we talk about closed invariant subsets of the unit interval: a subset is closed -- you know what that means -- and invariant just means that when you apply TB to the set, you get the set back.

So if a point starts in A, then multiplying it by B mod 1, you get a number that's still in A. And every point in A arises as the image of some point in A.

The way to think about such sets is you should imagine that you've placed some constraint on the digits. So, for example, we already saw the middle 1/2 Cantor set. I think this is a typo. The middle 1/2 Cantor set is just the set of numbers whose base-4 expansion can be written without the digits 1 and 2.

And if you take such a number and you shift its digit sequence, if it started out not having the digits 1 and 2, after the shift of course it still doesn't have the digits 1 and 2, and so this set is invariant under T4.

And you can create much more complicated examples. In some sense every closed invariant, TB invariant set comes up in this way, but you may have to place infinitely many constraints on the digits. So you could make a list of things that you don't want to appear. You don't want zeros to appear, you don't want 101 to appear in sequence and so on.

Take an infinite list like that, and you'll get all closed invariant sets. But let me just warn you that you can get very strange and complicated sets from a dynamical point of view.

Okay. So the conjecture I want to talk about is this conjecture, Furstenberg's from the 1970s, which says that if you take two sets A and B which are closed and each is invariant, one under T2 and the other under T3, then if you take any projection onto a line, except for the coordinate projections -- pi X is the projection onto the X axis and pi Y onto the Y axis -- for any projection besides these two exceptions, the dimension of the projection of the product set is equal -- that is, not less than -- this number.

In other words, from the almost-everywhere result we already know that this is true for almost every pi. The conjecture is that in this case it should be true for every pi except for the two that, as we saw, always violate this for product sets.

And maybe -- one word about motivation. So what this is saying -- there's a pretty large universe of conjectures about the behavior of objects which are defined in different bases.

So, for example, invariant sets, but there are also conjectures, for example, about the behavior of integers which are powers of 2 when you represent them in base 3.

It's expected that the digits should behave in some sense randomly. And so this is another expression of this phenomenon: when you define sets using independent arithmetic operations, they should behave independently. The independence is important here because, as you saw -- this, again, should be 1/2 --

If you take the middle 1/2 Cantor set, which is this one that I drew for you before, as we said, it's invariant under T4. But when you take that product set, you get infinitely many projections which behave badly.

So it is crucial here that 2 and 3 should be independent. Independent means not powers of the same integer. 4 and 4 of course are not independent.

Okay. And one more comment is that you can write this equality as a statement about sum sets. So you can identify the images of the product set under linear maps with the sum sets of A with affine images of B.

>> So 4 and 6?

>> Michael Hochman: 4 and 6 are okay. They're not powers of the same integer. But 4 and 16 are not.
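This notion of independence is easy to test mechanically. A possible Python sketch (the helper names are mine): two integers are multiplicatively dependent exactly when both are powers of the primitive, non-power base of the first.

```python
def primitive_base(m):
    """Smallest g >= 2 with m = g^k for some k >= 1."""
    for g in range(2, m + 1):
        p = g
        while p < m:
            p *= g
        if p == m:
            return g
    return m

def multiplicatively_independent(m, n):
    """True unless m and n are powers of a common integer
    (equivalently, unless log m / log n is rational)."""
    g = primitive_base(m)
    p = g
    while p < n:
        p *= g
    return p != n

print(multiplicatively_independent(2, 3))    # True
print(multiplicatively_independent(4, 6))    # True
print(multiplicatively_independent(4, 16))   # False: both powers of 2
```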

Okay. So there's really one relevant piece of previous work on this, that I'm aware of anyway, and that's a theorem by Yuval Peres and Pablo from a year or two ago.

And they proved -- so I'm not going to give you a complete definition of all the things here, because that won't be necessary for later on, but they proved a similar statement for sets that come up from a different type of dynamics.

So if you take two sets which are attractors of iterated function systems whose contractions are similarities, and every pair of -- and some pair of contractions has contraction ratios which are log independent over the rationals, then except for the coordinate projections, the projection of the product set in any direction will have dimension equal to the minimum of 1 and the dimension of the product set.

So for the conjecture that I'm discussing, the important thing is that this includes some cases of invariant sets and includes sets like the middle-1/M Cantor set, sets of this type.

And you can push their methods a little bit further to get maybe constructions of this type where you alternate at different stages what you do. It doesn't include, though, all the invariant sets.

So what I want to describe a little bit is the proof of the general case, which is, as I said, joint with Pablo.

Okay. So to explain the general case, we're back to this slide. We're going to work with measures instead of sets. So, again, if you have a measure on Euclidean space, then the dimension is going to be the largest alpha such that the decay at almost every point is like the alpha power of the radius of the ball.

And we're going to be working with invariant measures on the interval. So the measure is invariant if the push-forward of the measure is equal to the measure; that is, for any Borel set A, if you pull A back through TB and you take its measure, it's the same as the measure of the original set.

So, for example --

>> So you're supposed to read these through a small [inaudible] from smaller.

>> Michael Hochman: Yes. It's not interesting. You're allowed to have a constant, so yes. The best behavior that you could hope for for a measure would be that if you take the mass of the ball of radius R around a point, its ratio to R to the alpha converges to 1 as R goes to 0. And then you would say the dimension is alpha. That would be real decay of order alpha at every point.

What would be slightly less strong would be a statement like this. But still it's asking that that happened on --

>> [inaudible]

>> Michael Hochman: 1 is right. But that's still too strong. We're not asking that it happen at every point; we're asking that it happen at almost every point.

We're not even asking that. We're asking that it can happen -- the alpha for which this is true at different points can be different. We're looking at the smallest alpha for which it's true at almost every point. So that's why we have an inequality rather than a convergence over there.

>> [inaudible]

>> Michael Hochman: So this should hold for every R, or as R goes to zero --

>> [inaudible] something that I find is meaningful not only from [inaudible] from a smaller.

>> Michael Hochman: Yeah. I don't care about [inaudible]. It's a local -- it's a local --

>> Local.

>> Michael Hochman: Local. Okay. So a couple of examples of invariant measures that you could think of. Lebesgue measure is invariant under TB. This is actually a little bit surprising if you've never seen it before. Because multiplying by B stretches. But multiplying by B -- well, let's take multiplying by 2.

Mod 1, the map looks like this. This is the graph of 2X mod 1. And it's true that if you take a set and you push it forward, the measure doubles. But if you take a set and pull it back, if you take a small interval, pull it back, you get two intervals of half the length. So it does preserve measure.
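Here is a tiny Python illustration of that pull-back computation (the example interval [0.2, 0.7) is an arbitrary choice of mine):

```python
def doubling_preimage(a, b):
    """The two preimage intervals of [a, b) under T(x) = 2x mod 1:
    one from the branch x/2, one from the branch (x+1)/2."""
    return [(a / 2, b / 2), ((a + 1) / 2, (b + 1) / 2)]

a, b = 0.2, 0.7
pre = doubling_preimage(a, b)
total = sum(hi - lo for lo, hi in pre)
print(pre)     # two intervals, each of half the original length
print(total)   # ~0.5 = b - a: pulling back preserves Lebesgue measure
```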

You can take Hausdorff measure on regular Cantor sets. Or here's an invariant set under times B: if you take an M whose greatest common divisor with B is 1, and you take the fractions whose numerator is any K between 0 and M and whose denominator is M, then multiplying by B just permutes the numerators.

And if you take uniform measure on this set, that will be invariant under multiplication by B mod 1.
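A quick Python check of this permutation (with B = 3 and M = 7 as arbitrary coprime choices):

```python
from math import gcd
from fractions import Fraction

B, M = 3, 7
assert gcd(B, M) == 1   # coprimality is the hypothesis

orbit = {Fraction(k, M) for k in range(M)}
image = {(B * x) % 1 for x in orbit}

# Multiplication by B mod 1 sends k/M to (B*k mod M)/M, a bijection
# of the numerators, so the set -- and uniform measure on it -- is fixed.
print(image == orbit)   # True
```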

So these are sort of the easy examples. Okay. So now back to this conjecture that I'm trying to prove. The statement that we actually prove is that if you take two probability measures on [0, 1], one invariant under times 2 and the other invariant under times 3, and you project the product measure under any projection except for the coordinate projections, then the dimension of what you get should be the minimum of 1 and the dimension of the product measure.

And to go from this to the version for sets, you use the variational principle. So as was mentioned before, if you have a set, then its Hausdorff dimension is equal to the sup of the dimensions of measures supported on the set.

You can also go the other way. If you have an invariant set, you can always produce an invariant measure whose dimension is equal to the dimension of the invariant set.

And so to go from this statement about measures to the statement about sets, suppose you start with two invariant sets. You can find [inaudible] invariant measures of the same dimension, you can form the product measure, which is supported on the product set, you project the product measure, and the image is supported on the image of the product set. If you can show that the projection of the product measure is large, you'll deduce that the projection of the product set is large. So that's a very soft thing.

Now, there are a few special cases of this that were known. Again, in a slightly later paper by Nazarov, Peres, and Shmerkin, they prove this for certain very special types of Cantor sets. Slightly less general than the previous theorem that I had up.

Okay.

>> Is it really just for 2 and 3?

>> Michael Hochman: No. So 2 and 3 is an example of [inaudible]. What you need is two integers M and N with log M over log N irrational -- in other words, not powers of the same integer. Yeah. Somehow it's traditional to use 2 and 3.

Okay. So I want to tell you a little bit about the proof. So some notations. So we've got our two measures, mu and nu. One is T2 invariant, the other is T3 invariant. We have the product measure theta. Alpha is going to be this number that we're aiming at. So the minimum of 1 and the dimension of theta.

I'm working in two dimensions, and it's going to be useful to parameterize projections in the following way. So LT will be the line of slope T -- it's just the line Y equals TX -- and pi T is the orthogonal projection onto LT.

So this doesn't let me represent the projection onto the Y axis, but I don't really care about that. And I'm going to make a technical assumption, which maybe I won't explain, which is that I'm going to assume that the measure theta is ergodic under the product action. That's not necessary. It just will make the presentation slightly more pleasant.

So some of the things I will say will depend on this and some won't. And I'll try to point out what does.

Okay. So the proof really has two parts, and one of them is in fact a statement which is very general and has to do with certain types of fractals that you get in dynamical situations.

The second part uses the fact that these measures are invariant under independent dynamics, under T2 and T3 and 2, 3 are not powers of the same integer.

And these two, the way that we use these two pieces of information, are quite separate.

So first, let me show you how we use the invariance under 2 and 3.

So the first statement is that if you look at the function E of T, which is the dimension of pi T theta, so remember that this is the projection onto the line of slope T of our product measure, if you look at this function, then it's going to be invariant under the semigroup, which you get by taking powers of 3 over powers of 2 for --

>> [inaudible]

>> Michael Hochman: Sorry?

>> [inaudible]

>> Michael Hochman: Oh. Yes. One of them should be an [inaudible]. Either one. As long as it's not both. So, in other words, this is what it means. It means that if you evaluate E at T, it's the same as evaluating it at T/2 and at 3T.

So this is a very soft argument, and I'm going to show it to you. So we're fixing T and we want to show that the dimension of pi T theta is the same as the dimension of pi T/2 theta. This is what we call E of T. This is E of T/2.

And here's a picture of theta, which is of course inaccurate in most ways, because it's not a product measure and it looks like it has, you know, positive -- it looks equivalent to Lebesgue and so on. But imagine that this is your measure and this is your line. And so you're projecting the measure onto the line. You're getting these three little measures.

Now, let's look at what happens to the same measure when you apply the map T2 cross the identity to it. So what we're doing is we're taking the map that takes a point (X, Y) in the unit square; it doesn't touch Y, but it multiplies X by 2 mod 1. So you're just applying T2 to the first coordinate.

If you do that, first of all, you can break up the measure into two halves. You get two rectangles. The base of the first is from 0 to 1/2. The base of the second is from 1/2 to 1.

And when you apply this map, each of these rectangles gets mapped onto the entire unit square.

And the measure, the piece of the measure in this rectangle, which I'll call theta 1, gets mapped -- slightly imprecisely, let me say it gets mapped onto theta -- and the piece in the second half also gets mapped to theta.

And here I have the line of slope T/2, and I've drawn the projection of each of the halves onto that line. So here I have pi T/2 theta 1, and here I have pi T/2 theta 2.

And of course the sum of these is just pi T/2 of theta. So that's this equality. But then the other thing that you can notice is that when you apply the map T2 cross the identity to each of these halves, this half is going to be mapped onto the image of theta under pi T, and so is this half. That's not quite true -- [inaudible] you have to make an affine change of coordinates -- but pi T theta is going to be the sum of affine images of these two small measures.

And then all you need is a very small lemma that says that if you have two measures which are sums of affine images of the same two measures, then they have the same dimension. So this is very soft. So you get that the dimension of this and this are the same.

Now, the same argument works when you multiply T by 3, only this time you're going to divide the unit square into three rectangles like this. And when you apply the map which does nothing to the first coordinate and multiplies the second coordinate by 3 mod 1, each of these rectangles is going to get mapped onto the unit square again. And the measure in each of them will be mapped onto the entire measure, essentially.

And so the projection will be the sum of affine images of these three measures, and again you'll find that the dimensions are the same. So that's the invariance.

The one thing I should point out here -- if any one of you is paying close enough attention -- is that when I apply this map to the line, actually the slope should be increasing.

I mean, you would think that to get this line from this line, you actually have to divide by three, not multiply by three. But what's happening is that we're pulling back the projections, we're not pulling back the line.

So what you should actually be looking at is the fibers of the projection. And since the slope of the fibers is becoming larger, the slope of the line that you're projecting onto is becoming smaller when you push this way. So this is actually the correct picture.

Okay. So that's the only place where we use the invariance under 2 and 3. I haven't quite told you the punch line yet. But the second part is the part which I said was general, and I'll explain a little bit later what's required for this statement.

But it doesn't require that -- this is true also when you do have powers of the same integer, for example, if you took a product of two T2-invariant sets. The statement is the following. So the function E -- remember that E is the function that tells me, for T, what the dimension of the projection onto the line LT is.

E is bounded below by a lower semi-continuous function E hat, which is equal almost everywhere to alpha.

Remember that we know already that E is equal almost everywhere to alpha. This was Marstrand's theorem, which I stated in the beginning for sets. But it's true for measures too.

But knowing this gives more information. So here's a consequence of having such a function E hat. It means that if you take any positive epsilon and you look at the set of Ts so that the dimension of pi T theta is at least alpha minus epsilon, this will be an open, dense set.

It will be open because it's bounded below -- well -- I'm sorry, actually it won't be open, but it will have nonempty interior, because it will contain the set of Ts where E hat of T is at least alpha minus epsilon, and since E hat is lower semi-continuous, that's an open set. And this will also be a dense set because we know that E hat is almost everywhere equal to alpha.

So it gives you some topological information. It tells you that if you look at the set of Ts where the projection is almost as large as you want it to be, it will contain an open, dense subset.

And this part, as I said, it doesn't depend on the product structure, it doesn't depend on the dependence or the arithmetic properties of 2 and 3. It depends on some self-similarity property that these invariant measures have. I'll describe -- I'll tell you more about that in a bit.

But let me just explain how these two things imply the theorem. So fix your epsilon.

Now this set, this set of approximately good directions -- what we know about it is, first of all, that it contains an open, dense subset of the ray from zero to infinity. We also know that it's invariant under multiplication by 3 to the N over 2 to the M.

Now, this semigroup is dense in the positive reals. That's a consequence of the fact that the ratio of the logs of 2 and 3 is irrational. So you put these two things together and you conclude that U epsilon is the entire ray. Because you have an open set, and moving it around by a dense set of translations -- essentially, in logarithmic scale it's a dense set of translations -- you're covering the entire ray.
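You can see this density concretely. Below is a rough Python search (the helper name and search bound are my choices), working in log base 2, for exponents n, m with 3^n / 2^m close to a given target; for simplicity I allow m to be any integer.

```python
from math import log2

def approximate_by_semigroup(t, max_n=2000):
    """Find n, m making 3^n / 2^m close to t > 0. This works for
    every t because log 3 / log 2 is irrational, so the set
    {n*log2(3) - m} is dense in the reals."""
    target = log2(t)
    best = None
    for n in range(max_n + 1):
        m = round(n * log2(3) - target)   # nearest integer exponent of 2
        err = abs(n * log2(3) - m - target)
        if best is None or err < best[0]:
            best = (err, n, m)
    return best

err, n, m = approximate_by_semigroup(5.0)
print(n, m, err)   # 3^n / 2^m matches 5 to within a tiny error in log scale
```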

So now if you fix T, then you know the projection in direction T for T positive is at least alpha minus epsilon. For every epsilon you get that the projection is of dimension alpha.

Well, you actually get that it's at least dimension alpha, but the upper bound of alpha is trivial, so you get equality. And for negative T you argue in the same way, you just look at the negative ray.

So all I have to show you at this point is the semi-continuity. And that is somehow the main point of this work.

So let me -- so okay. So let me go over again what the problem is in some sense. We're trying to understand projections of a set, and the problem is that when you vary the projection just a little bit, what can happen is that points in the set that previously projected to the same point suddenly become separated.

And of course when you look at these two pictures from far away, they look the same. I mean, these two have only become slightly separated. And at a low resolution, you're still getting the same spreading out of the image set in both pictures. But when you get close, you're seeing a difference.

And so what we would like to do is find some way of capturing the dimension of the image at a finite scale. Okay. So we're going to measure this using Shannon entropy, or just entropy.

So the definition of that is: suppose that you have a probability measure eta on the line and you have a partition of the line into measurable sets. The entropy is defined to be minus the sum of the mass of the atoms times the log of the mass of the atoms, with the convention that zero log zero is zero.

So I suspect that everyone here has seen this before. But what this measures is how spread out the measure eta is among the atoms of P.

So a few basic properties: first of all, it's always a nonnegative quantity. That's obvious. It's also pretty easy to see that the entropy is equal to zero if and only if the measure is concentrated on a single atom.

On the other hand, if the measure is supported on N atoms, you're going to get an upper bound of log N for the entropy. And, in fact, you have equality if and only if the measure is equally distributed, uniformly distributed among those N atoms.

So, again, if you haven't seen it, what you should understand is that it measures how uniformly eta is spread out among the atoms of P.
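To make the definition concrete, here is a minimal Python sketch of Shannon entropy; the function name and the example distributions are illustrative, not from the talk.

```python
import math

def entropy(masses):
    """Shannon entropy -sum p*log(p) of a probability vector,
    with the convention that 0 * log 0 = 0."""
    return -sum(p * math.log(p) for p in masses if p > 0)

# Concentrated on a single atom: the entropy is zero.
print(entropy([1.0, 0.0, 0.0]))

# Uniform on N atoms: the entropy attains the maximum, log N.
print(entropy([1.0 / 8] * 8), math.log(8))

# Anything non-uniform on N atoms falls strictly below log N.
print(entropy([0.5, 0.25, 0.25]), math.log(3))
```

The three cases mirror the properties stated above: zero exactly on a point mass, and the maximum log N exactly at the uniform distribution.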

Now, take a parameter B, and B is maybe not such a good letter. This is not the B from T_B before. This is just an integer -- not an integer, a small real parameter. And we'll define H_B(eta) to be the entropy of the measure eta with respect to the partition of the reals into intervals of length 1/B. Sorry. So I guess it's a big --

>> [inaudible]

>> Michael Hochman: Yeah, sorry. My -- let me see. So B is a big parameter. So this is the line. And you've just divided it into -- you see down here?

>> No, it didn't work.

>> Michael Hochman: You can't see that. So you're just partitioning the line into small intervals. And you can decide that zero is going to be one of the end points so you don't have any ambiguity. And you're looking at how spread out your measure is among the atoms, you're taking the entropy.

And we're going to be looking at the measure pi T of theta. So you should think of this line as actually living on some line in the plane. We've divided it into small intervals.

We're taking our measure theta, projecting it and measuring the entropy at that scale of the projected measure.

Now, if you look at the quantity 1 over log B times H_B of eta, this is an approximation of the dimension of eta. So here's a rough estimate to show you why this is true. Suppose that you have a measure where the decay bound is strict and holds everywhere. So suppose that every ball of radius R has mass at most R to the beta, where R is 1/B.

Suppose we have a measure for which this is true. If you estimate the entropy, what you get is a sum over the elements of the partition. So you're summing over all intervals I of length 1/B, and you're taking eta of I times log eta of I, and when you plug this bound in for eta of I, you get an inequality like this. So you're summing the measures of the intervals times log of (1/B) to the beta, but the eta of I sum to 1, because this is a probability measure and the Is are a partition of the probability space. So you just get beta log B.

And so -- again, the assumptions here are a little bit too strong, but it's true that in general the dimension of a measure eta is bounded above by the limit of its normalized entropies, 1 over log B times H_B of eta.
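A rough numerical sketch of this estimate (my own toy construction, not from the talk): for the middle-thirds Cantor measure, the normalized scale-B entropy recovers the dimension log 2 / log 3.

```python
import math
from collections import Counter
from itertools import product

def dimension_estimate(points, B):
    """(1/log B) * H_B of the uniform measure on `points`, where H_B is
    the entropy with respect to intervals of length 1/B."""
    # The tiny offset guards against float roundoff at interval endpoints.
    bins = Counter(int(x * B + 1e-9) for x in points)
    n = len(points)
    H = -sum((c / n) * math.log(c / n) for c in bins.values())
    return H / math.log(B)

# Middle-thirds Cantor measure approximated at digit depth 12: the points
# are sums of d_i * 3^(-i) with digits d_i in {0, 2}, each of mass 2^(-12).
cantor = [sum(d * 3.0 ** -(i + 1) for i, d in enumerate(ds))
          for ds in product((0, 2), repeat=12)]

# At scale B = 3^6 the estimate equals log 2 / log 3 = 0.6309...,
# the dimension of the Cantor measure.
print(dimension_estimate(cantor, 3 ** 6))
```

At this triadic scale each occupied interval carries equal mass, so the entropy is exactly 6 log 2 normalized by log 3^6, matching the heuristic calculation above.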

Now, what we're looking for is a bound in the opposite direction. We would like to use these entropies to bound from below the dimension of eta. We're trying to show that the dimension of eta is large. But that's not generally true. Generally the inequality can be strict.

And another comment is that when you take B to infinity, you're taking smaller and smaller intervals on the line, and you again have this problem of sensitivity: a small change to the projection will cause a large change in where the measure falls with respect to these atoms.

So okay. So let me take a break and define one more thing, and then I'll tell you how we actually do it. So we're going to be using dyadic cubes to define from our given measure a new sequence of measures.

So dyadic cubes are defined -- so take a point in a -- take a point X in the unit square.

Divide the square into four equal subsquares. There's one of them, which contains the point, I'm calling that D1, divide that one into four equal subsquares. One of those squares contains X, I'm calling that D2 of X, and so on. You continue.

You get a sequence of squares of side -- the nth square has side 2 to the minus N. They all contain X. If X happens to lie on -- if one of the coordinates of X happens to be a dyadic rational, you might have ambiguity, but just choose your convention how to choose your -- which square it belongs to.

So, again, to every X we associate a sequence of squares in this way, cubes or squares.

Now, starting from our measure theta, which is a measure on the unit square, and a point in the support of theta, you can look at the sequence of dyadic squares.

And for each of them you can restrict the measure to that square and scale it back up to be a measure on the unit square, just by a similarity, and normalize it to be a probability measure, and in this way for every point in the support you get a sequence of probability measures theta XN, which we're going to call -- so the sequence of measures we're going to call the scenery.

So, again, fixing X, you're basically zooming in on X along dyadic cubes and rescaling them back to have full size.
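The zooming procedure can be sketched in a few lines of Python (the function names are mine; this just tracks the squares D_n(x) and the rescaling map, not the measures themselves):

```python
def dyadic_squares(x, y, depth):
    """The squares D_1(x), ..., D_depth(x) containing (x, y); the n-th
    one has side 2^(-n). Each is returned as (corner_x, corner_y, side)."""
    cx, cy, side = 0.0, 0.0, 1.0
    squares = []
    for _ in range(depth):
        side /= 2
        if x >= cx + side:  # point lies in the right half
            cx += side
        if y >= cy + side:  # point lies in the top half
            cy += side
        squares.append((cx, cy, side))
    return squares

def rescale(px, py, square):
    """The similarity blowing `square` back up to the unit square --
    pushing a measure through this map gives the scenery measures."""
    cx, cy, side = square
    return ((px - cx) / side, (py - cy) / side)

for sq in dyadic_squares(0.3, 0.8, 3):
    print(sq, rescale(0.3, 0.8, sq))
```

The convention here (half-open squares, via the `>=` tests) resolves the ambiguity for dyadic-rational coordinates mentioned above.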

So the idea of doing this to a measure is not at all new -- it's been around since the '30s probably. Certainly Furstenberg used it in the '70s, and since the '80s, when fractals became a popular subject, lots of people have done this sort of thing. So there's nothing really new about this.

So here's the main technical theorem, and this is really, I think, the important -- at least the most useful -- piece of this. This is the piece that will be useful for other problems.

Take a measure on the unit square and take your base B --

>> [inaudible]

>> Michael Hochman: Well, yeah. B is not a base. B is the same parameter. B is defining the size of these small intervals. So we're taking intervals of size 1 over B. So ignore the base. And fix your projection pi of T.

And suppose that the measure theta has the property that for almost every X with respect to theta, when you look at the scenery of X -- that is, you look at the sequence of rescaled measures that you get by zooming in to X at scale 2 to the minus N -- if you take those measures and you project them through this projection pi T and you take this entropy estimate, 1 over log B times the entropy of this, suppose that the average value of that, as N goes to infinity, is beta.

So what you've done here, going back to this, is you've fixed capital N. You've taken these -- first N of these squares. For each of those you've looked at the blown-up measure, you've projected it and you've taken the entropy at scale B. And you've averaged.

So suppose that the limit of this is beta. Then you can get a lower bound on the dimension of the projection of theta, the bound is going to be at least beta minus an error, and the error term goes to zero as the resolution B goes to infinity.
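The averaged quantity in the theorem can be sketched numerically. This is entirely my own toy construction, not the talk's: take the uniform measure on a dyadic self-similar set, zoom in along dyadic squares, project each rescaled piece by pi_t(u, v) = u + t*v, and average the scale-B entropies.

```python
import math
from collections import Counter
from itertools import product

def norm_entropy(values, B):
    """(1/log B) * H_B of the uniform measure on `values`."""
    bins = Counter(math.floor(v * B) for v in values)
    n = len(values)
    return -sum((c / n) * math.log(c / n) for c in bins.values()) / math.log(B)

# Uniform measure on a depth-9 approximation of a dyadic self-similar set:
# keep 3 of the 4 quadrants at each step (so dimension log 3 / log 2).
digits = [(0, 0), (1, 0), (0, 1)]
pts = [(sum(dx * 2.0 ** -(i + 1) for i, (dx, dy) in enumerate(ds)),
        sum(dy * 2.0 ** -(i + 1) for i, (dx, dy) in enumerate(ds)))
       for ds in product(digits, repeat=9)]

def scenery_average(x, y, N, t, B=64):
    """(1/N) * sum over n <= N of (1/log B) H_B(pi_t of the measure
    rescaled from the dyadic square D_n(x, y))."""
    cx, cy, side, total = 0.0, 0.0, 1.0, 0.0
    for _ in range(N):
        side /= 2
        if x >= cx + side: cx += side
        if y >= cy + side: cy += side
        # Restrict to the current dyadic square and blow it back up.
        local = [((px - cx) / side, (py - cy) / side) for px, py in pts
                 if cx <= px < cx + side and cy <= py < cy + side]
        total += norm_entropy([u + t * v for u, v in local], B)
    return total / N

print(scenery_average(0.0, 0.0, N=3, t=0.7))
```

The theorem then says, roughly, that if this average converges to beta as N grows, the projection of the original measure has dimension at least beta minus an error vanishing as B grows.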

And now the point of this is that these quantities, 1 over log B times H_B of pi T theta, are essentially continuous in T. So it's not quite true that it's continuous, but you do get a bound of the following type. If you take any measure -- oh, dear. This is another typo. These thetas should be etas.

Take any eta on the unit square, and you take any pair of slopes S and T and you compare the scale-B entropies of the projections pi T theta and pi S -- pi T eta and pi S eta. The difference is bounded by a constant, at least when S is close to T. And everything is independent of B.

So how close S has to be to T and what this constant is don't depend on B.

So essentially this quantity is very stable under perturbations. Yeah.

>> [inaudible]

>> Michael Hochman: Why this is true. So essentially the point is that -- so let's take our line and our measure and project it. Maybe I'll draw it like this. What could go wrong when you change the projection? What could happen is that you've got some mass over here and some mass over here, and they land in different atoms. Then when you change the slope a little bit, they might land in the same atom. And since entropy is measuring how spread out things get, this will cause the entropy to change.

But essentially, if you make a small change in the angle, the only thing that can go wrong is that atoms are going to shift to adjacent cells. And so when B is large, that will contribute very little -- contribute a constant amount to the error. So that's the source of this --

>> [inaudible] just to illustrate [inaudible] what does it depend on?

>> Michael Hochman: It depends on the dimension. So here in fact -- so here I'm -- talking about projection from two dimensions to one dimension. What's coming into play here is how much over -- so how much overlap you have --

>> [inaudible]

>> Michael Hochman: Yeah. It's explicit. It's not a complicated constant. I just didn't want to -- I don't know if I ever bothered to compute what it actually is.

>> [inaudible] so much will be [inaudible] not so much for [inaudible] expression.

>> Michael Hochman: You can be explicit about how close S has to be to T and exactly what it is. It's completely explicit. And it's not hard.
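This stability is easy to check numerically. Here is a toy sketch (my own construction, not from the talk) comparing the scale-B entropies of two nearby projections of the product of two Cantor set approximations:

```python
import math
from collections import Counter
from itertools import product

def norm_entropy(values, B):
    """(1/log B) * H_B of the uniform measure on `values`, using the
    partition of the line into intervals of length 1/B."""
    bins = Counter(math.floor(v * B) for v in values)
    n = len(values)
    return -sum((c / n) * math.log(c / n) for c in bins.values()) / math.log(B)

# Product of two depth-8 middle-thirds Cantor sets in the unit square.
cantor = [sum(d * 3.0 ** -(i + 1) for i, d in enumerate(ds))
          for ds in product((0, 2), repeat=8)]
pts = [(x, y) for x in cantor for y in cantor]

def proj_entropy(t, B=256):
    """Scale-B entropy of the projection pi_t(x, y) = x + t*y."""
    return norm_entropy([x + t * y for x, y in pts], B)

# A small change of slope moves each atom at most into an adjacent cell,
# so the normalized entropy changes by only O(1)/log B.
print(proj_entropy(0.7), proj_entropy(0.7 + 1e-3))
```

The two printed values are close, while the underlying point-by-point assignment of mass to cells differs, which is exactly the stability phenomenon described above.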

Okay. So here's how you prove the semi-continuity. What I'm going to show you is not quite the semi-continuity. I claim that there's actually a semi-continuous function around, but I'm not going to tell you what it is -- on the next slide I'll give you a hint about what it really is.

What I'm going to show you is how I can produce an open, dense set of directions where the projection is alpha minus epsilon or more.

So okay. So what you can show is that if you take a typical slope T and look at these averages, so you average along that, you pick a typical point X for the measure theta, you average along dyadic cubes, the entropy at scale B of those rescaled measures of the scenery, you could show that for a typical T this is going to be -- this should be an inequality. This is at least alpha minus delta of B.

And the reason is, again, this theorem which I stated in the beginning: we know that for almost every T, the dimension of this projected measure is alpha. And we know that these entropy estimates are actually usually bigger than the dimension when B is large. So you can show that you get a bound of this type, where delta B is something that goes to 0 when B goes to infinity.

So this is true for a typical T --

>> [inaudible]

>> Michael Hochman: Yeah. Delta B is a constant over log B. So now you perturb T a little bit, and using the bound on the previous slide, the error that you're introducing by changing S to T is just O of 1, but you've divided by log B. So you're getting another error term, O of 1 over log B.

No, Yuval, this is not -- this one is not. This one comes from the speed of decay, and that comes from somehow the Shannon [inaudible] theorem for the measurement of data.

>> [inaudible]

>> Michael Hochman: This you can't bound universally. What could happen is that -- I mean, mu and theta -- mu and nu could be [inaudible] distributed at every scale up to 1,000. So you won't see it. So you'll need B to be large for this to be true.

So -- okay. So you've perturbed T a little bit and you've gotten this type of bound -- now, actually, I haven't told you why this limit exists, so you might have to write [inaudible]. But it's still true.

And now applying the theorem that we had about -- if you have a lower bound for these averages, then you get a lower bound for the dimension. And here you have S again, the same S as here. You get another error term, another delta B.

But you choose B large enough, and I told you that this O of 1 doesn't depend on B, and this is going to be larger than -- this will hold for every S close enough to T and for every B large enough. And so you get a bound -- you get an open set around T where the dimension of the projection pi T theta is at least alpha minus epsilon, and you're getting a neighborhood around a set of measure 1 of Ts. So you're getting an open, dense set of Ts where this bound holds. So that's how you get an open, dense set.

Now, let me say something about the semi-continuity. So what is the semi-continuous function actually? Well, I claim that there is a stationary measure-valued process, which I'm going to call theta hat N, which has the following properties.

So, first of all, for almost every X with respect to theta in the square, if you look at the scenery sequence, theta XN, it's going to behave statistically like this stationary process.

Where what I mean by that is if you take any observable and you average the scenery sequence for that observable, you get the mean of that observable for this process.

Then if you take the function E hat of T, which is the dimension of the projection of this random measure, so this is the -- a first realization of this process. So it's a random measure --

>> [inaudible] about theta?

>> Michael Hochman: So here theta is this product measure times -- and the reason that I'm lying here is -- I'll say something in a second about that.

Now, if you look at E hat T, which is the dimension of the projection of theta 1 hat by pi T, you can -- it's pretty easy to show, if you assume that the process theta hat is ergodic, that this is independent of the realization, that it's almost surely constant. So you get a nonrandom function E hat. And you can show that this function is lower semi-continuous and that ET is at least as big as E hat T for every T in R.

So the proof that E hat is lower semi-continuous is very similar to what I show on the previous slide, but it does require a little bit more work, so I'm not going to explain that.

Now -- okay. Now, where am I lying? In two places I've stated that I'm lying. The reason that I'm lying is that the dyadic structure here is not appropriate for the measures that we're talking about.

The measures that we're talking about have a -- the nu measure, the measure in the Y direction, is somehow adapted to base 3. And so what you really would have to do here is not take dyadic cubes; in some smart way you'd have to chop things up into three in the Y direction and alternately -- but then you have this problem that your rectangles are getting very narrow and wide -- they're getting very flattened when you zoom in. So you have to do something a little bit more delicate. So that's a technical point, which is not so interesting.

Besides that, I think I haven't lied very much. And I'm out of time. So I guess I'll stop here.

[applause]

>> Yuval Peres: Questions?

>> Okay, show us the most interesting of the next 17 slides.

[laughter]

>> Michael Hochman: Um...

>> [inaudible] you really wanted to show. I was giving the opportunity [inaudible].

>> Michael Hochman: I can put two more applications up on the slides. So this machinery -- so maybe here's another application where you can very nicely see the distinction between where the invariance is coming in and where the semi-continuity is coming in.

What you have here, you take -- I haven't explained what an iterated function system is. But you take a fractal which is -- okay. So I can tell you what it is. You take a family of similarities in R2. So, for example, maybe you take the unit square and you map it to some other, smaller squares. And I'm going to assume that they're somehow separated. So I've taken one square, I've turned it into three squares. And then for each of them you do the same thing. I guess I should choose an orientation so you know which way is up.

So for each of them you repeat. So this one is going to break up into three, which look like this. And I can't do the other ones in my head. And in the end, when you take the intersection of all these things, you're going to get some sort of fractal in the plane.

So this intersection, as you repeat this, it's like the construction of a Cantor set. You take the intersection, you get a fractal set and you'd like to understand what this geometry is.

So that's the attractor of an iterated function system.
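For concreteness, here is a toy sketch of such an attractor; the three maps below are my own example of a separated family of similarities, sampled by the standard chaos game.

```python
import random

# A toy iterated function system: three separated similarities of the
# unit square, each contracting by 1/3 (chosen purely for illustration).
maps = [
    lambda p: (p[0] / 3,         p[1] / 3),          # bottom-left
    lambda p: (p[0] / 3 + 2 / 3, p[1] / 3),          # bottom-right
    lambda p: (p[0] / 3 + 1 / 3, p[1] / 3 + 2 / 3),  # top-middle
]

def chaos_game(n=10000, burn_in=100, seed=1):
    """Approximate the attractor by iterating randomly chosen maps;
    after a burn-in, the orbit points lie close to the attractor."""
    rng = random.Random(seed)
    p = (0.5, 0.5)
    pts = []
    for i in range(n + burn_in):
        p = rng.choice(maps)(p)
        if i >= burn_in:
            pts.append(p)
    return pts

pts = chaos_game()
```

Plotting `pts` would show the self-similar set; one could then feed projections of these points into the entropy estimates from earlier in the talk.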

Now, what would -- and so we'd like to look at projections of these things onto lines. So take, say, projections from R2 onto lines, one-dimensional lines.

And so the conclusion that we'd like to have is that the dimension of the projection in every direction -- in this case there will be no exceptions -- is the maximum that it could possibly be, and the condition that you need for that to be true here -- as we saw before, we had this function E which was invariant under multiplication by 3 and division by 2.

Here what you require is that the orthogonal parts of these contractions generate -- if you look at the group that they generate, they should act minimally on the space of projections.

So you look at pi 2, 1, and you can map a projection pi to pi composed with U -- I'm being a little bit imprecise, because when you take a -- actually, I'm not. I'm being totally precise. So okay. So you have an action on the space of projections and if that action has dense orbits, then you can conclude that every projection has the largest possible dimension.

>> And two dimensions you just [inaudible] at least one of the [inaudible].

>> Michael Hochman: Has an irrational part, yes.

>> So what linear means [inaudible].

>> Michael Hochman: Well, linear means that -- you can do this thing with any family of contractions. But there's something redundant here, because I'm saying there's similarities.

>> But you mean that the similarities are linear.

>> Michael Hochman: Yes. This is actually a linear map that takes the square to a smaller square.

>> They are affine. By linear you mean affine.

>> Michael Hochman: Oh, affine. Okay. Yes. I'm sorry. Yes, yes, yes.

>> So how is this related to the times 2, times 3 [inaudible]?

>> Michael Hochman: Okay. So that's the other -- that's the -- two more of the 17 remaining slides. So it implies it. So it implies what's the known part of it. So there's a conjecture that -- yeah, it doesn't imply it, unfortunately. There's another more famous conjecture by Furstenberg which says that if you take the -- the same -- if the same invariant measure mu is simultaneously invariant under T2 and T3, then the prediction is that it should be a convex combination of Lebesgue measure and atomic measures.

And what's known about that -- it's a theorem of Rudolph's, and there are a number of other proofs by now -- is that this is true when you assume that the dimension of mu is positive.

And this theorem about projections implies this theorem rather easily. I mean, so -- so this is the proof. From the projections theorem, it fits on one slide. And that's a complete proof. But it doesn't give any information about the open case of times 2, times 3.

>> So is there anything -- so this is for positive dimension [inaudible]?

>> Michael Hochman: Positive dimension. Yes.

>> So is there anything in between that's...

>> Michael Hochman: People have defined things in between, but you can't prove anything with it for this conjecture. I mean, yeah, that would be the natural approach, but I don't know of anyone who's managed to do anything.

>> Yuval Peres: Okay. Let's thank Mike again.

[applause]
