>>: Until a few years ago, the west coast was kind of combinatorially completed,
but Benny's move kind of single-handedly changed the center of gravity. So
anyway, we're very happy to have him here as a frequent visitor. And I'll let him
give his own title.
>> Benny Sudakov: Thanks. So I will talk about -- it's a long title -- nonnegative k-sums, fractional covers, and probability of small deviations. I will explain what these things are and how they are connected to each other.
And I want to say that this is joint work with Noga Alon and my student, Hao Huang, who is graduating next year.
So let me start with the following simple problem. You have numbers x_1, ..., x_n with nonnegative sum -- so the sum of these numbers is at least zero. And you ask how many nonnegative partial sums there are: you take partial sums and count the nonnegative ones. And I want the minimum. So I want to ask how few nonnegative partial sums you can have if you have n numbers with nonnegative sum.
Now, if you think a little bit, you will see that the answer is 2 to the n minus 1 -- you can have as few as 2^(n-1). And the example is the following. The natural way to create few nonnegative partial sums is to take one big number and lots of small negative numbers with total sum zero: say x_1 = n - 1 and n - 1 numbers equal to -1.
Then obviously any partial sum which is nonnegative should contain this big number, so you have at most 2^(n-1) nonnegative partial sums. The empty sum, let's say, I don't count here.
And the second thing is that you can also see pretty easily that this is tight. The reason why this is tight is that for every subset I of the set of indices, you have the sum of the x_i over i in I, plus the sum of the x_i over i in the complement of I, greater than or equal to zero, because this is the sum of all the numbers. And therefore at least one of the two sums should be nonnegative. Therefore every --
>>: (Indiscernible)?
>> Benny Sudakov: (Indiscernible) number of nonnegative --
>>: No, one, one, cannot one -- (indiscernible).
>>: (Indiscernible).
>>: (Indiscernible), the word "can."
>> Benny Sudakov: One can. Yes.
>>: Thank you.
>> Benny Sudakov: Okay. So for every set, either the set or its complement gives you a nonnegative partial sum. So that's tight. And this, if you think about it, is a little bit reminiscent of a basic question, a toy question, which people ask early on in extremal set theory: you have a bunch of subsets of an n-element set and you want an intersecting family. This will be a good parallel to keep in mind during this talk.
So if you want the largest intersecting family of subsets, indeed this family has size 2^(n-1), and the same argument shows that you cannot have more, because you cannot include both a set and its complement. Okay.
So that's a toy example.
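(A quick sanity check of this toy example, not from the talk: the sketch below, in Python, counts the nonnegative partial sums of the construction x_1 = n - 1 with n - 1 copies of -1 and confirms the value 2^(n-1); the choice n = 8 is just for illustration.)

```python
from itertools import combinations

# Count the nonempty subsets with nonnegative sum for the toy construction:
# x_1 = n - 1 and n - 1 copies of -1, so the total sum is zero.
def nonneg_partial_sums(xs):
    n = len(xs)
    count = 0
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            if sum(xs[i] for i in idx) >= 0:
                count += 1
    return count

n = 8
xs = [n - 1] + [-1] * (n - 1)
print(nonneg_partial_sums(xs), 2 ** (n - 1))  # both 128: the bound 2^(n-1) is attained
```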
So looking at this toy example, Manickam, Miklós, and Singhi -- I will not spell out their names; they are kind of long and I will probably get the accents wrong -- asked the following question in the '80s. Suppose now we want to restrict the sets I'm looking at: I want to take only partial sums which contain exactly k terms. So let me define f(n, k) to be the minimal number of nonnegative k-sums among x_1 up to x_n with the sum of the x_i nonnegative. Okay.
So suppose I now want all my sums to have precisely k elements. Then note that the same example gives a bound on f(n, k) again: every sum which is nonnegative should contain this big number, and the number of k-element sets containing it is n-1 choose k-1. So this example shows that f(n, k) is at most n-1 choose k-1.
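(Again a small illustrative check, not from the talk: the same construction, now counting only k-element sums. Every k-set containing x_1 has sum (n-1) - (k-1) >= 0 and every other k-set has sum -k, so the count should equal C(n-1, k-1); the values n = 10, k = 3 are arbitrary.)

```python
from itertools import combinations
from math import comb

# Count the nonnegative k-element sums in the construction x_1 = n - 1,
# the other n - 1 entries equal to -1.
n, k = 10, 3
xs = [n - 1] + [-1] * (n - 1)
count = sum(1 for idx in combinations(range(n), k)
            if sum(xs[i] for i in idx) >= 0)
print(count, comb(n - 1, k - 1))  # 36 36
```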
And the conjecture which they made is that f(n, k) is in fact equal to n-1 choose k-1. Now, here you should be a little bit more careful -- it's not correct as I stated it -- because you need to look at the small examples, where n is small. As I said, there is a good parallel between this and extremal set theory, although there is no direct connection. The relevant statement from extremal set theory is the so-called Erdős-Ko-Rado theorem, which, exactly like here, says that if you take a collection of k-element subsets of an n-element set, and you ask this collection to be intersecting -- every two sets intersect -- then you cannot have more than n-1 choose k-1 sets.
This starts to work only when n is a little bit big. In the case of sets, for example, if your n is at most 2k - 1, then you can take all k-element subsets, because they will automatically be intersecting -- there is no space, just because of volume. So the bound starts to hold when n is at least 2k.
Similarly, in this problem, you can have some sporadic examples when n is small. So their conjecture was that the equality holds for n, let's say, bigger than 4k. There are some small counterexamples, but they disappear once your n becomes sufficiently large. Okay.
So this is the conjecture I want to discuss, and it will lead us to several other interesting questions which I will mention on the way.
So what is known about this conjecture? It's known for small values of k, namely k equals two and three; this was proved by Manickam, Miklós, and Singhi themselves. It's also true if your n is really large as a function of k -- bigger than an exponential function of k, say k to the k; this is also due to Manickam, Miklós, and Singhi.
Recently, this bound was improved by Tyomkyn, a Ph.D. student of Béla Bollobás in Cambridge. And so what I want to present here today -- so that's basically all that is known. There is another remark I want to mention, which shows you that this function has slightly strange behavior: we know that the conjecture is true for infinitely many n, in particular for all k which divide n. In a second I will explain why this is important. So somehow we know it when k divides n, and we don't know it otherwise.
So who did this? Yeah, I think it's maybe just kind of folklore. I don't have a reference, but it may be in somebody's paper; you'll see the proof is not very difficult.
So what I want to discuss today is this theorem with Noga and Hao: the conjecture holds already for n moderately large, namely n bigger than 33 k squared. Okay. And you'll see where the k squared comes from.
I think what's interesting -- so first it's interesting that we reduce it to something which is polynomial in k, but what's even more interesting is the way we prove it, because, as I said, it will lead to some other questions. Okay.
So that's the first theorem which I want to mention. So this proves the conjecture in this range.
Now, as is customary for extremal problems, you not only want to know the extremal value of your function, but you also want to know the structure of the extremal example. You can look at this extremal example and see that it has an interesting feature: there is one big number and lots of small numbers, and this big number is so big that any k-sum which contains it is nonnegative.
So to state the second result which we have, let me give a definition first. Call x_i large if every k-sum including x_i is nonnegative. Okay.
So that's the example which we have: the example which gives you n-1 choose k-1 has a large number, x_1. You can always assume that the large number is the first number. If some number is large, then obviously you cannot avoid having n-1 choose k-1 nonnegative k-sums, and this example shows that this is tight. So our second theorem says that if no number is large, then the number of nonnegative k-sums goes up: you cannot get n-1 choose k-1; you actually get almost twice as much.
So let me write the precise formula. It's n-1 choose k-1, as before, plus n-k-1 choose k-1, minus 1. Okay.
And again, here I assume that n is much bigger than k squared. So I am assuming my n is large, but not very large. Okay.
So if n is much bigger than k squared, then the second term is also roughly n-1 choose k-1, so you are getting twice as much. But the purpose of writing this exact formula is that it is tight: there is an example with no large number which achieves exactly this value, so the bound cannot be improved. So we actually not only show that the only extremal example looks like this, with one large number; in a sense we also know what the next extremal configuration is. Okay.
>>: (Indiscernible) true for all (indiscernible)?
>> Benny Sudakov: Probably. We didn't really check. So again, the example works, yeah; probably it holds already for n bigger than a constant times k. But in our proof, you will see, there are so many places which require k squared that I don't think we would be able to do anything about it.
So let me say a few words about the proofs, and why they lead to fractional covers and to probabilities of small deviations.
Okay. So one lemma, which I mentioned, is that f(n, k) equals n-1 choose k-1 if k divides n. This is true. Okay. So here's the proof. There are several proofs -- you can give a probabilistic proof -- but one really short proof uses a very nontrivial theorem, and I want to mention that theorem; I think it's nice. It is Baranyai's theorem: it is known that all k-subsets of an n-element set can be partitioned into n-1 choose k-1 families, each of which is a perfect matching.
So what I mean by this: there are families A_{t,j}, where t runs from 1 up to n/k and j runs from 1 up to n-1 choose k-1, each A_{t,j} of size k, and A_{t,j} and A_{t',j} disjoint for t not equal to t'.
That's why you need n to be divisible by k: you want n/k disjoint k-sets which cover the whole ground set. So you take the universe of all k-element subsets of your n-element set, and you can split it into matchings -- a matching being a collection of disjoint sets, each of size k, so here each matching consists of n/k sets and covers all the elements -- and these disjoint matchings together exhaust all k-element subsets. That's a pretty nontrivial theorem, even in the case k equals two, for the complete graph: take a complete graph on 2n vertices and show that its edges can be split into disjoint perfect matchings. Even that requires some proof, but when you go to hypergraphs, it's really not easy.
There is a very beautiful proof by Baranyai which uses a network flow theorem. And now, of course, how do you get the result from this? You get it because the sum of the x_i over all elements i from 1 up to n, which is nonnegative, is equal to the sum over t of the k-sums over the sets A_{t,j}, for any fixed j.
Take one matching and note that the total sum just splits into these disjoint k-sums, n/k of them. And since the sum of all the numbers is nonnegative, at least one of these guys is nonnegative.
>>: (Indiscernible).
>> Benny Sudakov: (Indiscernible). So this is a matching.
>>: (Indiscernible).
>> Benny Sudakov: Ah, yes -- t. The index here is t. So I have A_{1,j}, then A_{2,j}, up to A_{n/k,j}.
So this is one matching: the sum of the numbers here, plus the sum of the numbers here, plus the sum of the numbers here, gives you the sum of all the numbers, and one of these sums is nonnegative. And since the number of matchings is n-1 choose k-1, this gives you the result.
But you can prove it without using Baranyai's theorem. Okay?
>>: (Indiscernible), right? Why can't you just take a random partition?
>> Benny Sudakov: Right -- if I don't want to use Baranyai, I can take a random order of the elements, split it into sets of size k, and do double counting. That's true. That's true. But in order to get the right number, it's important that n is divisible by k. Okay.
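(An empirical illustration of this lemma, not from the talk: for random instances with k dividing n and nonnegative total sum, the number of nonnegative k-sums should never fall below C(n-1, k-1). The instance generator below is an arbitrary choice; it only spot-checks random inputs and is of course not a proof.)

```python
import random
from itertools import combinations
from math import comb

# Count nonnegative k-sums of a sequence and compare with C(n-1, k-1),
# assuming k divides n and the total sum is nonnegative.
def count_nonneg_k_sums(xs, k):
    return sum(1 for idx in combinations(range(len(xs)), k)
               if sum(xs[i] for i in idx) >= 0)

n, k = 9, 3
random.seed(0)
for _ in range(200):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    xs[0] -= sum(xs) - 1e-9          # shift x_0 so the total sum is (just) nonnegative
    assert count_nonneg_k_sums(xs, k) >= comb(n - 1, k - 1)
print("all random instances have at least", comb(n - 1, k - 1), "nonnegative 3-sums")
```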
Now, why did I want to phrase things in these words? Because it gives you the connection to fractional covers; it at least indicates that it might be useful to use the language of matchings in hypergraphs. What this argument basically uses is the following simple observation: if you take all the k-tuples which give you a negative sum, then these k-tuples cannot contain a perfect matching. There is no perfect matching among the k-tuples with negative sum. That's the only observation which we use, plus Baranyai's theorem: you cannot have disjoint sets which cover the whole ground set and all give you negative sums, because then the sum of all the numbers would be negative. Okay.
So what is our approach to this conjecture? We do the following. Think about your numbers; for simplicity, it is convenient to think of them as ordered, with x_1 the largest. Okay. First, check whether there is a big number -- a large number, as I called it. Check whether x_1 plus x_{n-k+2} plus ... plus x_n -- so x_1 together with the k-1 smallest numbers; let me check the indices, yes, they run from n-k+2 up to n -- is nonnegative. If this is already nonnegative, then x_1 is large in my definition, and all k-sums involving x_1 are nonnegative.
So we're done: we get n-1 choose k-1 of them. So we can assume that this does not happen. So now suppose the opposite: x_1 plus x_{n-k+2} plus ... plus x_n is actually negative.
Then I can take away these k numbers -- x_1 and the k-1 smallest -- and look at the remaining numbers, x_2 up to x_{n-k+1}: the remaining n-k numbers. Since the numbers I removed have negative sum and the total sum is nonnegative, the remaining numbers still have nonnegative sum. Okay.
Now, I know that if the number of remaining numbers were divisible by k, I would have many nonnegative k-sums among them -- that was the lemma. There are n-k numbers here. If this count is not divisible by k, I play the same game: I throw away a few more of the smallest numbers, fewer than k of them, so that the count becomes divisible by k; when I do this, the sum remains nonnegative. So here I can find at least n-2k numbers whose sum is nonnegative and whose count is divisible by k, and I can use the lemma. Since the number of numbers I have is at least n-2k, I get at least roughly n-2k choose k-1 nonnegative k-sums.
So what is the idea now? The idea is that I have this x_1; it's a big number. So probably there are lots of k-sums containing x_1 which are also nonnegative, and I can put them back into the count. And the idea is indeed to show that there are many of them. So before stating the goal, let me just give you a rough estimate of how much we are off. So let me just --
>>: (Indiscernible) with x_1?
>> Benny Sudakov: (Indiscernible).
>>: You multiply by two. Adding x_1 --
>> Benny Sudakov: No, no, these are k-sums.
>>: Ah, k-sums.
>> Benny Sudakov: These are k-sums. What you --
>>: (Indiscernible).
>>: n minus 2k plus one there, because you know that n itself is not divisible by k.
>> Benny Sudakov: Yes.
>>: So to get to a divisible number, you would --
>> Benny Sudakov: Oh, plus one. I don't care. But I am losing something somewhere, indeed. So here I can have n - 2k + 1, but I don't care. But it's not true that every k-sum here --
>>: (Indiscernible) by x_1.
>> Benny Sudakov: That's a good idea; that's what we tried. So you take a k-sum here, throw away its largest element, and put x_1 inside. Of course it will remain nonnegative. But then many k-sums can give you the same set: it can be that x_5 plus some guys is nonnegative, and x_7 plus the same guys is nonnegative, and if I replace both x_5 and x_7 with x_1, I get the same set. I am not able to control the overcounting. So that's the big problem here.
But just to give you an idea of how much we are below what we want: instead of n-1 choose k-1, we have roughly n-2k choose k-1, and this is roughly (1 minus a constant times k squared over n) times n-1 choose k-1, because the ratio is about ((n-2k)/n) to the power k-1. And actually the constant c here is small, about two.
But I will not really bother to show you where exactly 33 k squared comes from; just remember that if your n is much, much bigger than k squared, this loss is small. So most of n-1 choose k-1 I already have, and I'm trying to put back the missing fraction -- about ck^2/n of it -- using k-tuples that contain x_1. Okay.
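(A back-of-the-envelope check of this loss estimate, not from the talk: the ratio C(n-2k, k-1) / C(n-1, k-1) compared with 1 - 2k^2/n for a few illustrative pairs (n, k) with n much larger than k^2; the constant 2 is only an order-of-magnitude stand-in for the c above.)

```python
from math import comb

# Compare the exact ratio with the rough estimate 1 - 2*k^2/n.
for n, k in [(400, 3), (1000, 5), (4000, 10)]:
    ratio = comb(n - 2 * k, k - 1) / comb(n - 1, k - 1)
    print(n, k, round(ratio, 4), round(1 - 2 * k * k / n, 4))
```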
So the goal is to show that there are many nonnegative k-sums involving x_1. Okay.
>>: (Indiscernible) the most frequent element, and that's already --
>> Benny Sudakov: It doesn't work. Somehow you don't get a good estimate.
>>: (Indiscernible) k squared or something.
>> Benny Sudakov: So what --
>>: (Indiscernible) one element (indiscernible) has the most frequent givens --
>> Benny Sudakov: You lose a factor there. Yeah.
So, you know, we have maybe 35 more minutes left; we can all try to think about how to do this. [Laughter.] It gets complicated. Somehow we were not able to control this double counting.
So the idea is really to analyze the k-sums involving x_1. How will I analyze them? Let's do the following. Let's count the k-tuples involving x_1 which are negative, and let's show that there cannot be too many of those. Then, in the complement, there are many which are nonnegative.
To do this, let me define a (k-1)-uniform hypergraph H on the ground set {2, ..., n}: a set I of size k-1 is an edge if the sum of the x_i over i in I, plus x_1, is negative.
So my hyperedges are exactly the (k-1)-sets which I cannot add to x_1.
And now the goal is to show that the number of edges of H -- it has n-1 vertices and edges of size k-1, so it could have up to n-1 choose k-1 edges -- is at most one minus some constant delta, times n-1 choose k-1. Okay.
What will this imply? It will imply that there are delta times n-1 choose k-1 nonnegative k-sums involving x_1. To these I can add the (1 - ck^2/n) times n-1 choose k-1 sums coming from x_2 up to x_{n-k+1}. So if I add these numbers, I already get (1 - ck^2/n + delta) times n-1 choose k-1, which beats n-1 choose k-1. And you actually see why I needed to assume that my n is at least a constant times k squared: I wanted the ratio ck^2/n to be small compared to the constant delta. And delta will be some explicit constant, like one over 33.
Okay. So that's the idea.
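(A small sketch of this bookkeeping, not from the talk: for an illustrative instance with nonnegative total sum in which x_1 is not large, the snippet below lists the "bad" (k-1)-sets -- the edges of H -- so that C(n-1, k-1) minus the number of edges is the number of nonnegative k-sums containing x_1. The numbers chosen are arbitrary.)

```python
from itertools import combinations
from math import comb

# Edges of H: (k-1)-subsets of {2, ..., n} whose sum together with x_1 is negative.
def bad_hypergraph_edges(xs, k):
    x1, rest = xs[0], xs[1:]
    return [idx for idx in combinations(range(len(rest)), k - 1)
            if x1 + sum(rest[i] for i in idx) < 0]

n, k = 12, 3
xs = sorted([3.0, 2.5, 1.0, 0.5, 0.2, 0.1, -0.3, -0.6, -0.9, -1.2, -1.8, -2.0],
            reverse=True)                      # nonnegative total sum, x_1 not "large"
H = bad_hypergraph_edges(xs, k)
print(len(H), "bad (k-1)-sets out of", comb(n - 1, k - 1))
print("nonnegative k-sums containing x_1:", comb(n - 1, k - 1) - len(H))
```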
Now, how am I going to deal with this hypergraph? From now on, the main question is to deal with this hypergraph H. So H consists of all sets of size k-1, with elements from {2, ..., n}, which together with x_1 give you a negative sum.
So here is how matchings come in; the notion of matching is pretty useful here. Let me remind you of the definition of nu of H. Given the hypergraph H, nu(H) is the size of a maximum matching. What is that? A matching is a set of disjoint edges. So if you have a (k-1)-uniform hypergraph on about n vertices, the maximum matching can have size up to about n divided by k-1. And the first observation, which will be crucial here, is that the matching in our H cannot be anywhere close to this. You have a gap -- a linear gap. You cannot have a big matching in this hypergraph. So, claim: nu(H) is less than n over k.
Why is this so? Here is the proof. Suppose I have a matching in this hypergraph: I_1, ..., I_t. So these are disjoint sets of size k-1 such that the sum of x_i over i in I_j, plus x_1, is less than zero.
So suppose I have many disjoint (k-1)-tuples which together with x_1 still give you a negative number. Then I claim I get a contradiction: I claim that the sum of all the numbers is negative. Why? Let's take the sum of all the numbers and split it: the numbers which fall inside the sets of the matching give the sum over j from 1 up to t of the sum of x_i over i in I_j; plus the sum of the x_i where i does not belong to the union of the I_j's -- the numbers outside.
Now let's do the math. I claim that this is less than the following. Inside every edge of this matching, by definition, the sum of the numbers is smaller than minus x_1. So all these guys give me minus x_1 or less, and here I have at most minus x_1 times t. Okay.
All the guys outside -- remember that x_1 is the biggest number -- are each at most x_1, and there are n minus t times (k-1) of them, so they give at most (n - t(k-1)) times x_1.
Now, the claim is that if you substitute t equals n over k, you get a contradiction, because this bound equals (n - t(k-1) - t) times x_1, which is (n - tk) times x_1, and for t equals n over k you get zero. So the total sum is less than zero -- impossible. So this hypergraph does not have a matching of size n over k; it is not even close to having a perfect matching. Instead of about n over k-1, its maximum matching has size less than n over k.
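(A brute-force check of this claim on the same illustrative instance, not from the talk: the exhaustive search below confirms that the hypergraph of bad (k-1)-sets has no matching of size n/k; it recomputes everything so it can be run on its own, and it only works for tiny n.)

```python
from itertools import combinations

# Exhaustive maximum matching: try each edge, recurse on the edges after it
# that are disjoint from everything already used.
def max_matching(edges, used=frozenset()):
    best = 0
    for i, e in enumerate(edges):
        if used.isdisjoint(e):
            best = max(best, 1 + max_matching(edges[i + 1:], used | set(e)))
    return best

n, k = 12, 3
xs = sorted([3.0, 2.5, 1.0, 0.5, 0.2, 0.1, -0.3, -0.6, -0.9, -1.2, -1.8, -2.0],
            reverse=True)
x1, rest = xs[0], xs[1:]
H = [idx for idx in combinations(range(len(rest)), k - 1)
     if x1 + sum(rest[i] for i in idx) < 0]
print(max_matching(H), "< n/k =", n / k)   # the argument above shows this is always < n/k
```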
So now you see the nature of the question we arrive at. You have a (k-1)-uniform hypergraph which, instead of having the maximum possible matching of size about n over k-1, has matching number which is smaller. What can you tell me about the number of its edges?
I want to conclude that this hypergraph cannot have close to all possible edges -- that you lose some constant fraction. Okay.
So let's leave this aside and ask the following question. This is apparently a question which was asked well before us: it was asked by Erdős in 1965. It's an old question, and it says the following. H is a (k-1)-uniform -- I'll stick with k-1; actually, let me put r here and later choose r to be k-1 -- an r-uniform hypergraph on n vertices with no matching of size t. So t is the forbidden matching size.
What can you say about the number of edges? I forbid you to have t disjoint edges; how many edges can you have?
And there are two very simple competing configurations which, if you think for a second, you can easily come up with. So what is the first configuration?
One configuration: you take a clique -- a complete r-uniform hypergraph -- but on so few vertices that there is simply no space to put t disjoint edges in it.
Okay. So what is this clique? You take t times r, minus 1, vertices -- so there is no space for t disjoint edges -- and you take all r-element subsets. That's one construction: just a clique. Okay.
The other construction: take a hypergraph which has a hitting set of size t minus 1. Just take t minus 1 vertices and all r-tuples which intersect this set. Obviously, there is no way in this configuration to have t disjoint edges, because each edge uses one of the t-1 vertices.
Now let's write the number of edges. I will write it as n choose r minus something -- this is why I'm saying hitting set of size t minus 1: from the total n choose r, I delete all the edges which miss this hitting set of size t minus 1, so it's n choose r minus (n - t + 1) choose r. Okay.
So again, where did I get this: I fix some set of size t minus 1 and take all r-tuples which hit this set; so I take the total number of r-tuples and delete the ones which do not hit it.
Okay. So these are the two competing configurations, and the conjecture is that the number of edges of H is always less than or equal to the maximum of these two. Okay.
And now, if you take t equals two -- which says that there are no two disjoint edges, which says exactly that your hypergraph is intersecting -- you get the Erdős-Ko-Rado theorem, because indeed n choose r minus (n-1) choose r is exactly n-1 choose r-1. That is the answer for one range of n, and for the other range it's 2r-1 choose r, because there the clique is bigger. Okay.
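(A quick numerical look at this t = 2 case, not from the talk: by Pascal's rule C(n, r) - C(n-1, r) = C(n-1, r-1), so the hitting-set construction reproduces the Erdős-Ko-Rado bound exactly, while the clique value C(2r-1, r) dominates for small n. The choice r = 4 is arbitrary.)

```python
from math import comb

# Compare the two constructions with the Erdős-Ko-Rado bound C(n-1, r-1).
r = 4
for n in range(r, 20):
    clique = comb(2 * r - 1, r)
    cover = comb(n, r) - comb(n - 1, r)
    print(n, clique, cover, comb(n - 1, r - 1))   # cover equals C(n-1, r-1) exactly
```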
So this is a conjecture which Erdős made, and this conjecture is wide open. So we felt it would be good to have it, but then we immediately saw that basically nothing is known. It's open since '65, and it's only known for t which is much, much smaller than n over r. You see, for our range we needed something like t equal to n over (r+1), since we have a (k-1)-uniform hypergraph. But it's only known for t less than about n over r cubed, with some constant. So it's known, but in a range which is really too small for us.
But before we proceed, let's just look for a second at this conjecture and see what it suggests.
So what is my setting? My r is k-1 -- I have a (k-1)-uniform hypergraph -- and my t, the size of the matching, is approximately n over (r+1). Let's forget that my number of vertices is actually n-1 instead of n; that is not relevant.
So you take this formula and substitute t equals n over k and r equals k-1. Let's see what this gives. It gives n choose k-1, minus, n minus n over k plus one, choose k-1. And if you compare these two terms, this is roughly one minus (1 - 1/k) to the power k-1, times n choose k-1.
Okay. So that's the maximum number of edges you can have, and the important thing to notice is that (1 - 1/k) to the power k-1 is about one over e. So, at least if the conjecture were true, it would say the following: if you take a (k-1)-uniform hypergraph which does not have a matching of size n over k, then the maximum number of edges in such a hypergraph is one minus one over e, times n choose k-1. This one over e would be my delta -- n minus 1 or n, it's not important.
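(The substitution just described, evaluated numerically as a sanity check, not from the talk: with t = n/k and r = k-1, the hitting-set construction covers roughly a 1 - (1 - 1/k)^(k-1) fraction of all (k-1)-sets, and (1 - 1/k)^(k-1) approaches 1/e. The values of k and n = 100k below are arbitrary.)

```python
from math import comb, exp

# Fraction of (k-1)-sets used by the hitting-set construction at t = n/k.
for k in (5, 10, 20, 50):
    n = 100 * k
    t = n // k
    frac = (comb(n, k - 1) - comb(n - t + 1, k - 1)) / comb(n, k - 1)
    print(k, round(frac, 4),
          round(1 - (1 - 1 / k) ** (k - 1), 4),
          round(1 - 1 / exp(1), 4))
```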
Good. So that's the conjecture. Now, what can we do? We can do the following. There are three observations to make. First of all, we don't need this conjecture precisely: we want something like it, but we can live with some smaller constant here; we don't need one over e. And the second important observation: we actually don't need the integer version; we can work with the so-called fractional version.
So let me give you the definition. Nu star of H is the following. Instead of a matching, which basically gives weight one to some edges in such a way that every vertex is covered at most once, I now give fractional weights to the edges. So nu*(H) is the maximum of the sum of omega(e) over all edges e, where omega(e) is between zero and one, subject to: for every vertex v of H, the sum of omega(e) over all edges e containing this vertex is at most one.
And what I didn't prove, but what I can prove using roughly the same computation as before -- it's a really simple computation -- is that I can put a star here: if I look at all the (k-1)-tuples which together with x_1 give me a negative k-sum, this hypergraph actually has even fractional matching number at most n over k.
Okay. So now I want to attack this problem: I want to prove at least a fractional version of the Erdős matching conjecture, and even that only approximately. Before I show you how to do this, and why it is relevant to this question about probabilities of small deviations, let me just remind you that, by linear programming duality, the fractional matching number is equal to the fractional cover number. The fractional cover is the dual problem: it's the minimum of the sum of g(v) over all vertices of H, where again the weight of each vertex is between zero and one. So I am trying to hit all the edges, but now I can split the weights of the vertices: I take the minimum of the sum of the weights of all vertices -- the weights are fractional, between zero and one -- such that for every edge e, the sum of the weights of the vertices inside this edge is at least one. So I want every edge to be hit with total weight at least one.
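(A minimal LP-duality sketch, not from the talk, assuming NumPy and SciPy are available: on a small made-up 3-uniform hypergraph, the two linear programs below compute the fractional matching number nu* and the fractional cover number tau* and show that they coincide, as the duality statement says.)

```python
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1, 2), (2, 3, 4), (4, 5, 0), (1, 3, 5)]   # a toy 3-uniform hypergraph
n_vertices = 6
A = np.zeros((n_vertices, len(edges)))                 # vertex-edge incidence matrix
for j, e in enumerate(edges):
    for v in e:
        A[v, j] = 1

# nu*: maximize total edge weight, every vertex covered with weight at most one.
matching = linprog(c=-np.ones(len(edges)), A_ub=A, b_ub=np.ones(n_vertices),
                   bounds=[(0, 1)] * len(edges), method="highs")
# tau*: minimize total vertex weight, every edge hit with weight at least one.
cover = linprog(c=np.ones(n_vertices), A_ub=-A.T, b_ub=-np.ones(len(edges)),
                bounds=[(0, 1)] * n_vertices, method="highs")
print(-matching.fun, cover.fun)   # equal by LP duality (both 2.0 here)
```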
And now we are at the final step which I want to prove. So what is the setting? H is an r-uniform hypergraph -- remember, my r is k-1 -- with n vertices. Okay. Now, the fractional matching number, instead of being n over r, which is the maximum possible, is at most n over (r+1), because it was n over k. Yes.
And from here I want to deduce that the number of edges of H is at most one minus delta, times n choose r, where delta is a fixed constant.
So I want to say: if your fractional matching falls short of the maximum possible -- if you lose about an n over r squared term, a one over r fraction of it -- then you already lose a constant fraction of the edges. Okay.
So here is the idea of the proof, and then you'll see why it leads to probabilities of small deviations.
Remember that the fractional matching number equals the fractional cover number. So let's look at a fractional cover which achieves this value. What do I have? I have a function g; this function is given. The sum of g(v) over all vertices is less than or equal to n over (r+1), and for every edge, the sum of the weights of the vertices inside this edge is at least one.
Now let me create random variables. Okay. I will create r independent, identically distributed random variables, X_1 up to X_r, as follows: X_i equals g(v) with probability one over n for each vertex v. So the experiment is very simple: you pick a random vertex, you look at its weight in this fractional cover, and that is the value of your X_i. Okay.
So these are my random variables. What do I know? I know that the expectation of X_i -- let's say my cover has total weight exactly n over (r+1); it's just, I can --
>>: (Indiscernible).
>> Benny Sudakov: Sorry?
>>: (Indiscernible)?
>> Benny Sudakov: No, no. Just n vertices. You pick one vertex uniformly at random, you see what its weight in the cover is, and that's your value, each vertex with probability one over n. And all these random variables are the same. So the expectation of X_i is just the sum of g(v) divided by n, which is at most one over (r+1). Okay.
So what is the question I want to ask? The obvious question is the following: what is the probability that the sum of these variables, X_i for i from 1 up to r, is less than one? Why? Because if you forget for a second that when I pick random vertices I can have collisions -- if you assume that these r random vertices are all distinct -- then they form an r-tuple, and if the sum of their weights is less than one, then, because g is a fractional cover, this r-tuple cannot be an edge of H.
So what I'm saying -- and this can be made rigorous -- is that this probability, up to an error of about r choose 2 over n coming from collisions of the vertices, equals the fraction of r-tuples which are not edges of H, which is exactly the delta I'm interested in. And the inequality goes in the right direction.
So why r choose 2 over n? Because there are r choose 2 pairs among the random vertices, any pair can return the same vertex, and the probability of that is one over n. So up to this error term, I get a uniformly random r-tuple of distinct vertices, and whenever the sum of its weights is less than one, it is a non-edge. That's what I'm counting.
So this probability is what I'm interested in. And again, you see, this is the place where I need my n to be bigger than a constant times k squared: I want this error term to be small, because I will show you that the probability itself is, I don't know, one over 13 or something.
So now I have this question. I have i.i.d. nonnegative random variables whose expectation is one over (r+1). There are r of them, so the sum of the expectations is a constant less than one. And I'm asking: what is the probability that their sum is less than one? I want to show you that this probability is at least a constant. If I show you that this is a constant, I am done.
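(A Monte Carlo illustration of this probabilistic question, not from the talk: three arbitrary nonnegative distributions with mean 1/(r+1) -- constant, exponential, and a 0/1 "spike" -- and the estimated probability that the sum of r independent copies stays below one. The point is only that the probability stays bounded away from zero for these examples; nothing here is a proof.)

```python
import random

# Estimate P(X_1 + ... + X_r < 1) for i.i.d. nonnegative samples with mean 1/(r+1).
def estimate(sample_one, r, trials=100_000):
    hits = sum(1 for _ in range(trials)
               if sum(sample_one() for _ in range(r)) < 1)
    return hits / trials

r = 9
mean = 1 / (r + 1)
dists = {
    "constant":    lambda: mean,
    "exponential": lambda: random.expovariate(1 / mean),
    "spiky 0/1":   lambda: 1.0 if random.random() < mean else 0.0,
}
random.seed(1)
for name, d in dists.items():
    print(name, estimate(d, r))
```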
Now, when we got here --
>>: (Indiscernible).
>> Benny Sudakov: (Indiscernible). When we got here, we started thinking about this question. It looks like a question which should interest people in probability. And then, you know, very fast we realized that people in probability do know this question, but they could say very little about it, because you don't assume anything about your random variables: you don't assume the existence of a second moment, you don't assume anything about the variance.
So apparently this question was floating around, and it was in particular posed by Uriel Feige, who also had an application of this question to some computer science problem. And luckily for us, he could prove some estimate -- and this will lead to some further interesting things, so I will tell you more about it -- he could estimate this probability.
So let me state the question precisely, in Feige's setting. You have random variables, let me call them Y_1 up to Y_m -- and later we will reduce our problem to this setting. The expectation of each Y_i is at most one. There are m of them. The Y_i are nonnegative and independent; he doesn't even assume that they are identically distributed, so his setting covers ours. And he asks: what is the probability that the sum of the Y_i is less than m plus delta? Okay.
So, in our case, my Y_i would be (r+1) times X_i, and my delta would be one: if you take Y_i to be (r+1) times X_i, then the expectation of each Y_i is at most one, so m is r, and here you have r plus one instead of m plus delta, so your delta is one. In this setting, delta is one.
And what he proved: he did prove a constant lower bound.
His constant, of course, we can just substitute there and finish our problem. But let me say a few words about the question itself, because I think this is interesting.
So first of all, why is a minimum of certain expressions the nature of the answer to this problem? What do you have: you have m independent nonnegative random variables, each with expectation at most one, and you want to say that, with probability bounded away from zero, the sum of these random variables is almost as small as the sum of the expectations -- less than m plus some constant delta.
So what are the possibilities? The first possibility, which gives delta over (delta plus one), comes from the following: you take Y_1 to be one plus delta with probability one divided by one plus delta, and zero otherwise, and then Y_2 up to Y_m all constantly equal to one. That's a very simple choice of random variables. And let's use strict inequality here, exactly like in my case.
Then the only way to reach m plus delta is for Y_1 to be one plus delta, which happens with probability one divided by one plus delta. So the probability that it doesn't happen is exactly delta divided by delta plus one. So that's one configuration.
The other configuration is: all the Y_i are the same, and each equals m plus delta with probability one divided by m plus delta, and zero otherwise. Okay. Then the only way for the sum to be less than m plus delta is to have them all zero -- zero is the other value. So here the probability is (one minus one divided by m plus delta) to the power m, which is about one over e. And this is the one over e you remember from the Erdős matching conjecture -- the bound you were supposed to get based on that conjecture. And it actually should be like that, because this statement, if I knew it, would imply the fractional version of the Erdős conjecture in the appropriate range. Okay.
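(The two competing configurations just described, evaluated numerically, not from the talk, with delta = 1 as in our application: the first gives delta/(1+delta) = 1/2, and the second gives (1 - 1/(m+delta))^m, which tends to 1/e as m grows.)

```python
from math import exp

delta = 1.0
print("first configuration :", delta / (1 + delta))
for m in (5, 20, 100, 1000):
    print("second, m =", m, ":", round((1 - 1 / (m + delta)) ** m, 4),
          "  1/e =", round(1 / exp(1), 4))
```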
And indeed, the relevant range for us is this second one, because our delta is big -- our delta is one, so delta over (delta plus one) is a half, which is bigger than one over e. So the conjectured bound here is one over e, but Feige could prove only some constant, which is again good enough for us.
Now, I still have five more minutes of your time, and this is not the end of the story. Okay. When we started looking at this, we found that this problem has many other applications which I don't have time to mention. This fractional Erdős matching conjecture, if I knew it, would let me solve some other long-standing open problems in extremal combinatorics. For example, one question which people ask: for graphs things are known, but give me a hypergraph -- say a 3-uniform hypergraph -- with a certain minimum degree. Degree here you can think of as the number of edges containing a given vertex, or there is a codegree version, the number of edges containing a given pair; so there are two versions of this problem, and both are open.
You give me a hypergraph with a certain minimum degree, and you ask: what value of this degree guarantees that your hypergraph has a perfect matching? People settled this question for graphs; for hypergraphs it's an open problem, and there is a conjecture for what the value of this degree should be. And of course, not only for 3-uniform: I can ask this question for r-uniform hypergraphs, and there is a conjecture there too. And if I had the fractional version of the Erdős matching conjecture, I could solve this problem: there is a reduction which immediately allows me to solve it.
So this conjecture is very interesting, and we started looking at it. We could get some further results which I have no time to mention here. But on the way, we actually discovered -- and this is what I want to finish my talk with -- that well before Feige, as happens many times in mathematics with good problems, people had thought about this. Well before Feige, there was a statistician -- he is retired now, at Purdue -- whose name is Samuels, who for his own reasons got interested in this question. And he actually has a very precise, very nice conjecture, a much more general statement, of which, of course, Feige's question is a special case. And let me just state this conjecture.
So what Samuels asked is the following, and this is in the '60s. You have some numbers mu_1 less than or equal to mu_2, and so on, up to mu_l. These are your expectations, and they are such that the sum of the mu_i is strictly less than one. So you have a bunch of expectations whose sum is strictly less than one. And you have X_1 up to X_l, nonnegative, independent, with the expectation of X_i equal to mu_i. And you ask what is the minimum --
>>: (Indiscernible).
>> Benny Sudakov: And you ask: what is the minimum of the probability that X_1 plus ... plus X_l is less than one? Okay. So what is the minimum over all such random variables?
And his answer is: I will write you a bunch of polynomials -- in a second I will tell you what these polynomials are; they are very intuitive, very much in the spirit of the question -- and the answer is the minimum of these polynomials. They depend on a parameter T, and they are polynomials in the numbers mu_1 up to mu_l.
So what are these polynomials? Q_T of mu_1 up to mu_l equals the following: it's the product, over i running from T+1 up to l, of one minus mu_i divided by (one minus the sum of mu_j for j running from one up to T).
If you think for a second about where this creature comes from, and you remember what I told you for Feige's question, then you will understand what Q_T is. Q_T is obtained as follows. You take your X_1 to be mu_1, and so on, X_T to be mu_T -- just constant random variables.
And then for every i bigger than T, you take X_i to take exactly the value which complements these constants to one: the value one minus the sum of mu_j, j from one up to T, with probability exactly what it should be to give you the right expectation, namely mu_i divided by (one minus the sum of mu_j, j from one up to T), and zero otherwise.
Okay. So you take the first T random variables to be constants, and each of the remaining random variables takes a value such that if you add it to these constants you get one. So the only way to have the sum less than one is to have all these remaining random variables equal to zero, and the probability that they are all zero is exactly this expression Q_T. And what Samuels says is: among these polynomials, take the one which is minimal, and that is the answer.
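(A small sketch of Samuels' polynomials, not from the talk: for an arbitrary choice of expectations mu_i with sum less than one, the snippet computes Q_T for each T and checks by Monte Carlo that the construction just described attains Q_T; the conjectured minimum is then the smallest of these values.)

```python
import random

# Q_T = product over i > T of (1 - mu_i / (1 - sum of the first T mus)).
def Q(T, mus):
    s = sum(mus[:T])
    prod = 1.0
    for mu in mus[T:]:
        prod *= 1 - mu / (1 - s)
    return prod

# Simulate the construction: first T variables constant, the rest take the
# value 1 - s with probability mu/(1 - s) and zero otherwise.
def simulate(T, mus, trials=200_000):
    s = sum(mus[:T])
    hits = 0
    for _ in range(trials):
        total = s
        for mu in mus[T:]:
            if random.random() < mu / (1 - s):
                total += 1 - s
        hits += total < 1
    return hits / trials

mus = [0.05, 0.1, 0.15, 0.2, 0.25]          # expectations, sum 0.75 < 1
random.seed(2)
for T in range(len(mus)):
    print(T, round(Q(T, mus), 4), round(simulate(T, mus), 4))
print("conjectured minimum:", round(min(Q(T, mus) for T in range(len(mus))), 4))
```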
And it's a really nice conjecture by itself -- very beautiful. Of course it implies what we need; ours is a very, very special case. Samuels actually proved his conjecture for small l, namely three and four. That already allows us to get some good mileage out of it for various combinatorial problems, but, as I said, there is much more you could get out of it if you knew it for all values of l. And, as I said, it's beautiful in its own right, so let me stop here. [Applause.]
>>: Comments or questions? (Indiscernible) conjecture -- when you first raised the question, I mean, for the (indiscernible) the Y's, you know, you could immediately restrict them to two values. So then the question really becomes one of linear programming rather than probability.
>> Benny Sudakov: True. I don't know. But maybe you can do it some other way around.
>>: All the (indiscernible) machinery --
>> Benny Sudakov: Either way. Either way. So far it looks like it's more calculus of variations. If you look at his proofs for sums of three and four variables, they are complicated expressions which you need to minimize. But maybe there is, I don't know, some other transformation of these random variables which leads you to something nicer.
>>: (Indiscernible).
>> Benny Sudakov: But you see, the thing is that when you do this transformation, for example, even if I start with variables which are i.i.d., you destroy that immediately by the transformation. Now, maybe you can do a different transformation and prove this for i.i.d. random variables, which would already be good for my applications.
>>: (Indiscernible) not i.i.d.
>> Benny Sudakov: Yes.
>>: Any other questions, comments? Thank Benny again.
[Applause.]