>> Yuval Peres: Okay. We're delighted to have Gil Kalai from the
Hebrew University, as well as Yale and MSR Herzliya, come and give us
a glimpse of his extremely influential work on Boolean functions.
>> Gil Kalai: Thank you very much, Yuval. It's great to be here. I
will talk about Boolean functions and mainly about open problems;
after some iterations of discussion, this is what the title means. I
won't assume you know the topic, but I'll assume you're not an entire
stranger to it; and if you are, you can ask questions at any time. So
I want to mention several problems and tell you a little bit about
what I know about them. The main talk will be about Boolean
functions, functions from {0,1}^n to {0,1}, and sometimes we'll talk
about real functions on {0,1}^n, the discrete cube. Of course, you
can identify {0,1}^n, the vectors of length n with entries 0 and 1,
with the subsets of an n-element set, and a Boolean function can be
identified with a subset of the discrete cube. So if you do these two
identifications together, a Boolean function is also identified with a
collection of subsets of an n-element set, and there are many problems
stated in each of these languages; moving between them is useful.
Now, I first want to mention the probability measure on {0,1}^n: let's
denote by mu_p(x_1, ..., x_n) the quantity p^k (1-p)^(n-k), where
k = x_1 + ... + x_n. I call this the Bernoulli measure; it is the
product probability distribution on the discrete cube, and k is the
number of 1s. Of course, if p is equal to one-half, then this is the
uniform probability measure on the discrete cube.
Given a Boolean function f, we will denote by I_p(f) the sum over x in
Omega_n of h(x) mu_p(x), where Omega_n = {0,1}^n and h(x) is the
number of y such that the distance between x and y is 1 -- so y
differs from x in one coordinate -- and f(x) is not equal to f(y). So
here is our discrete cube; it has many points, and we have the
function f: say f is 1 here and f is 0 there. For every point we look
at its neighbors and count at how many neighbors the value of f is
changed; this number is between 0 and n, and we sum it over all points
with the probability weight of the Bernoulli measure. This is called
the total influence, and it's also called the average sensitivity, or
the edge expansion; we are rich in names. And there's a very basic
inequality, which says that p I_p(f) is greater than or equal to
t log_p(t), where t is the measure mu_p(f), that is, mu_p of the set
of all x such that f(x) is 1.
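These definitions are easy to check by brute force on small n. The
sketch below (my own illustrative code, not from the talk) computes
mu_p(f) and I_p(f) directly from the definitions and verifies the
equality case of the isoperimetric inequality for a subcube:

```python
import math
from itertools import product

def measure(f, n, p):
    """mu_p(f): the Bernoulli measure of the set {x : f(x) = 1}."""
    return sum(p**sum(x) * (1 - p)**(n - sum(x))
               for x in product((0, 1), repeat=n) if f(x))

def total_influence(f, n, p):
    """I_p(f) = sum_x h(x) mu_p(x), where h(x) counts the neighbors y
    of x (Hamming distance 1) with f(y) != f(x)."""
    total = 0.0
    for x in product((0, 1), repeat=n):
        h = sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != f(x) for i in range(n))
        total += h * p**sum(x) * (1 - p)**(n - sum(x))
    return total

# The subcube f = x_1 AND ... AND x_k is the equality case:
# t = p**k and I_p(f) = k * p**(k - 1), so p * I_p(f) = t * log_p(t).
n, k, p = 5, 2, 0.3
f = lambda x: int(all(x[:k]))
t = measure(f, n, p)
lhs = p * total_influence(f, n, p)
rhs = t * math.log(1 / t) / math.log(1 / p)   # t * log_p(t)
```

On this example lhs and rhs agree exactly, which is the equality case
discussed next.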
So this is a fairly basic inequality. Another way to write it:
p I_p(f) is greater than or equal to t log(1/t) divided by log(1/p),
where the logs can be taken in base 2 or base e, as long as they
agree. This inequality holds as equality if f is x_1 and x_2 and so
on and x_k. We can think of such an f as a subcube: if this is the
discrete cube, this is a subcube of dimension n minus k. For this
subcube, mu_p(f) -- the probability that we have 1s for x_1 up to
x_k -- is p^k, and I_p(f) is equal to k p^(k-1). So p I_p(f) is
k p^k, which is precisely t log_p(t), since log_p(p^k) = k. Okay. So
this is a very basic isoperimetric
inequality. If you want to prove it, there is a famous case that you
may be familiar with, which is for p = one-half: I(f) is greater than
or equal to 2 t log_2(1/t). So if the measure t is one-half, for
example, then we get that I(f) is greater than or equal to 1. This is
a very basic inequality; another way to say it is that if you have a
subset A of the discrete cube, and the size of A is less than
2^(n-1), then the number of edges from A to the complement is at
least the size of A. It's a very basic inequality; there's a
canonical-paths proof that you may know, and there are many other
proofs. With the log it's a different inequality, but still there's a
proof by induction, which may be a four- or five-sentence proof.
Okay. So the first problem
I want to mention -- this is problem one -- is to understand the
approximate cases of equality. What do we mean by an approximate case
of equality? We want to understand cases where p I_p(f) is less than
or equal to a constant times t log(1/t). So this is a sort of
stability result for this inequality. I usually think of this
inequality as a [indiscernible] inequality, but I don't know where
this particular inequality can be traced to, or what's known and
conjectured about these things. So there are several interesting
cases. The first
interesting case is that t -- remember, t is mu_p(f) -- is bounded
away from 0 and from 1, so it is a constant. Here we can divide into
two cases, and the first case is that p is also bounded away from 0
and 1; for example, t is one-half and p is some other fixed constant.
Then there's a theorem of Friedgut which says that in this case --
where we don't have to worry about the p here or about the log here --
if I_p(f) is less than or equal to a constant C, then f is
approximately a junta. Well, a junta is a function which depends on a
bounded number of variables; so let's say f is epsilon-approximately a
junta, and f depends on a number of variables that is exponential in
some expression C over epsilon. So this is a theorem of Friedgut, and
for various applications it follows from the proof of an earlier
theorem by Kahn, Linial and myself. And this deals with this case.
Okay. So the next case is
that p -- you know, Friedgut's theorem is actually okay as long as
log(p) divided by log(n) goes to 0. So p does not need to be a
constant, but it cannot be too small. And for many applications the
interesting case is that p itself behaves like a power of n, and then
there are several other theorems. So here p is no longer bounded away
from 0 and 1; now p is little-o of 1, even of the order of some power
of n, and t is still bounded away from 0 and 1. Then there are
several theorems, but let me state the theorem of Bourgain. It can be
stated as follows. If f is monotone -- monotone increasing -- and in
this case p I_p(f) is less than or equal to a constant C, then there
exists a subset S of the variables such that the conditional
probability mu_p(f | x_i = 1 for all i in S) is greater than or equal
to (1 + delta) t, and the size of S is bounded by a constant K which
depends on C. So we need some quantifiers, and I think the way you
want to write it is: there exists this function K(C), and there exists
a delta, such that this holds. In other words, if you have a function
which is close to optimal for the case where p is little-o of 1 and t
is bounded away from 0 and 1, and the function is monotone, then there
is a set of variables such that if we condition on these variables
being 1, the measure of the set increases by some substantial amount
delta.
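Here is a toy illustration of the kind of conclusion in Bourgain's
theorem (my own example: the function, parameters, and helper names
below are not from the talk). For the monotone function
f = x_0 OR x_1, conditioning the single coordinate x_0 to be 1 boosts
the measure from about 2p all the way to 1:

```python
from itertools import product

def mu_p(f, n, p):
    """mu_p(f) under the Bernoulli measure."""
    return sum(p**sum(x) * (1 - p)**(n - sum(x))
               for x in product((0, 1), repeat=n) if f(x))

def mu_conditioned(f, n, p, S):
    """mu_p(f | x_i = 1 for all i in S): measure of f restricted to
    the subcube where the coordinates in S are fixed to 1."""
    num = den = 0.0
    for x in product((0, 1), repeat=n):
        if all(x[i] == 1 for i in S):
            w = p**sum(x) * (1 - p)**(n - sum(x))
            den += w
            num += w * f(x)
    return num / den

# Monotone f = x_0 OR x_1: mu_p(f) = 2p - p^2 is small when p is,
# while p * I_p(f) stays bounded. Conditioning on the singleton
# S = {0} pushes the measure all the way up to 1.
n, p = 6, 0.05
f = lambda x: int(x[0] or x[1])
t = mu_p(f, n, p)
boosted = mu_conditioned(f, n, p, S=[0])
```

Here boosted equals 1, which is far above (1 + delta) t for any small
delta.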
So this is Bourgain's theorem, and there are two other theorems which
are closely related. This is from '99; actually, Bourgain's theorem
is an appendix to a paper by Friedgut, and Friedgut's theorem is a
much more detailed theorem. Bourgain's theorem tells you some
information about the function, and Friedgut's theorem tells you what
the function approximately is, but for the special case where the
function represents a graph property. A Boolean function can
represent properties of graphs; in this case it has very high
symmetry, and in this very symmetric case you can get a stronger
characterization theorem. I will not state it, but it says that such
functions are local. This is also from 1999, and there is a theorem
by Hatami from 2010 that goes for the general case and again gives
you a characterization. So it doesn't just identify a property of the
function but actually tells you what the function approximately is;
it builds on Bourgain's theorem, but it's not for monotone functions,
it's for general functions. And I should say that Bourgain's theorem
also applies to general functions. If f is not monotone -- in this
example I described this subcube, but I could take any subcube: x_1
must be 0, x_2 must be 1, x_3 must be 0. If you want to formulate
Bourgain's theorem this way, instead of saying x_i is 1 for i in S,
you say x_i is equal to some sigma(i) for all i in S. Now if you --
>> [indiscernible].
>> Gil Kalai:
Oh, okay.
>> Across the threshold.
>> Gil Kalai: So you can replace the monotonicity condition by some
Boolean condition of a general type. Because we have just a
two-letter alphabet, if we have such a condition then we can insist
that each of the variables is either 0 or 1, but this is not
instrumental, because if you want the full structure theorem you
really need to take also the subcubes and do some gymnastics of this
kind. Okay. And I should say that although this is a general
characterization for general functions, if you just want monotone
functions, it doesn't give you the full result for monotone
functions. This is sort of a problem that we have in many of the
conjectures: when you restrict to a very special subcase, either in
terms of symmetry or in terms of monotonicity, the general theorem
usually does not automatically apply to this more special structure.
So many times we have one conjecture and a much more general
conjecture, and we cannot even derive the more special conjecture
from the much more general conjecture. So this is the situation when
p is -- I will give an example in a minute.
>> But in this case.
>> Gil Kalai: So in this case there is a structure theorem, which
says that you can describe the function in terms of what are called
pseudo-juntas -- something like subcubes, but these subcubes allow
signs. And now if you insist that the function is monotone, you
would like to assume that these subcubes are also monotone, but this
does not follow from the theorem. This is a motif which appears in
many questions in this area. Okay. So both Friedgut's theorem and
Bourgain's theorem are useful for various purposes; in particular,
these theorems are very useful for proving what's called sharp
threshold behavior. Sharp threshold behavior deals with the
following scenario: we look at mu_p(f) as a function of p. This
function starts from 0, and usually there is a sharp window. If
there is some critical probability p_c where mu_p(f) is one-half,
then it is known that the window where the value of mu_p(f) changes
rapidly has, for every Boolean function, order of magnitude p_c -- it
is big-O of p_c. Sharp threshold behavior corresponds to the case
where the window is little-o of p_c, and the case where we don't have
a sharp threshold, which is called a coarse threshold, corresponds
precisely to the situation where the total influence is very low.
This is the typical application of these theorems.
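The sharp-versus-coarse contrast can be seen in tiny examples (my own
illustration). Both the 9-bit majority and a dictator have critical
probability 1/2, but mu_p(f) jumps steeply for majority and only
linearly, mu_p(f) = p, for the dictator:

```python
from itertools import product

def mu_p(f, n, p):
    """mu_p(f) as a function of p: the quantity whose jump we examine."""
    return sum(p**sum(x) * (1 - p)**(n - sum(x))
               for x in product((0, 1), repeat=n) if f(x))

n = 9
majority = lambda x: int(sum(x) > n // 2)
dictator = lambda x: x[0]

# Both cross one-half at p = 1/2, but majority crosses far more
# steeply: sharp versus coarse threshold behavior.
for p in (0.3, 0.5, 0.7):
    print(p, round(mu_p(dictator, n, p), 4), round(mu_p(majority, n, p), 4))
```

Repeating this for larger odd n makes the majority window shrink like
1/sqrt(n), while the dictator's window never shrinks.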
Now, the conjecture -- so this was not yet a problem; the conjecture
is that Bourgain's theorem (Bourgain's is the easiest to describe,
and it also has all the consequences) applies also when t is small,
and there the size of S is less than or equal to a constant --
depending on all the other things, C and maybe delta -- times
log(1/t). So the conjecture is that precisely the same theorem
characterizes the near-equality, or stability, cases of this
inequality also when t is small. So this is the first problem. It's
sort of a nice problem, but we don't know how to solve it. Okay, so
is it clear? You can please ask me any questions.
>> So the question is -- there's no t or log(1/t) in Bourgain's
theorem.
>> Gil Kalai: Right, because Bourgain's theorem is the case where t
is bounded away from 0 and 1. When t is bounded away from 0 and 1,
the size of S must be a constant.
>> Right. So what's the assumption in this case -- the assumption on
p times the influence is what?
>> Gil Kalai: In this conjecture, this is the assumption.
>> That conjecture.
>> Gil Kalai: This is the assumption. If t is bounded away from 0
and 1 we can forget about all these theorems, but otherwise we have
to worry. So what this conjecture is used for --
>> Think of a canonical example [indiscernible].
>> Gil Kalai: Yeah, we are a little bit short of good examples. I
will mention a few other examples, but, yeah -- okay, so I will come
back to your question; you see, the point is that it depends on t.
And let me mention a few other questions. So certainly this is a
positive type of example, but I will mention in a minute a few other
examples. Okay.
>> The case where the probability that the function is 1 is going
to 0.
>> Gil Kalai: Yeah, the probability is going to 0 and can be very,
very small.
>> So usually we're interested in t which is fixed but p which is
going to 0 really fast.
>> Gil Kalai: Now we change our interests, for reasons I will
explain. It's not that this is not interesting on its own, but there
are good reasons to be interested also in the case where t is itself
small, and maybe I will explain what the reasons are. This is the
second problem.
>> When you take a power of a Boolean function, when you combine
Boolean functions, the critical probability of the new composite
function can be moved far from that of the components.
>> Gil Kalai: Yes, I think this is a very important remark. There
are some ways to shift things: starting with a function f, we have
some tricks that we can use. One trick is to take the same function
on disjoint sets of variables and take the AND, for example, or the
OR; or we can take many variables representing the same variable and
take the OR. And this allows us to move the critical probability, or
the t, to a friendly domain. Unfortunately it doesn't help in these
cases, but it helped in some other cases, and it's still something
that we try many times until we see it doesn't help. And maybe it
will help.
>> But the point is, sometimes in order to understand the critical
case for one [indiscernible] you need to know this conjecture for
[indiscernible].
>> Gil Kalai: Okay. So I should say that the motivation for these
theorems -- the theorems of Bourgain, Friedgut and Hatami -- was to
understand sharp threshold behavior, and in this problem, what we
want is to use these general isoperimetric results to understand the
location of the critical probability. The reason this is relevant to
the location comes from another problem. So this is now problem two,
and it says the following: if f is a monotone Boolean function and p
is the critical probability for f -- i.e., mu_p(f) is one-half --
then there is another probability q, where q belongs to the interval
from p/log(n) to p, such that q I_q(f) is less than or equal to some
constant C times mu_q(f) log(1/mu_q(f)) (maybe with some constant
here). So this means that for any Boolean function in the world, if
you reduce the critical probability by no more than a logarithmic
factor, you reach a near-equality case like this. Okay. Now, this
problem is especially frustrating for reasons that I will explain. I
should say that there is a theorem, or proposition --
>> What?
>> [indiscernible].
>> Gil Kalai: I think this -- it's related. This conjecture and the
first problem refer to a work with Jeff Kahn from 2007, I think, or
2006 -- so Kahn and me -- and it is related to work by Talagrand,
where the small-p things are sort of related; it's related to an
inequality of Talagrand. So what we know is that it is true not for
q in the interval down to p_c/log(n), but down to p_c divided by
n^epsilon. So if instead of log(n) you write n^epsilon, where
epsilon can be arbitrarily small, then this statement is correct.
This means that such near-equality cases can always be identified not
so far from the critical probability. So this refers to your
question: on one hand, we want to have cases of equality and to
understand their structure -- it resembles the subcube -- but on the
other hand, every time we have a Boolean function, we believe that by
modifying the probability and reducing this t we can reach such a
behavior. If we reduce p so that t is exponentially small, then this
holds automatically. So this is problem number two. And what is
frustrating
about problem number two is this: the function mu_p(f) is completely
determined by the numbers of vectors with exactly k ones on which f
takes the value 1, and there is a complete characterization of such
sequences of numbers, which is called the [indiscernible] theorem.
So once we have a complete characterization, in principle this should
answer any question we have. The tragic thing in this case -- and
this applies to many complete characterizations -- is that even after
you have the complete characterization, you still cannot answer
reasonable problems. In any case, in particular, a consequence of
the complete characterization is that we have some very specific
monotone functions that describe all the possible behaviors of
mu_p(f): the functions where the value of f on sets of the same size
is 1 for the lexicographically first sets and 0 for the rest. So
this is very special behavior, and every mu_p(f) is described by such
a behavior, but still -- anyway. So this is problem number two.
Now, these
problems came from another problem, and now I want to mention this
other problem -- and there's a pair of problems. Problem three is
about graph properties. This is the following conjecture, also by
Kahn and myself. You look at the random graph G(n, p), and you ask
about the property of containing a graph H. Okay. So this is a very
familiar type of property. H is not fixed; H can depend on n. A
motivating example: H is a Hamiltonian cycle, or a matching -- a
perfect matching. Okay. Or a tree. What?
>> [indiscernible].
>> Gil Kalai: Size n. Let's say the tree -- there's a famous tree,
which looks like this: square root of n vertices here, and sizes of
[indiscernible]. Okay. So this is the property. And now we
can ask what the critical probability is. Let me move this this way.
So p_c is the critical probability. And now let me define another
number; this number is the not-so-good student's answer to the
question of what the critical probability is -- not so good, but not
terribly bad. So I'll tell you what it is. If you are asked in an
exam what the critical probability is, one way to answer is to look
at the q where the expected number of copies of H is one-half. Okay.
This is a lower bound for the critical probability, and if you think
that expectation is enough for any probability calculation, then you
can think that this is the answer. But if you do this, then you
really may be a mediocre student; if you're a slightly better
student, you realize that you need to worry not only about H but also
about every subgraph of H, because it may happen that for some
subgraph of H, the place where the expected number of copies reaches
one-half appears only later, and this is also a lower bound for the
critical probability for H. So q is the minimal number such that the
expected number of copies of H' in G(n, q) is greater than or equal
to 1 for every subgraph H' of H. Okay. So you see, this q is
typically easy to compute. You are laughing, Yuval.
>> Students will realize by themselves to take the expectations of
subgraphs? I'll take them [laughter].
>> Gil Kalai: I thought about it. Okay. Yes. So I said they're not
so bad students, but they don't --
>> [indiscernible] [laughter].
>> For you they're mediocre.
>> Gil Kalai: Yes, the standard is also a random thing. When I
entered the university, my first master's student was pretty good,
and I thought this was more or less the median. But after all that
time I didn't get such great students so often. In any case, I may
withdraw the full strength of this claim, but q is easier to compute
than p_c. In many cases --
>> [indiscernible].
>> Gil Kalai:
What?
>> [indiscernible].
>> Gil Kalai: It is the minimal -- the minimal q such that the
expected number of copies of every H' in H is greater than or equal
to 1. So q is less than or equal to p_c.
>> Change to minimum?
>> Gil Kalai: Yeah. Okay. Minimum and maximum, rows and columns,
yes and no -- these are all interchangeable. This is the minimum.
So again, in many cases q is much easier to compute than the critical
probability, and the conjecture is that p_c divided by q is less than
or equal to some constant times log(n). And even a constant times
n^(o(1)) would be very good. So the conjecture is that for any graph
property of this kind, this naive computation gets you fairly close
to the truth. Now, okay. So when we
worked on this conjecture -- and this is still very specific to graph
properties, the very particular graph property of containing a
specific subgraph -- we wanted to find a counterexample. I think a
reasonable man would assume that the conjecture is false. So we
wanted to find an example, and we couldn't find one, and therefore we
made a very large generalization: instead of graph properties, we
consider general Boolean functions. So problem four is a much more
general conjecture for general Boolean functions. For a general
Boolean function, the critical probability can be defined in the same
way, and there is an analog of this q for general Boolean functions.
And again we conjecture that the gap between the expectation critical
probability and the real critical probability is at most some
constant times log(n).
Okay. So this is the conjecture, and the interesting thing is that
if we knew a positive answer to the first problem -- the extension of
Bourgain's theorem to small t, or an appropriate result in this
direction -- then this would get us fairly close to proving these
general conjectures. And as before, one slightly strange fact is
that although the conjecture for general Boolean functions is
infinitely more general than the graph conjecture, the graph
conjecture does not follow from the more general conjecture, for the
same reason as before: here we make some computation which is
specific to graphs, and when we move to general Boolean functions the
q changes as well. We don't expect it to change too much, but that
would be still another conjecture. So this is the first cluster of
problems I wanted to describe, and now I want to describe another
cluster of problems, about influences of large sets. So, modestly,
10 to 20 percent of the lecture is what I wanted to talk about, but
that's okay; I didn't really believe I would talk about everything,
so my lecture preparation became sketchier and sketchier as we got
further.
>> The only problem with that is if your flight is in less than four
hours.
>> Gil Kalai: Don't worry, I will not keep you here. I want to
mention the question about influences of large sets, because this was
open for a while. So far we talked about influence -- the total
influence. There is also a notion of the influence of a set. So let
S be a subset of the variables, and I_S^+(f) is the measure of all x
such that there exists y with y_i equal to x_i for every i not in S
(this is all with respect to p, but let's assume now that p is
one-half, because this will be easier, and then we can write mu for
mu_{1/2}), minus mu(f). So the influence of a set of variables
toward 1 is the probability that, when you start from a vector x, you
can change the values of the variables in S and make the function 1,
minus the probability that the value of the function is 1. What?
x_i equal to y_i if i --
>> [indiscernible].
>> Gil Kalai: For all i not in S.
>> I'm sorry.
>> Gil Kalai: i not in S. For i in S you don't care.
>> XI equals Y.
>> Gil Kalai:
This is XI equal to YI.
>> You can take Y equal X.
>> Gil Kalai:
You can take Y equal X but you can do better.
>> [indiscernible].
>> Gil Kalai:
What?
>> Such that x has this property?
>> Gil Kalai: No, this is the measure of all x -- oh, I forgot
something: f(y) is 1. Very good. Yeah. Okay. So we want to be able
to make the value of the function 1. And similarly we can define
I_S^-. Now, the case where S is a single element is of special
interest; in this case I_k^+ is equal to I_k^-, and this is one-half
of the so-called influence of the k-th variable, which was much
studied.
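A brute-force version of these definitions under the uniform measure
(my own code, not from the talk). For the 3-bit majority, I_S^+ for a
single coordinate comes out to 1/4, half of that variable's usual
influence 1/2, matching the claim above:

```python
from itertools import product

def influence_plus(f, n, S):
    """I_S^+(f) under the uniform measure: the probability that the
    coordinates in S can be reset to make f equal 1, minus mu(f)."""
    S = sorted(set(S))
    reachable = ones = 0
    for x in product((0, 1), repeat=n):
        ones += f(x)
        # try every setting of the coordinates in S, keeping the rest
        for vals in product((0, 1), repeat=len(S)):
            y = list(x)
            for i, v in zip(S, vals):
                y[i] = v
            if f(tuple(y)):
                reachable += 1
                break
    return (reachable - ones) / 2**n

maj3 = lambda x: int(sum(x) >= 2)
single = influence_plus(maj3, 3, [0])      # 1/4: half of I_1(maj3) = 1/2
pair = influence_plus(maj3, 3, [0, 1])     # 1/2: here I_S^+ = 1 - mu(f)
```

The pair case already shows the Sauer-Shelah phenomenon discussed
below: a large enough S can force the value all the way to 1.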
And the case of a larger set was not so much studied. I should say
there is the theorem mentioned before: the KKL theorem says that the
maximum over k of I_k(f) is greater than or equal to a constant times
mu(f)(1 - mu(f)) log(n)/n. So this was a useful theorem; again, it's
mainly of interest when mu(f) is bounded away from 0 and 1. Okay.
So for a while there
were several questions about what can be said about influences of
large sets. For many years nothing happened, and then two years ago,
or a year and a half ago, we managed to get some improvement on what
was known, and for a little while we thought, aha, we settled the
problem. But then we saw that essentially a large chunk of the
problem remained; it was just the way it was formulated that gave us
the solution. Since then there are some additional results by
Bourgain, but there's still a fairly large gap. Here's the question.
So
suppose that mu(f) = t. How small can t be such that there is always
a set S, with size at most half the variables -- (1/2 - delta) n --
such that the influence toward 1 satisfies I_S^+(f) greater than or
equal to 9/10? The 9/10 is arbitrary; I say it so as not to worry
too much about it. So we want a set with less than half the
variables, we have a function whose measure is small, and we ask how
small the measure can be so that we can still always guarantee a set
with large influence toward 1. Now, the reason we say one-half minus
delta here is that if we allow more than one-half of the variables,
then there is a famous theorem called the Sauer-Shelah lemma, which
says that if t is exponentially small but not too small -- if t is
0.999^n -- then we can still have a set S of size 0.49n such that the
influence toward 1 is everything: it is 1 minus the measure of the
set, so we can actually force the value to be 1. This is essentially
the shattering lemma of Sauer and Shelah.
But if the size of the set S is less than one-half of the variables,
then the Sauer-Shelah lemma says nothing, and we're in uncharted
territory. This is a very interesting question, and for a while
there was a conjecture that this can be done also when t is
exponentially small, but there is an example -- again by Kahn and
me -- showing that t as large as e^(-sqrt(n)) does not suffice. On
the other hand, there is a theorem -- it follows from the KKL
theorem -- that 1/n^c does suffice for some small constant c greater
than 0. So still there is a huge gap; we don't know where things
lie. Our example, I should say, is a variant of the tribes
example -- there is a well-known example in this business, the tribes
example, and ours is a variant of it. And there's a recent result by
Bourgain showing that 1/n^C suffices for every big C, when delta is
delta(C): for every big C there is some small delta for which this is
also true, and this always requires quite hard work beyond the
application of the KKL theorem. So the constant c grows from being
small to being large, but the bound is still polynomial, and on the
other side we have something which looks exponential. So this is
fairly far away. Okay. So
maybe I should say something about the example of tribes and dual
tribes. I'm sure you saw it before, but it is relevant to both
questions. The tribes example is the following: you divide your
Boolean variables into blocks T_1, T_2, up to T_m -- these blocks are
called tribes -- and f is 1 if there exists a tribe T such that x_j
is 1 for every j in the tribe T. This is called the tribes function.
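A minimal sketch of the tribes function (the block sizes and counts
below are my own choices). With m tribes of size s under the uniform
measure, mu(f) = 1 - (1 - 2^(-s))^m, so taking s around log_2(m)
keeps the measure near a constant:

```python
from itertools import product

def tribes(x, s):
    """Tribes: OR of ANDs over consecutive blocks ('tribes') of size s."""
    return int(any(all(x[i:i + s]) for i in range(0, len(x), s)))

s, m = 2, 4
n = s * m
# Exact uniform measure by enumeration, versus the closed formula.
mu = sum(tribes(x, s) for x in product((0, 1), repeat=n)) / 2**n
expected = 1 - (1 - 2**-s)**m
```

The dual tribes function defined next is obtained by swapping the
roles of AND and OR (equivalently, negating inputs and output).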
And the dual tribes function is: f is 1 if for every tribe there is a
j in the tribe such that x_j is 1. So there are two functions, one
dual to the other, and they're relevant to this question. They're
also relevant to the question about the gap between the expectation
threshold and the threshold. For example -- I should have mentioned
this, but it may be obvious -- the log(n) in the previous question
cannot be improved, because for the first examples where people tried
to compute the critical probability, there was this log(n) gap: for
example, connectivity in graphs, or perfect matching. The place
where the expected number of perfect matchings in a random graph
reaches 1 is roughly when p is of order 1/n, but the place that
guarantees a perfect matching is log(n)/n. The reason is that until
p is about log(n)/n -- until every vertex has about log(n) expected
edges -- there will be isolated vertices.
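This log(n) gap can be checked numerically (my own sketch; for the
perfect matching, the binding subgraph in the definition of q is the
whole matching, so it suffices to solve
E[number of perfect matchings] = (n-1)!! p^(n/2) = 1):

```python
import math

def q_matching(n):
    """p at which the expected number of perfect matchings in G(n, p)
    equals 1: (n-1)!! * p**(n/2) = 1, solved in log space to avoid
    overflow (n must be even)."""
    log_double_fact = sum(math.log(k) for k in range(n - 1, 0, -2))
    return math.exp(-2.0 * log_double_fact / n)

# Sanity check on n = 4: (4-1)!! = 3, so 3 * q**2 should equal 1.
assert abs(3 * q_matching(4)**2 - 1) < 1e-9

# For large n the expectation-based guess q is about e/n, while the
# true critical probability is log(n)/n: a log(n) factor larger,
# because of isolated vertices (the dual-tribes mechanism).
n = 1000
q = q_matching(n)
pc_estimate = math.log(n) / n
ratio = pc_estimate / q   # grows like log(n), up to constants
```

So the mediocre student's answer is off by exactly the logarithmic
factor the conjecture allows.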
Now, the property of containing a perfect matching is, for the random
graph, essentially equivalent to the property of not having isolated
vertices, and if you look at the property of not having isolated
vertices, this is essentially the dual tribes: for every vertex you
have the tribe of edges containing this vertex. These tribes are not
entirely disjoint, but -- what? Yes, certainly -- it's very close to
dual tribes. And I believe there is some sense in which the dual
tribes are the reason for this log(n), though we cannot even
formulate it. So maybe I'll just mention, without the blackboard,
one question that we're curious about. There is some dichotomy in
the study of, let's say, monotone Boolean functions between the
majority function and other functions. The majority function has a
very strange behavior: it has large total influence, but if you look
at the Fourier coefficients, which relate to the influences, they're
concentrated on low degrees -- it is noise stable. And it would be
nice if for Boolean functions there were some decomposition theorem
which says that every monotone Boolean function has some ingredient
which is majority-like and some other very noisy ingredient, similar
to the decomposition for dynamical systems between mixing and compact
parts. This could be very useful; it could save a lot of conjectures
that were killed prematurely by some counterexamples.
[laughter] maybe I'll stop here.
[applause]
>> Any questions or comments?
>> Naively, it may be tempting to guess that any type of behavior
that can occur in this world can be exhibited by some extension of
tribes, maybe with multiple levels. Is that just flat --
>> Gil Kalai: Okay. This is really a bit too strong. I like strong
conjectures, and I continue to believe that there's some truth to
them even when they're false, but I think there are some other types
of behavior. For example, from the point of view of computation, or
complexity, tribes represents a very simple type of function, and
tribes has very small influences. Another famous function in this
area is the so-called recursive majority of threes, where you take
the majority of majorities for log(n) steps, and this has a sort of
different feature. Another function which we like is the crossing
event in percolation. This is a more sophisticated function, but
still a Boolean function, and -- I don't know if it's close to
recursive majority of threes in some sense, but it's certainly closer
to recursive majority of threes than it is to tribes. So I would
certainly add recursive majority of threes. Again, this is only for
monotone functions.
>> So this crossing event in percolation is kind of like a graph
property.
>> Gil Kalai: Yeah, you can think about it as a graph property; this
time it's not a subgraph of the complete graph but a subgraph of a
grid. It's a graph property, but not technically a graph property in
the sense of Friedgut's theorem, because he essentially needed enough
symmetry so that for sets of any size there are only a bounded number
of orbits, depending on that.
>> Yuval Peres: Okay. Thank you again. [applause]