>> Yuval Peres: So we're happy that today our very own Alexandra Kolla will tell us about unique games. You've seen all the cakes in honor of this before.
>> Alexandra Kolla: Hello. Let's see, we have two devices for the talk. One remote and one
pointer. Okay.
So I'm going to talk about unique games. Probably most of you have heard something about unique games, but a lot of you might not know much about them. So I'll explain, hopefully, everything, and give you some algorithms for them.
Okay. So let's start with a problem that you hopefully all have heard of. A simple problem, the max cut problem. You're given a graph, G, and the objective is to partition its vertices into two parts such that the number of edges that cross the cut between these two parts is maximized. That's the max cut problem.
Okay. So a long time ago Karp proved that this is an NP-complete problem, so there's no hope we can solve it in polynomial time unless P equals NP. So the next best thing to ask for is to approximate it.
So what about approximating max cut? There have been several approximation algorithms in the literature, starting with the folklore observation that a random cut achieves a ratio of 2: it is at least one-half of the optimum.
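To make the folklore bound concrete, here is a minimal Python sketch (hypothetical code, not from the talk): a uniformly random bipartition cuts each edge with probability 1/2, so in expectation it cuts at least half as many edges as the optimum.

import random

def random_cut_value(n, edges, trials=1000, seed=0):
    """Average fraction of edges cut by a uniformly random bipartition.
    Each edge is cut with probability 1/2, so the expectation is |E|/2,
    which is at least half of the maximum cut."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        side = [rng.randint(0, 1) for _ in range(n)]
        total += sum(1 for (u, v) in edges if side[u] != side[v])
    return total / (trials * len(edges))

# Example: a triangle; the max cut is 2 of 3 edges, a random cut averages 1.5 of 3.
print(random_cut_value(3, [(0, 1), (1, 2), (0, 2)]))   # about 0.5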
And then in a breakthrough paper, Goemans and Williamson in 1994 showed there's a 0.878 approximation algorithm for max cut. They used this very exciting convex programming technique, and a rounding of the output of the convex program, and they were very excited about it. And this ratio is the quantity that comes out of their analysis: the minimum over rho of arccos(rho) over pi, divided by (1 minus rho) over 2. It's a very nice formula.
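For reference, the Goemans-Williamson ratio they analyze is usually written as follows (this is the standard form of that constant, not a quote from the slide):

$$ \alpha_{\mathrm{GW}} \;=\; \min_{-1 \le \rho < 1} \frac{\arccos(\rho)/\pi}{(1-\rho)/2} \;\approx\; 0.878. $$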
So how many of you think this is the best we can do for max cut? Simple. One, two, three, four.
Sorry. I have to -- three arms. Okay.
>>: The majority.
>> Alexandra Kolla: No, we only have four people that think that this is the best we can do.
>>: That it exists? What do you mean -- that it's not, or that it's possible?
>> Alexandra Kolla: That it's possible. So you vote yes?
So very few people think that this very nice quantity is the best you can do for max cut. However, if the unique games conjecture is true, that is the best we can do; and, in fact, this is not a coincidence.
The unique games conjecture is a tool, basically, that captures the exact approximability of many problems important for computer science, like max cut. So as you see here, the unique games conjecture hardness exactly matches what we can do algorithmically for max cut. Similarly for vertex cover, we have an approximation algorithm with a factor of two and we cannot do better. Similarly for Max k-CSP. So you see that this is amazing: this seemingly unrelated thing pins down the exact approximability of several problems.
Okay. So let's see what I'm going to talk about today. I'm going to tell you what this unique games conjecture is. Then I'm going to give you the connection between unique games and graph theory, and hopefully manage to present the proof that the unique games conjecture is false when the constraint graph is an expander, and in some other cases as well, and then end with some open questions.
Okay. So let's start with the unique games conjecture. This next slide is basically based on research of one of my collaborators, [inaudible]. So unique games -- what are unique games? Unique games are popular not only among computer scientists; in fact, if you search on Bing you can see 70 million pages about unique games, and pages to play unique games and arcade games.
>>: The phrase unique games or just the two words?
>> Alexandra Kolla: The search that -- when here --
>>: Yeah. Unique games, just the word unique and the word games.
>> Alexandra Kolla: Well, okay. Okay. But a lot of them have "unique games" as a phrase, and this is one example. Moreover, if you search on Yahoo! you can see that you can purchase unique games online. So that's very nice as well. And if you search on Google you can see that unique games might have something to do with the unique games conjecture. Okay. But unique games, for us, for the rest of the talk, are going to be the ones relevant to the unique games conjecture.
And this unique games problem is the unique label cover problem. You're given a set of constraints; in a simple case, which is as hard as the general case, assume that you're given a system of linear equations modulo some prime K, of the form Xi minus Xj equals cij mod K.
>>: You said K was prime?
>> Alexandra Kolla: Yes. And K is called the alphabet size. So is there any question?
>>: No, that's okay.
>> Alexandra Kolla: And then the goal in unique games is basically to find a labeling, an assignment of values 1 through K to the Xi's, that maximizes the number of constraints that are satisfied simultaneously.
Okay. Let's see an example. Let's say we had K equal to 3 and we had, say, X1 minus X3 equals 0, X2 minus X3 equals 0, and X1 minus X2 equals 1 as an example of a unique game. And you can see it as a graph, the equivalent representation, the constraint graph: you put one vertex for each variable, one edge whenever there's a constraint between two variables, and you just put the constraint on the edge. Okay. So this would be the graph that corresponds to that instance. And let's see an example of an assignment. Assume I assign 0 to every single variable. What does that mean? It means that this constraint, or this edge, is satisfied, as well as this one, but not this one. And in fact that's the best I can do for this game, so the value of this game would be 2 over 3. So, you know, that's the idea. That's what a unique game is. And I'm going to explain unique games in terms of d-regular constraint graphs, for simplicity, for this talk.
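To make the objective concrete, here is a minimal Python sketch (hypothetical code, not from the talk) that computes the fraction of satisfied constraints for an assignment and brute-forces the tiny example above, finding the value 2/3:

from itertools import product

# A unique games instance over Z_K: constraints (i, j, c) mean x_i - x_j = c (mod K).
K = 3
constraints = [(0, 2, 0),   # x1 - x3 = 0
               (1, 2, 0),   # x2 - x3 = 0
               (0, 1, 1)]   # x1 - x2 = 1
n = 3

def value(assignment):
    """Fraction of constraints satisfied by the given assignment."""
    sat = sum(1 for (i, j, c) in constraints if (assignment[i] - assignment[j]) % K == c)
    return sat / len(constraints)

# Brute force over all K^n labelings (fine for a tiny instance; the whole point of the
# conjecture is that approximating this on large instances may be hard).
best = max(product(range(K), repeat=n), key=value)
print(best, value(best))   # an optimal labeling, with value 2/3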
>>: What's unique about this?
>> Alexandra Kolla: The one-to-one correspondence. That's a good question. It's the one-to-one correspondence: if you assign one value to X1, then there is only one value of the other endpoint that satisfies the constraint.
>>: Two variables.
>> Alexandra Kolla: It's a permutation; in the general case it could be any permutation, not just linear equations.
>>: But the case speaks -- you're not playing with K.
>> Alexandra Kolla: The instance is given along with K, yes. So the graph and K are given as the instance here.
So that's what are unique games. Everybody clear?
>>: Why is the game -- [laughter].
>> Alexandra Kolla: Because you can buy them online, you can play them, you know -- okay. So it's a game because, historically, it basically came from two-prover one-round games. In computer science, in complexity theory, we like thinking of equations and relations between variables as a game with a verifier and one or two or multiple provers. The verifier asks questions of the provers, the provers respond back, and you can model each answer of a prover as a value in 1 through K; then you put the constraint that the verifier accepts if the provers send consistent values, and if this consistency check is a one-to-one correspondence, this is a unique game. I don't know if that's -- but I don't want to get into the history of that right now, yes.
>>: Talking about the labels, where would the 0, 0, 1 be -- so X1 minus X3 is 0, is that labeled on --
>> Alexandra Kolla: You can think of it as a label on this edge. This is not the graph I'm going to be using mostly, so it's going to be more clear in the future.
Okay. So what is the conjecture now? Khot, in 2002, conjectured the following: given a unique games instance with actual value greater than 1 minus epsilon, it is NP hard to satisfy even a delta fraction of all the constraints. So in fact, basically, to be more accurate, it says that there is a sufficiently large K such that, if you give me epsilon and delta, I have this hardness promise.
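Stated formally, in the usual way (a standard formulation, not a verbatim quote from the slide): for every $\varepsilon, \delta > 0$ there exists $K = K(\varepsilon, \delta)$ such that, given a unique games instance $\mathcal{U}$ with alphabet size $K$, it is NP-hard to distinguish between

$$ \mathrm{val}(\mathcal{U}) \ge 1 - \varepsilon \quad\text{and}\quad \mathrm{val}(\mathcal{U}) \le \delta. $$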
>>: I don't understand. You started by saying given an instance but the conjecture isn't defined in
any particular instance.
>> Alexandra Kolla: But the conjecture says give me epsilon and delta I'll give you an instance
that has large enough alphabet size K such that it's NP hard to distinguish between these two
cases so it's parameterized.
>>: In every instance.
>> Alexandra Kolla: For every epsilon and delta you give me I give you different instance.
>>: I'm confused about the order -- because you start up with given an instance. So to which --
>> Alexandra Kolla: This is more rough. I mean --
>>: We're asking you to say it clearly.
>> Alexandra Kolla: That's what I'm stating clearly.
>>: But it didn't make sense. If you're given epsilon and delta and you're given an instance, it can't be NP hard to distinguish on a particular instance.
>>: [inaudible].
>> Alexandra Kolla: I'm not really understanding the question.
>>: [MULTIPLE SPEAKERS].
>> Alexandra Kolla: For epsilon, for every delta.
>>: Such that there exists a unique games instance with alphabet size K.
>>: Such that what.
>>: Unique game instance such as K [inaudible].
>>: Okay.
>> Alexandra Kolla: Does that -- sorry. So as we said before, it implies that many known approximation algorithms, like those for max cut, vertex cover, and Max k-CSP, are optimal if the unique games conjecture is true.
And in fact Raghavendra, in 2008, in a breakthrough paper, proved that for CSP problems semidefinite programming is the best you can do to approximate them, assuming the unique games conjecture. There is a deep connection between semidefinite programming and unique games.
So is the unique games conjecture true? That's the major question. Note that, by the definition of the problem, it's kind of embarrassing to not know if it's true or not, because we are able to solve systems of linear equations exactly in polynomial time. Yet asking to even approximate them, to really distinguish 1 minus epsilon from delta, we don't know how to do. So that's kind of shocking, at least to me.
So in fact people have worked on it and thought about it for a long time, and there have been several attempts to prove or disprove the unique games conjecture. To disprove it, you have to find an algorithm. The best known polynomial-time algorithm is due to Charikar, Makarychev, and Makarychev. And in fact Makarychev showed that it cannot be improved by semidefinite programming; so a particular method of attack on unique games, semidefinite programming, is doomed to fail. And in fact Khot, Kindler, Mossel, and O'Donnell showed that if you improve this algorithm beyond the parameters that Charikar, Makarychev, and Makarychev have, then you would refute the conjecture.
>>: What is the [inaudible] the attribute you can do epsilon.
>> Alexandra Kolla: In polynomial time. So this is -- I'm talking about efficient algorithms now, and then I'm going to explain more about other algorithms. I mean, in particular, if unique games is NP-complete you can always solve it in exponential time. Okay. So there have also been attempts on the other side, to prove the conjecture, maybe with strong parallel repetition: maybe you can reduce a gap max cut problem -- you can't see that, right? I don't know why; it shows better on my laptop, but -- sorry. Okay. So I'm going to read what it says. Basically there have been attempts to reduce problems to unique games, to get a feeling that maybe unique games is hard. And one hope in this direction would be to reduce a gap instance of max cut to unique games by using strong parallel repetition. I'm not going to say much more about what that is, but the point is that Raz in 2008 showed that this is not possible. So that attempt to prove the unique games conjecture failed. And so, okay, since we can neither prove nor disprove it, let's see what the hard instances are, if we can show anything -- since all these attempts have been around but there is still no consensus, maybe some intermediate steps could help us decide or get an intuition about what is happening. In this direction, people have studied expander graphs and locally expanding graphs, including joint work with Arora, Khot, Steurer, Tulsiani, and Vishnoi, and with Tulsiani, as well as work of Makarychev and Makarychev and of Raghavendra and Steurer. And now we know that these graphs are easy: there are polynomial-time algorithms that can solve unique games on expanders and local expanders.
>>: You didn't say how the graph is related to the problem yet?
>> Alexandra Kolla: Not yet. I will; that's the second section. Okay. And then there has been another line of research that says, okay, maybe let's relax the polynomial-time requirement a little bit and allow for slower algorithms, maybe quasi-polynomial or sub-exponential. And in fact, for a few graphs there's a quasi-polynomial-time algorithm that solves unique games on those graphs; in particular, the class of graphs that are hard for semidefinite programming is solvable, and this is a paper of mine from last year. And then a breakthrough paper of Arora, Barak, and Steurer showed that for any graph you can solve unique games in sub-exponential time, which still doesn't disprove the conjecture -- there are NP-hard problems that have sub-exponential-time algorithms. Then there was the hope that maybe we can find distributions of instances that are hard; maybe it's easier to argue about distributions. But it turns out that, at least for random instances, the answer is no, because random graphs are expanders, roughly. And then for semi-random instances -- I know you can't see that, I'm sorry -- for semi-random instances, in work by myself and the two Makarychevs, we show that semi-random instances are easy. So this direction is also failing. Okay. So that's the state of the art. And here is a summary of the algorithms that run in polynomial time on general graphs and approximate highly satisfiable instances of unique games, with parameters that depend on either the alphabet size or log N. These algorithms all depend asymptotically on the alphabet size K or on log N, and in fact they're all based on semidefinite or linear programs.
>>: What's the right-hand column?
>> Alexandra Kolla: On 1 minus epsilon satisfiable instances, the algorithm finds an assignment that satisfies that many constraints. If epsilon is tiny, then that's a constant; that's good. But if epsilon is some constant -- if epsilon is bigger than 1 over log K or 1 over log N -- then this gives nothing. And in fact this is the best one: it's the algorithm I was mentioning before, where if you improve its parameters then you would have refuted the unique games conjecture, and it is tight for semidefinite programming. And then for expander graphs, the people I mentioned before, the Makarychevs, gave improved parameters: on 1 minus epsilon satisfiable instances we can recover an assignment satisfying a constant fraction, depending on how good of an expander the graph is, and similarly for local expanders. So the common thing about most of the approaches here is that, as I said before, they are based on semidefinite or linear programming.
>>: Constant.
>> Alexandra Kolla: It depends on the expansion. But the interesting case is when it's bigger than delta. So you have to --
>>: You talked about algorithms -- the exponent in the algorithm is a --
>> Alexandra Kolla: This is for algorithms.
>>: Function of epsilon, right?
>> Alexandra Kolla: Yes. Okay. So the thing to note here is that these algorithms all depend on convex programming, semidefinite programming or linear programming. And lately, this past year, there was a result of mine, an algorithm that solves unique games with completely spectral techniques on graphs that have few large eigenvalues, or graphs with a certain spectral profile. So this is different in that sense from the previous algorithms, and also it solves in quasi-polynomial time the instance that was tight for the semidefinite program. And then this recent year Arora, Barak, and Steurer gave a sub-exponential-time algorithm that uses this algorithm of mine in their paper. And as I mentioned before, we have these semi-random instances that are easy. So this is what the picture is for unique games; this is the state of the art. And let's now go and see how all these graphs are related to unique games. Do you have any questions so far? So here's the constraint graph for this particular set of linear equations that I mentioned before. We can see a unique game as a constraint graph, and we can also see another object that's called the label extended graph. The label extended graph basically goes as follows: for every original vertex of my graph, which was a variable in my set of linear equations, you replace it with K vertices. And then every edge you replace with a matching that matches the little nodes here that satisfy the constraint. And you do that for this edge and that edge and this edge.
And that's the label extended graph. Is that clear? And this is a graph; it has an adjacency matrix, which is nothing but the original adjacency matrix of the graph with each edge entry replaced by a K-by-K block, the permutation matrix of the constraint that sits between that vertex and its neighbor.
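As a concrete sketch of this construction for the linear-equation case (hypothetical code, not from the talk; the edge constraint x_i minus x_j = c corresponds to the permutation matching label a on side i with label b = a minus c on side j):

import numpy as np

def label_extended_adjacency(n, K, constraints):
    """Build the nK x nK adjacency matrix of the label extended graph.

    constraints: list of (i, j, c) meaning x_i - x_j = c (mod K); the edge {i, j}
    contributes a perfect matching connecting node (i, a) to node (j, b) whenever
    a - b = c (mod K)."""
    A = np.zeros((n * K, n * K))
    for (i, j, c) in constraints:
        for b in range(K):
            a = (b + c) % K          # the label pair (a, b) satisfying a - b = c
            A[i * K + a, j * K + b] = 1
            A[j * K + b, i * K + a] = 1
    return A

# The tiny example from before: one K x K permutation block per edge.
# For a perfectly satisfiable instance this matrix splits into K disjoint
# copies of the constraint graph, as discussed below.
A = label_extended_adjacency(3, 3, [(0, 2, 0), (1, 2, 0), (0, 1, 1)])
print(A.shape)   # (9, 9)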
Okay. So that's the label extended graph. That's the representation of the label extended graph that I'm going to use. And let's try to use that now to show that the unique games conjecture is false on expanders.
Okay. So we have this constraint graph, and now assume that this constraint graph is an expander graph. What do I mean by an expander? How many of you have heard of and know expanders? How many of you don't? Okay. So, expanders: basically I can define the edge expansion to be the ratio of the number of edges crossing the sparsest cut over the size of the smaller side of the cut. You can think of it as a quantity that shows you, if I look at the worst possible cut, the sparsest possible cut in my graph, how sparse it is. So edge expansion is just the minimum, over cuts, of the ratio of crossing edges over the size of the smaller side of the cut.
And then there is a spectral characterization of expanders: you can use the second largest eigenvalue of the adjacency matrix of the graph, which is just the max, over all vectors perpendicular to the all-ones vector, of the quadratic form of the adjacency matrix over the norm of the vector.
Probably if you've never seen expanders and eigenvalues this is not very helpful, but I hope for the rest of the people it is. And expander graphs, if you look at the spectral gap, the difference between the first eigenvalue and the second eigenvalue, have a large spectral gap, and a lot of edges cross every cut. So: large spectral gap, a lot of edges cross every cut -- these quantities go together, and the relation between them is given by Cheeger's inequality, which basically makes that connection precise. So for us you can think of the spectral gap as a constant; for other values it's not such a tight relation, but for constants it is.
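For a d-regular graph G, one standard way to write the two quantities just mentioned, and the Cheeger-type inequality relating them, is (a sketch of the usual statement, not the exact formulas on the slide):

$$ h(G) = \min_{S \subseteq V,\; |S| \le |V|/2} \frac{|E(S, \bar S)|}{|S|}, \qquad \lambda_2 = \max_{x \perp \mathbf{1}} \frac{x^{\top} A x}{x^{\top} x}, \qquad \frac{d - \lambda_2}{2} \;\le\; h(G) \;\le\; \sqrt{2 d (d - \lambda_2)}. $$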
So are you clear on what an expander graph is, roughly? Okay. So let's assume now that my constraint graph is an expander graph in that sense, that basically the spectral gap is large -- it's constant. And remember we're talking about d-regular graphs; you can think of d as some large constant or log N or something like that. And the result, which appeared in joint work with Arora, Khot, Steurer, Tulsiani, and Vishnoi, and in a paper with Tulsiani, says that when a unique games instance is highly satisfiable and the graph is an expander, a spectral expander, then there's a polynomial-time algorithm that recovers a good assignment -- a 99 percent satisfying assignment; a half is good enough -- in polynomial time.
So that's the result, and that's what I'm going to talk about today. But why look at expanders? It's a natural object somehow, but why did we start looking at expanders? If you remember the previous table of exact approximability results: if you look at another problem, the sparsest cut problem, then what happens is that the best known algorithm is given by Arora, Rao, and Vazirani in 2004, an approximation of square root log N. However, even assuming the unique games conjecture, there is no hardness known for the sparsest cut problem. And that bothered researchers in computer science for a long time, because everybody was wondering why that is the case, and everybody was trying to prove some hardness for sparsest cut. Okay. But since there is no such thing, let's see why. In fact, with all the known techniques it's unlikely that there's such a reduction from unique games to sparsest cut, because if you start with a unique games instance whose constraint graph does have a sparse cut, and you apply the known reductions, the instance you get also has a sparse cut. And that does not depend on whether or not the unique games instance that you started with had a satisfying assignment or not.
So this is true unless the unique games instance that you started with had some expansion, because then any sparse cut in the new instance would correspond, intuitively, to a good labeling: all the cuts that could originate from the initial graph we started with would not be sparse, because the graph is an expander, so every sparse cut you would find here would just correspond to a good labeling. So there was this off-the-record belief, made more precise in work with Vishnoi showing hardness of sparsest cut assuming that unique games is hard on expanders; there was a belief that expanders were, in fact, the hardest instances. However, as we know now, this is not the case. And let's see why.
So, for the proof that I'm going to present today: we have two proofs, one with semidefinite programming and one with spectral techniques, and I'll present the spectral proof; it's probably nicer and more intuitive to present. So we have this label extended graph, which I hope you all remember what it is. And now let's just pick one little node out of each blob of nodes, exactly one. And take a second and convince yourselves that this set, exactly one node out of each big bunch of nodes, corresponds to some labeling, some assignment to my variables, from 1 to K, or from 0 to K minus 1, I guess. So you can also see it as the characteristic vector of the labeling, which is a K times N dimensional vector that has exactly one 1 in each block that corresponds to a vertex, and 0 otherwise.
And as you remember, this big object was the adjacency matrix of this thing. Just for intuition now, let's look at a perfectly satisfiable game. A perfectly satisfiable game we know how to solve; I'm not saying anything deep here. But observe that for such a game this object, the label extended graph, is just K disconnected copies of the original graph. And it's a very nice thing to look at, because the characteristic vectors that correspond to perfect labelings, ones that satisfy all the equations, are just eigenvectors of this matrix with the highest possible eigenvalue, exactly D.
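In symbols, a sketch of that observation with the notation used here: if sigma is a labeling satisfying every constraint and x_sigma is its characteristic vector, then each of the d edges at a vertex i matches the label sigma(i) with the neighbor's satisfied label, so

$$ A_{\mathrm{lab}}\, x_{\sigma} = d\, x_{\sigma}, \qquad \text{where } (x_{\sigma})_{(i,\ell)} = \mathbf{1}[\ell = \sigma(i)]. $$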
So okay, that's very nice, but I haven't done anything yet. Well, the intuition here is that a 1 minus epsilon game is almost a satisfiable one in a graph-theoretic sense: it can basically be seen as perturbing this graph a little bit and then seeing what happens to the spectrum. And that's in fact what we use in the proof.
So we start with a 1 minus epsilon satisfiable game. Let's go back to our previous example. We can think of it -- some reverse engineering here -- as coming from a completely satisfiable game, where a malicious adversary picked an epsilon fraction of the edges and changed the permutations on those edges, or the equations rather, and now the game became 1 minus epsilon satisfiable, but before it had a perfect satisfying assignment. And in the label extended graph picture, from the completely disconnected K copies of my graph, it would go to something that is not quite K disconnected copies of the graph.
Okay. So how can we analyze this? Well, as we saw before, the K characteristic vectors of perfect assignments of that perfect game were eigenvectors with eigenvalue D, and you can take their span, a K dimensional eigenspace Y of that graph; in there lies all the information about the best possible labeling. But now you don't have a handle on that. You don't know it; you don't know where the adversary picked the epsilon fraction of the edges. But you do have a handle on the eigenspace of the graph you see. So let's take the first few eigenvectors of this graph, and we'll see how few we take in a second. So if you take, say, the span W of all the eigenvectors with eigenvalues greater than (1 minus 200 epsilon) D -- the 200 is just some constant -- then, as I'll show you in a second, all these nice characteristic vectors that we wanted to find have a large projection onto this eigenspace. And note that if we knew any of those vectors, or a vector that is close to them, then we would be done, because we could just read off the assignment from the coordinates.
Okay. So it's basically --
>>: I'm sorry, can you go back to something you just said. You said we should think of the 1 minus epsilon game as a perfect game [inaudible] but only the definition is that --
>> Alexandra Kolla: You change it basically -- you can think of it as changing these graphs, in a matrix perturbation sense.
>>: A question of whether it's vertices versus [inaudible].
>>: When you say --
>> Alexandra Kolla: Just edges.
>>: Edges. So it's a proportion of those constraints, each given by an edge. So if something is 1 minus epsilon satisfiable, you can actually get --
>> Alexandra Kolla: Fix an assignment, fix an assignment and change epsilon of the edges backwards.
>>: Change epsilon -- okay.
>> Alexandra Kolla: There might be a lot of ways to do it, but it doesn't matter which one; they're all isomorphic, right? So now this slide is about convincing you that every such vector has a high projection onto this eigenspace, and it's just very simple math. The colors are all wrong, and it's not my fault, as I disclaim -- always these colors; and what you couldn't see previously, I have in different colors on my laptop. But anyway -- not that it matters now, because you can see this one; but before, you couldn't see.
>>: [inaudible].
>> Alexandra Kolla: Yeah. So, basically, by looking at the quadratic form of this vector, you can convince yourself pretty easily that it has a high projection onto the high eigenspace W of the game that we see, the game that we have a handle on.
>>: That's the inner product?
>> Alexandra Kolla: I mean, you show that there's a high inner product just because the quadratic form is close. It's a very simple calculation. Yeah.
So if we knew this projection, if we knew this vector that's really close to this nice characteristic vector, we would be done: in our case some blocks of it would be all messed up -- we forget about them -- but we could still recover a 90 percent assignment just by reading off the good coordinates, the ones that are close to ones, because these two vectors are close in L2 norm. But how are we going to find this vector? We know this eigenspace, but it's kind of like looking for a needle in a haystack -- looking for a vector in a set of infinite cardinality, basically. The set W has an infinite number of vectors.
Okay. So...
>>: [inaudible].
>> Alexandra Kolla: Yes, but how do you find a particular vector in that space is the question.
>>: So which --
>> Alexandra Kolla: It's W, which is the span of the eigenvectors with eigenvalues larger than (1 minus 200 epsilon) D of my own graph, the game that I see.
>>: Span of eigen spaces?
>> Alexandra Kolla: I use "eigenspace" sort of abusively, in a sense; I mean the span of all those eigenvectors. Okay. So we are searching for a needle in a haystack, but we can do it sort of efficiently if we take a net, an epsilon net of the subspace: discretize it, and look at every single point of your net, and then one of them is going to be close to the vector in W that I was looking for. And then I can do the same -- maybe lose another epsilon -- and read off the coordinates out of each block and recover the assignment just by looking at which coordinate is maximum in its block. And we observe that the running time of this algorithm is just exponential in the dimension of W, because you take an epsilon net and that's the bottleneck.
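Here is a minimal Python sketch of that rounding-and-search step (hypothetical names, not from the talk): decode reads off the maximum coordinate in each block, and the search enumerates a simple grid standing in for an epsilon net of the top eigenspace W. The 'value' argument is the fraction-of-satisfied-constraints function as in the earlier sketch, and the grid has roughly (1/eps)^dim(W) points, which is the bottleneck of the algorithm.

import numpy as np

def decode(vector, n, K):
    """Round a candidate vector in R^{nK} to a labeling: in each block of K
    coordinates (one block per variable), pick the largest coordinate."""
    return [int(np.argmax(vector[i * K:(i + 1) * K])) for i in range(n)]

def best_labeling_from_subspace(basis, n, K, value, eps=0.25):
    """basis: columns span the top eigenspace W of the label extended graph.
    Enumerate a crude eps-grid over W (exponential in dim W), decode each
    point, and keep the best labeling found."""
    dim = basis.shape[1]
    grid = np.arange(-1.0, 1.0 + eps, eps)
    best, best_val = None, -1.0
    for coeffs in np.stack(np.meshgrid(*[grid] * dim), axis=-1).reshape(-1, dim):
        if np.linalg.norm(coeffs) < 1e-9:
            continue
        labeling = decode(basis @ coeffs, n, K)
        v = value(labeling)
        if v > best_val:
            best, best_val = labeling, v
    return best, best_val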
Let's see, for expanders, what is the dimension of W. I mean, ideally, if the dimension were a constant or log N, then we would have a polynomial-time algorithm, and in fact that's what's happening. So remember that the spectral gap is the difference between the first and second eigenvalue of the expander. And we had these two subspaces here. In fact, for expanders, for such games, you can show the spectral gap is the same for the label extended graph. So in a more general sense -- and I'm not sure why this is not showing either, but this was high -- Y is a K dimensional space containing the K characteristic vectors, you can see a similar spectral gap between Y and Y perp, its complement, and then you can basically see W as a perturbed analog of Y.
And things are easy for us after this observation, because a long time ago people who are experts in matrix perturbation theory, Davis and Kahan, were able to show that if that is the case, then the angle between Y and the perturbed version of Y, which is W, is small. And from that you basically show that for every vector here there is a vector in Y that is close, so the dimension of W cannot be larger than the dimension of Y.
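One common way to state the Davis-Kahan sin-theta bound used in this kind of argument (a sketch; the exact constants and conditions depend on the setup): if the perturbed adjacency matrix is A_perfect + E and the eigenvalues spanning Y are separated from the rest of the relevant spectrum by a gap gamma, then

$$ \| \sin \Theta(Y, W) \| \;\le\; \frac{\|E\|}{\gamma}. $$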
Okay. So this was a general, black-box algorithm into which we plug in expanders. We observe, or rather we prove, that these dimensions are bounded by K, and K is at most log N -- I mean, it's enough to consider K up to log N by the statement of the unique games conjecture; it says that there's a large enough constant that depends on epsilon and delta, and taking K up to log N covers all your problems. So the running time is 2 to the K, which is 2 to the log N at most, which is polynomial. So that's the algorithm for expanders. Any questions?
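Spelled out as a rough count (ignoring constants and the exact net granularity): an epsilon net over a D-dimensional subspace has about (1/epsilon)^D points, so with D = dim(W) at most K, and K at most log N, the enumeration takes roughly

$$ (1/\varepsilon)^{O(\dim W)} \;=\; 2^{O(K \log(1/\varepsilon))} \;=\; N^{O(\log(1/\varepsilon))}, $$

which is polynomial in N for constant epsilon.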
In fact, you can go one step further and say, let's plug other graphs in there. Let's start with the notorious graph on which it was proved that the SDP fails, where you cannot hope to do anything with semidefinite programming. It turns out that if we look at the corresponding eigenspace of this graph, then it has polylogarithmic dimension, so the algorithm would be quasi-polynomial -- not exactly polynomial time, but almost polynomial time -- and you can apply this algorithm to other graphs of your choice. And that's the proof.
>>: You need a separation. There's a gap.
>> Alexandra Kolla: I'm sorry.
>>: There's spectral gap.
>> Alexandra Kolla: This proof is not exactly the same; it's not identical. So you have to prove a lot of other things to obtain the equivalent result. I mean, you don't use the sin-theta theorem as it is. But it turns out that it's similar.
And then that concludes my talk except for open questions. Do you have any questions?
>>: This graph -- how do you describe it?
>> Alexandra Kolla: So the graph is basically a quotient of the hypercube, but in the instance it's a little more complicated, with particular permutations. But the point there was that the spectra of Cayley graphs -- the label extended graphs of Cayley graphs have a spectral profile that roughly scales like K times the spectrum, so there are roughly K copies of each eigenvalue. You can bound the eigenspace of the instance by the eigenspace of this graph, which comes from the characters.
>> Yuval Peres: Any questions.
>>: Using this polynomial algorithm, can you break the [inaudible] bound for special classes of graphs, for example?
>> Alexandra Kolla: Sorry, what was your question.
>>: Kind of going back to max cut, can you use these techniques to design better algorithms, better max cut algorithms?
>> Alexandra Kolla: I haven't tried. But I think that --
>>: There's obviously no reason why --
>> Alexandra Kolla: No obvious reason why it should or shouldn't. But, yeah. So, okay, let's see what's open in the area: proving or disproving the unique games conjecture. And then meanwhile you might want to think about something easier.
So what about other special graphs? What about the hypercube in particular? The hypercube is a very easy-to-describe graph: the Boolean cube, 0-1 to the n. The problem with the hypercube is that every known approach fails on such a simple graph. In fact, the spectral approach that I presented before gives sub-exponential time, which is the best we can do for any graph anyway, and we basically don't know what's going on. And I think Subhash himself asked this question first, so I don't want to take credit for asking it; but that's a good question that I'm interested in. And then, how do you compare -- for those who know semidefinite programming, there are these canonical ways to strengthen semidefinite programs, which are called hierarchies. How can you compare spectral techniques with higher rounds of semidefinite programming, and not just the one relaxation that we know fails? And then, what is the best we can do for general graphs? For now we don't know exactly what the bottleneck of this approach is, and hopefully we can improve it by searching more efficiently than just taking a brute-force epsilon net.
[applause].
>> Yuval Peres: Any questions? Comments.
>>: So what do we know about the average case hardness of this [inaudible] is it --
>> Alexandra Kolla: I'm assuming by average you mean --
>>: You said a polynomial sample size distribution for which --
>> Alexandra Kolla: What I said in the beginning actually is --
>>: Random.
>> Alexandra Kolla: That we know that those instances are easy. With high probability random graphs are expanders, so you apply the expander algorithm. And a very new result, not published yet, by myself and the Makarychevs shows that even if you only have some sort of randomness -- if you have, say, a fixed graph, you fix a perfectly satisfiable assignment, and then you randomly choose an epsilon fraction of the edges to change, or you do some other similar step at random -- then it's also --
>>: Except this planted --
>> Alexandra Kolla: It's sort of like a planted model.
>>: So that's --
>> Alexandra Kolla: So that's also solvable in polynomial time.
>> Yuval Peres: Any other questions? Let's thank Alexandra.
[applause]