
>> Yuval Peres: Okay. So this morning we're delighted to have Alexandra Kolla from the University of California, Berkeley, telling us about merging techniques for combinatorial optimization.
>> Alexandra Kolla: Hi. So I have a story to tell, which I am going to tell since it's relevant. Four days ago I left my bag in the car, my car got broken into, and my laptop got stolen. And I lost all my work, which I hadn't backed up. So I had to redo my (inaudible) from a really old version that I had mailed my advisor, and the moral of the story is that you shouldn't leave everything for the last day, because not everything can be done in a day. Right? So, we'll see how it goes. And also: back up. Always back up.
So that's me, and I'm from Berkeley. I'm going to be talking about two areas that are very heavily worked on, in theory and outside theory -- spectral graph theory and semidefinite programming -- and how these two areas are connected, basically, in more ways than one.
So sorry, question: do I have a clicker for the slides, or just the -- that's fine.
Okay. So spectral graph theory basically studies the eigenvalues and eigenvectors of graphs. And the relation between spectra and expansion made spectral techniques a very popular tool for applications in a lot of areas. For example, (inaudible) recommendation makes use of spectral techniques (inaudible). Eigenvalues and eigenvectors of random-walk matrices led to (inaudible) algorithms, and constructions of expander graphs have been very useful and popular in coding, communication, and theory in general.
Also, eigenvalue methods in convex optimization have been used as far back as the Lovász theta function: bounding the chromatic number of a graph, graph partitioning, graph labeling, routing in networks, and endless other applications.
And now, semidefinite programming is basically a convex optimization problem that optimizes over the cone of positive semidefinite matrices. It has been a major tool in fields like automatic control systems, signal processing, communication networks, circuits, statistics, finance, and a long list of others.
And in theory in particular it has been used heavily to create approximation algorithms, mostly for NP-hard problems, and basically sometimes it works great, sometimes it doesn't. And the study of semidefinite programs in theory, such as (inaudible) of semidefinite programs, which I'm going to get back to, is very interesting by itself.
So we have these two very popular areas, and if you look at them more closely, you see that they're connected in more ways than one. In particular, the constraints of semidefinite programs take the form of linear matrix inequalities, and those yield eigenvalue bounds.
On the other hand, most eigenvalue optimization problems can be cast as an SDP. So in my work, basically, what I do is try to merge those two things: to create approximation algorithms for combinatorial optimization problems, and to examine the limitations of SDPs when they cannot give good approximation algorithms for NP-hard problems. All these references are to my work, which is what I'm going to be talking about, some more, some less.
But let's see an example of what I've been talking about so far. What is this duality between spectral and SDP? Take the max cut problem, which everybody probably knows: find the maximum cut in a graph. It's a maximization problem which can be cast as a quadratic integer program, with x_i equal to plus one for one side of the cut and minus one for the other side. So is that clear why that's the max cut problem?
>> Question: Can you maybe specify what the W's are...
>> Alexandra Kolla: It's the weight -- so it's a weighted max cut problem. And I'm just
giving a very intuitive slide of why these two things can be connected. I'm not going to
talk about max cut at all after that. So yeah.
Okay. However, integer programming is NP-hard, so you cannot actually solve max cut this way. The next best thing, or maybe the next best thing, is the SDP relaxation of max cut. This one came up in '95: you relax the condition so that these things are vectors and not plus/minus one variables, and you have an SDP relaxation of max cut. But let's see what happens if you look at the dual.
So we look at the dual, and it turns out that the dual gives an eigenvalue bound on the value of max cut. This eigenvalue is a bound on the dual value, and therefore, by strong duality, a bound on the primal SDP value; it involves the most negative eigenvalue of the graph we're looking at. So you start with an SDP, and by SDP duality you come up with an eigenvalue, and these two things give basically an idea of why SDPs and spectra are very well connected.
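For concreteness, here is the standard chain being gestured at (a textbook reconstruction, not verbatim from the slides): the quadratic integer program, the SDP relaxation, and the eigenvalue bound that comes out of the dual. For a d-regular graph, λ_max(L_W) = d − λ_min(A_W), which is where the most negative adjacency eigenvalue enters.

```latex
% Max cut: quadratic integer program, SDP relaxation, and the eigenvalue
% bound from the dual (L_W is the weighted Laplacian).
\[
\mathrm{mc}(G)
  = \max_{x \in \{\pm 1\}^n} \tfrac{1}{4} \textstyle\sum_{i,j} w_{ij}\,(1 - x_i x_j)
  \;\le\; \max_{X \succeq 0,\; X_{ii} = 1} \tfrac{1}{4} \textstyle\sum_{i,j} w_{ij}\,(1 - X_{ij})
  \;\le\; \tfrac{n}{4}\,\lambda_{\max}(L_W),
\]
% and strong duality says the SDP value equals the optimized eigenvalue bound
% \min_{u : \sum_i u_i = 0} (n/4)\,\lambda_{\max}(L_W + \mathrm{diag}(u)).
```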
So let's see now. In particular, in my work I'm interested in expander graphs: looking at expander graphs through SDPs and through their spectra.
>> Question: Excuse me. For that particular application, you could do it directly, too, right? You don't need to go through the SDP.
>> Alexandra Kolla: The max cut?
>> Question: For the eigenvalue bound.
>> Alexandra Kolla: No, sure. Of course. I'm not going to talk about max cut; you can do it directly, you can do it with SDP. I'm just giving an example of how an eigenvalue of the graph is connected to an SDP -- of how, through duality, you arrive at spectra, and why one would think that for max cut, for example, it would be useful to look at spectra and not something else.
>> Question: (Inaudible) -- the connection between spectra and cuts is older than SDP relaxations, so from one perspective...
>> Alexandra Kolla: That is very true, of course.
>> Question: From one perspective, the connection of spectra to cuts is not -- it is not surprising.
>> Alexandra Kolla: No, I'm not -- no, but I mean, in the second part of my talk I'm going to be showing a similar SDP-duality thing, where from an SDP you get an eigenvalue, and that is why I'm introducing this through max cut. I just thought it was a simpler example to look at. I didn't want to imply that that's the only way to do it.
>> Question: (Inaudible) -- SDPs and spectra are duals of each other, is that the point?
>> Alexandra Kolla: That is the point, by a more broad definition of duality. It's not the only point. It's one of the points.
I actually gave a practice talk, and somebody asked me to give an example of what I mean by duality, and that is why I have this slide there. So I'm sorry, maybe I shouldn't have given a practice talk. (Laughter)
But anyway, that happened after my laptop got stolen, so it's actually one of the things that can be done in a day. Bear with me.
Okay. So I'm particularly interested in expander graphs, and let's look at why, and at what they are. In computer science there are major goals like creating fault-tolerant networks, fixing connectivity problems, speeding up software testing, clustering in social networks, and so on.
So -- what did I do? So, for example, you have a network, and you require that if this highway here fails, then you have other ways to go. That's a fault-tolerant network. And for all those problems, the common denominator that actually facilitates the solution is an expander graph. So that's where expanders come in. Expanders are graphs with large conductance and fast mixing time, and we'll see more.
So, what is expansion? Well, an expander graph has large expansion. Expansion is just the minimum, over cuts, of the ratio of the number of edges crossing the cut to the number of nodes on the smaller side of the cut. And a lot of applications actually look for balanced and sparse cuts, like the ones I mentioned before: segmentation, divide-and-conquer heuristics, and other stuff.
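In symbols, with one standard normalization (the slide's convention may differ slightly):

```latex
\[
h(G) \;=\; \min_{\substack{S \subset V \\ 0 < |S| \le |V|/2}}
\frac{|E(S,\bar{S})|}{|S|}
\]
```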
So it would be great if we could find the sparsest cut. It would be amazing. But we can't, because it's NP-hard to compute. So the next best thing is to find a reasonable approximation for it. And what is a reasonable approximation? Well, algebraic connectivity. Just a quick review: the graph Laplacian is the matrix with the degrees on the diagonal minus the adjacency matrix of the graph, and algebraic connectivity is basically the second smallest eigenvalue λ₂ of the Laplacian.
And it's computable in polynomial time, so that's great. But why would I look at that? Well, there is Cheeger's inequality -- however you want to reference it. It gives a reasonably good approximation of expansion, within a quadratic factor in the worst case, and it also relates to the mixing rate of Markov chains and other nice properties of graphs. So there you go: a lot of the time it's a very good approximation for the sparsest cut. And sometimes during this talk I'm going to be going back and forth between λ₂ of the Laplacian and λ₂ of the adjacency matrix, which are closely related. So if you have any confusion about which one I'm talking about, just please ask, but I'm going to try to be consistent.
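As a concrete illustration, here is a minimal numpy sketch (mine, not from the talk) computing the algebraic connectivity and the sweep cut that spectral partitioning uses; the Cheeger bounds in the final comment are for the d-regular, unweighted case.

```python
# A minimal sketch (not from the talk): algebraic connectivity lambda_2
# and a spectral sweep cut, for a small unweighted graph.
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def algebraic_connectivity_and_sweep(adj):
    L = laplacian(adj)
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    lam2, fiedler = vals[1], vecs[:, 1]   # second smallest + Fiedler vector
    # Sweep cut: sort vertices by Fiedler value, take the best prefix.
    order = np.argsort(fiedler)
    n, best = len(adj), np.inf
    for i in range(1, n):
        mask = np.zeros(n, dtype=bool)
        mask[order[:i]] = True
        cut = adj[mask][:, ~mask].sum()   # edges crossing the prefix cut
        best = min(best, cut / min(i, n - i))
    return lam2, best

# Example: a 6-cycle (d = 2).
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
lam2, h = algebraic_connectivity_and_sweep(adj)
print(lam2, h)   # Cheeger for d-regular: lam2/2 <= h <= sqrt(2*d*lam2)
```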
>> Question: What is it that's a good approximation for the sparsest cut -- an approximation for the isoperimetric expansion ratio?
>> Alexandra Kolla: I'm not saying that it's always a good approximation.
>> Question: Where is the cut?
>> Alexandra Kolla: The -- the ratio, H. The one I defined before?
>> Question: The ratio -- the quality of the cut?
>> Alexandra Kolla: The sparsity of the cut, which I defined in the previous slide.
>> Question: Right. I thought you wanted to find the cut, though.
>> Alexandra Kolla: I'm not sure what you're asking.
>> Question: I thought you wanted to find the sparsest cut, and you wanted to approximate finding the sparsest cut.
>> Alexandra Kolla: I -- I'm approximating the value of the cut. λ₂ gives a bound on how big the sparsest cut can be; it's not that you can find the sparsest cut just by looking at eigenvalues. And it's not always a good tool: the quadratic factor arises in the worst case, where it's really bad. But in a lot of cases these quantities are closer together than the worst-case bound, and then you actually get a good lower bound on the sparsest cut.
>> Question: The sparsity -- I think the confusion is you are not finding the cut -- where the cut is in the actual graph -- you are finding the --
>> Alexandra Kolla: Oh, yeah, yeah, yeah. I'm just saying what that graph would be like. I mean, in particular --
>> Question: (Inaudible) application?
>> Alexandra Kolla: No, in particular, if you do spectral partitioning with the eigenvectors, then you can find cuts that achieve that guarantee. But I'm not -- I don't know if that's your question. Okay.
>> Question: (Inaudible) --
>> Alexandra Kolla: I mean, yes.
>> Question: So you have --
>> Alexandra Kolla: You have an eigenvector, and you can basically find a cut by rounding it. It's not necessarily plus/minus one, because it could be --
>> Question: But -- (inaudible) --
>> Alexandra Kolla: But, yeah, I mean, again, that's not my point. I'm just trying to show why λ₂ is important. Okay. So λ₂ is important. Great.
We all agree. Yeah. Okay. So I'm going to give you how the talk is going to be structured. I'm basically going to look at expander graphs through semidefinite programming, going back and forth between spectra and expansion. In the first part I'm going to be talking about recent work on creating cost-effective expanding networks via local sparsifiers. In the second part I'm going to talk about Unique Games, which are easy on expander constraint graphs. And the third part is just going to be a slide with two of my papers on lower bounds for SDPs -- if I have time, but maybe not.
So: creating cost-effective expanding networks with the use of local sparsifiers. Let's see what that means. By the way, we have a patent with Microsoft on this algorithm -- we have a patent with Microsoft. I just want to say that.
So the problem is improving a graph with a few good edges. You are given a parameter K and asked to pick K edges from a set of candidate edges so as to maximize algebraic connectivity. So we have this graph, the optimal solution adds the (inaudible) edges there, and say this gives maximum algebraic connectivity. In general that's an NP-hard problem, and no approximation algorithm was known before our work. There were various things done, but nothing with a guarantee.
A special case, which I'm going to be talking about in this talk, is: if the graph actually can be made into an expander with K edges, then find them. I'm going to give a constant-factor approximation algorithm for that problem.
>> Question: (Inaudible) -- connectivity?
>> Alexandra Kolla: The second eigenvalue, λ₂, of the Laplacian.
>> Question: So you want to get -- you want to increase --
>> Alexandra Kolla: Maximize λ₂ of the Laplacian by adding K edges.
>> Question: Maximizing the connectivity means getting the largest possible spectral gap.
>> Alexandra Kolla: Yes. So, yeah, you could see it either way. But the point is, I wanted to introduce algebraic connectivity and explain why I care about this problem -- why I don't look at expansion but at algebraic connectivity -- and this problem deals with optimizing that quantity of the graph: you are optimizing algebraic connectivity. And in particular, if the optimal solution here is an expander graph, so we're guaranteed the graph can be made into an expander, I'm going to show you how to basically do it with maybe 10K edges, not K edges. So it's a bicriteria approximation.
The interesting case is when K is not linear in N, but somewhere around there, because if K were linear you could just put in an expander, and if it were constant you could just (inaudible) and find everything. So you need something in between.
Okay. So that's an eigenvalue optimization problem, as we said before, and it has an SDP relaxation. So we solve the SDP, but the optimum is a weighted graph. It could be the complete weighted graph with total weight K: instead of K actual edges I get total weight K, and I could be unfortunate enough to get something like the complete graph, and I don't know how to take K edges out of that. So we need a way to round the SDP and find K edges. Now, SDP rounding has long been a very crucial, very important problem that people have worked on -- random hyperplane rounding and many other methods -- and people keep introducing new techniques for rounding. And what we do is introduce yet another new technique for rounding, which is called local sparsification. What it does is find order-K edges that approximate the best λ₂, or algebraic connectivity, within some constant.
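Here is a hedged sketch of the kind of SDP relaxation being described, in cvxpy; this is my reconstruction, not the authors' code, and the candidate-edge setup and the cap of one unit of weight per edge are illustrative assumptions.

```python
# A sketch (my reconstruction) of the SDP relaxation: maximize
# lambda_2(L_G + sum_e w_e L_e) subject to total added weight at most K.
import numpy as np
import cvxpy as cp

def edge_laplacian(n, u, v):
    """Laplacian of the single edge (u, v) on n vertices."""
    L = np.zeros((n, n))
    L[u, u] = L[v, v] = 1.0
    L[u, v] = L[v, u] = -1.0
    return L

def sdp_relaxation(L_G, candidates, K):
    n = L_G.shape[0]
    w = cp.Variable(len(candidates), nonneg=True)  # fractional edge weights
    t = cp.Variable()
    L_new = L_G + sum(w[i] * edge_laplacian(n, u, v)
                      for i, (u, v) in enumerate(candidates))
    # P projects orthogonally to the all-ones vector; since L*1 = 0 for
    # every Laplacian, L_new - t*P >> 0 is equivalent to lambda_2(L_new) >= t.
    P = np.eye(n) - np.ones((n, n)) / n
    prob = cp.Problem(cp.Maximize(t),
                      [L_new - t * P >> 0, cp.sum(w) <= K, w <= 1])
    prob.solve()
    return w.value, t.value
```

The optimum w is exactly the fractional, possibly complete, weighted graph the talk complains about, which is why a rounding step is needed.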
So...
>> Question: (Inaudible) --
>> Alexandra Kolla: Yes, we have (inaudible) with the optimization. So, a quick review for whoever doesn't know what a graph sparsifier is -- this is a one-slide review. We are asked to approximate a graph G by a sparse graph H: we have graph G, and we want a sparse graph H, with weights on its edges, that somehow approximates G.
And the notion of approximation we use is basically that these two quadratic forms are very close to each other -- or, basically, that the eigenvalues are the same, the whole spectra are the same.
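Written out, the requirement on the sparsifier H is (standard form):

```latex
% H is a (1 +/- epsilon) spectral sparsifier of G if
\[
(1-\varepsilon)\, x^{\top} L_G\, x \;\le\; x^{\top} L_H\, x \;\le\; (1+\varepsilon)\, x^{\top} L_G\, x
\qquad \text{for all } x \in \mathbb{R}^n.
\]
% Restricting x to 0/1 characteristic vectors x_S, for which
% x_S^{\top} L_G\, x_S = w(E(S,\bar{S})), recovers cut sparsifiers.
```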
>> Question: (Inaudible) --
>> Alexandra Kolla: Oh, sorry, yeah. That's Benczúr and Karger with cut sparsifiers, and (inaudible) Spielman and (inaudible) with spectral sparsifiers. Sorry, yeah. I'm really bad at reading the references.
So basically we're asked to find a graph that approximates, for all practical purposes, the spectrum of G.
>> Question: (Inaudible) --
>> Alexandra Kolla: For some practical purposes, true.
>> Question: Spectra --
>> Alexandra Kolla: The spectrum is approximated within one plus or minus epsilon by this definition. But maybe that by itself doesn't mean much, so why would we look at that?
Well, this is actually a generalization of cut sparsifiers from the Benczúr and Karger paper from long back. If we relax that requirement and ask that H approximate the original graph only for cut vectors -- so that for every cut, the weight that goes across the cut is preserved -- that means restricting the requirement to the 0/1 characteristic vectors of cuts, because the quadratic form at a characteristic vector is exactly the weight of the cut. Then you can see somehow where this idea comes from.
And in fact, the paper of Batson, Spielman and Srivastava proved that every graph has a sparsifier with a linear number of edges. Or, if you look deeper into what that says: whatever eigenvalues I want to approximate, I can find order of that dimension many edges to approximate them.
And for example -- yeah, an example: a scaled constant-degree expander, which this picture is not, I just could not draw an expander, gives a sparsifier of the complete graph. If you scale everything by (inaudible), then the whole spectrum of the complete graph is the same as the whole spectrum of the scaled expander. Okay. Great.
So that's a graph sparsifier.
>> Question: What do you mean by scaled?
>> Alexandra Kolla: By an algebraic, you know.
>> Question: What?
>> Alexandra Kolla: Because you need to preserve the total weight, you need to scale the weights. So say a d-regular (inaudible) expander: the complete graph has eigenvalues of order --
>> Question: Edges.
>> Alexandra Kolla: Yeah, yeah.
>> Question: Okay.
>> Alexandra Kolla: So in particular, instead of all the edges across the cut having the same weight, you just need the total to be preserved. So, what are local sparsifiers and what do we do?
As we said before, we're looking for an integer solution, but the SDP optimum could be the complete weighted graph. So that's our SDP, and if we just applied the sparsifiers of Batson, Spielman and Srivastava -- I'm going to call it BSS from now on because it is too long to say -- we would get order-N edges, which is not good enough when K is much smaller than N. Well, this is really bad. Exactly.
So that's really bad. So we come up with the notion of local sparsifiers.
By the way, I'm again really horrible -- I didn't mention this is joint work with Yury Makarychev, Amin Saberi and Shang-Hua Teng, from when I was at Microsoft. Yeah. Sorry about that.
So we introduce the notion of local sparsifiers, and the idea is to relax the sparsification requirement and only ask that the sparsifier be good on a specific subspace. If we only enforce the requirement for x's in some subspace of dimension K, then by a modification of the BSS proof we get a number of edges linear in the dimension of the subspace, and we find order-K edges that approximate my graph in the sense that I want.
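So, in my notation, the local version only asks:

```latex
% Local sparsification: the same two-sided inequality, but only on a subspace,
\[
(1-\varepsilon)\, x^{\top} L_G\, x \;\le\; x^{\top} L_H\, x \;\le\; (1+\varepsilon)\, x^{\top} L_G\, x
\qquad \text{for all } x \in \mathcal{S}, \;\; \dim(\mathcal{S}) = K,
\]
% which, per the talk, can be met with O(K) edges instead of O(n).
```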
That is very good. Now, remember that the problem was: if the graph can be made into an expander, then find the K edges. So which subspace do we look at? There are so many choices of a K-dimensional subspace; it is unclear what you should look at. Well, if my original graph plus the optimal set of edges is an expander, that means λ₂ of the resulting graph is some constant, and by the min-max characterization of eigenvalues, adding K edges can raise at most the first K eigenvalues. You cannot do more.
In particular, the Laplacian of the new edges you are adding has rank at most K, and the SDP is a relaxation of the original problem, so if you restrict to the eigenspace of the first K eigenvectors of the Laplacian of G, you get a good graph with constant algebraic connectivity. So the approximation that we get depends on λ_K of G and on the SDP value.
So if both of them are constant -- I mean, if each of them is constant -- then we're good. In particular, if the graph can be made into an expander, they are constant. However, there is a catch: this works great for expanders, but it is not our general theorem, which I'm not going to present right now. But I need to say what happens if λ_K is horrible, like one over N or one over (inaudible). Then we have a more general theorem that has a different notion of sparsification. Yeah, exactly: so what happens if it is horrible? Yeah, okay, I already said that. I should have sound. I'm sorry.
So then we also have a general theorem, which does not sparsify just on the (inaudible) subspace, but observes the following. We already have G. Now, G may be a really bad sparsifier for G plus the optimum, or for G plus the SDP graph; maybe it's a (inaudible) sparsifier overall, but in some directions it's going to be good. Like, if you look at the largest eigenspace, it's going to be good there; you don't need to do anything.
So by using that intuition, and a lot of technical work which I'm not going to talk about, we get a generalization of that sparsification requirement with a few extra edges, and that works for a bigger class of graphs. And these are the people I've worked with on that, which is -- yeah?
>> Question: You won't tell us which graphs? Can you define --
>> Alexandra Kolla: Oh, yeah. It works for graphs whose λ_K is, for example, of order K over N. And we're currently working on the whole spectrum, but that's work in progress, so I don't want to --
So that's the first part of my talk. If you have any questions about that, that's a good time to ask.
>> Question: So (inaudible), if you generalize (inaudible), what makes this local?
>> Alexandra Kolla: Oh, no, (inaudible) named it, it's --
>> Question: That's what I mean.
>> Alexandra Kolla: It's basically a global sparsifier. So this thing, the general theorem, is not a local sparsifier, it's a global sparsifier, in the sense that it approximates every direction. Before, in the previous slide, when we restricted to the subspace of the first K eigenvectors, I called it local because you look at a restricted subspace.
>> Question: What (inaudible) in the end is an expander, so why is it --
>> Alexandra Kolla: No, for the sparsifier you basically want to choose K edges out of however many edges the SDP gives. So I'm not going to say --
>> Question: (Inaudible) -- edges, I mean, are you saying the sparsifier condition isn't holding for all vectors?
>> Alexandra Kolla: Yeah. (Inaudible) but I'm not requiring it.
>> Question: So in this situation, when you get an expander, which direction -- which directions, because --
>> Alexandra Kolla: Well, when you get the expander, nothing matters, because everything is within a constant. So in that case -- I mean, really, bear with me: I just named it local myself. I just wanted a name for it, and it is not local in the sense of locality that you might have in mind. I just didn't know how to name it, basically. I mean, I have all these crazy names in the paper that --
>> Question: There's nothing wrong with the (inaudible) name, he just wants to understand the (inaudible) --
>> Alexandra Kolla: No, no. The only locality is that you only look at that subspace. You project everything onto that subspace and work as if your Laplacian were a rank-K matrix on that subspace. However, the next slide is more global and actually does not project onto anything; it just proves that there is a way around it.
>> Question: The more general notion of (inaudible) sparsifier is actually (inaudible) --
>> Alexandra Kolla: It is (inaudible), yes. Remind me to change that. I mean, I call those star sparsifiers, the reason being that while we were working we had this star on the board next to the property we wanted. We always called it star, so I have a whole paper with a theorem that says star sparsification -- not kidding, a whole paper. I didn't want to come and say, okay, we have this star sparsification, so it became local sparsification. Yeah, anyway.
So now I'm going to be talking about Unique Games and how they are easy on expander constraint graphs. Well, I mean, nobody conjectured anything specifically for expander constraint graphs, so the right way to say it is that Unique Games are easy on expander constraint graphs: you can solve them there.
But let's take a step back and see what basically is going on with NP-hard problems. NP-hard problems are hard to solve, so the next best thing is approximation algorithms. For example, there are a lot of important problems, like Clique, Max-3SAT and Set Cover, that have almost optimal approximation algorithms: it's proved that basically that's the best you can do, up to low-order terms sometimes.
These were proved using the (inaudible) theorem and parallel repetition, and there's not too much more to do there in approximation. However, there are these other important problems, like vertex cover, max cut and Max-2SAT, whose best algorithms are sometimes folklore, sometimes SDP-based -- I'm not sure how this one works, SDP also -- and the hardness results proved for them have a large gap: they are not matching the approximation algorithms' guarantees.
Are those problems still open? We're going to see in a bit. And there are other problems, like sparsest cut, where we have a square-root-log-N approximation algorithm due to (inaudible), and no hardness other than (inaudible), basically nothing. So this is still open.
So let's see these in the light of the Unique Games Conjecture. For all these problems -- vertex cover, max cut, Max-2SAT -- we wanted to find a way to close the gap either way: either better approximation algorithms, or better inapproximability lower bounds, or better by some definition of better.
So look at the previous table. We have these gaps here, and if you assume what's called the Unique Games Conjecture, these gaps basically close, and for the (inaudible) problems the picture becomes tight. For the uniform version, nothing else is known -- again, unless the Unique Games constraint graph has some expansion. That is just the motivation; I haven't defined Unique Games and I haven't defined the graph.
So let's see, what is a unique game? These are special games, a special case of which has to do with linear equations mod K. You're given a bunch of constraints of the form x_i minus x_j equals some value mod K, where K, the alphabet size, is say a prime, and you're asked to find the maximum number of constraints that can be simultaneously satisfied. So it's a maximization problem, and you can view it as a constraint graph: for each variable you have a big node, and the edges represent the constraints between the variables. That's the Unique Games constraint graph, the underlying graph I'm going to be talking about. But you can also look at another graph, which is called the label-extended graph: each original variable is now replaced by three little nodes that correspond to the labels -- here we have label size three -- and each original edge is replaced by edges between the corresponding labels that satisfy the constraint.
The reason why it's called a unique game, and not just a constraint system, is that this thing is a matching, a permutation: each constraint is a one-to-one function. And that's called the label-extended graph.
Are we clear? So far I'm going to be using these two a lot. So we need to understand --
>> Question: (Inaudible) --
>> Alexandra Kolla: Okay.
>> Question: (Inaudible) --
>> Alexandra Kolla: So we have like this graph and that's clear why that's the Unique
Game -- or the game.
>> Question: But I don't see what game means here, but --
>> Alexandra Kolla: Okay, the name games comes from a different motivation, which is (inaudible) games. I don't want to get into that, so forget the games. This is just a model of what I'm saying: find the maximum number of edges that can be colored, or satisfied. You can see it here: we have three labels -- white, silver and gray; one, two, three. And here we have, before, the identity permutation, so white goes to white, silver goes to silver, gray goes to gray.
So the way to satisfy that original edge is to assign either white-white, silver-silver, or gray-gray; that would be an assignment that satisfies that constraint. So it's just a different model of the unique game, which is like a lift -- in graph theory, I'm pretty sure you know, these are called lifts. In particular, this lift can be used as a model of the unique game, and the nice thing about it is that you can visualize the labelings: if you assign, I don't know, white and white, then this is not a completely satisfying labeling, and you can see why from the picture.
>> Question: So the thing is, the problem, what you have to find in this new structure is -- right? You have to find --
>> Alexandra Kolla: Well, you don't have to find anything so far. I'm just using this as a tool to find what I need to find. But if you bear with me for a couple of slides, you might get a better understanding of why I'm modeling it like that.
>> Question: Okay. But there the problem corresponds to satisfying the maximum number of constraints, and the original question is a bit more (inaudible) -- I think you have to find at most one edge on each --
>> Alexandra Kolla: Sure. I'm going to be talking about that a lot. If you still have questions after that, please let me know.
Okay. So in particular, as I said before, these three edges are satisfied here and this one is not, and the value of the game is three over four. That's just a picture of how you can relate satisfying constraints and cuts. Okay. So that is Unique Games.
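To make the two pictures concrete, here is a small illustrative sketch (mine, not from the talk; the tiny instance and names are made up, chosen to reproduce the three-over-four value above) of a unique game with permutation constraints, the matching edges of its label-extended graph, and the value of an assignment:

```python
# A toy unique game: 4 variables on a cycle, alphabet size k = 3.
import itertools

k = 3
identity = (0, 1, 2)
shift = (1, 2, 0)
# For edge (u, v), pi maps u's label to v's label; the last edge is "corrupted".
constraints = {(0, 1): identity, (1, 2): identity,
               (2, 3): identity, (3, 0): shift}

def label_extended_edges(constraints, k):
    """Each vertex u becomes k nodes (u, i); edge (u, v) with permutation pi
    becomes the matching {(u, i) -- (v, pi[i])}."""
    return [((u, i), (v, pi[i]))
            for (u, v), pi in constraints.items() for i in range(k)]

def value(assignment, constraints):
    """Fraction of constraints satisfied by a labeling of the variables."""
    sat = sum(1 for (u, v), pi in constraints.items()
              if pi[assignment[u]] == assignment[v])
    return sat / len(constraints)

# Brute force over assignments (fine for tiny instances).
best = max(value(a, constraints)
           for a in itertools.product(range(k), repeat=4))
print(best)   # 0.75: at most 3 of the 4 constraints are satisfiable
```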
And what is the conjecture? Basically, it says that this unique games problem is hard to approximate; it's due to Khot, 2002.
Right. In other words, it says that it is NP-hard to tell whether the game is 99% satisfiable or 1% satisfiable. That's great because, as we saw before, it leads to all this tightness in approximation and inapproximability results.
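For reference, the conjecture in the form it is usually quoted (the 99%/1% above is the informal version):

```latex
% Unique Games Conjecture:
\[
\forall\, \varepsilon, \delta > 0 \;\; \exists\, k = k(\varepsilon,\delta):
\quad \text{it is NP-hard to distinguish } \mathrm{val}(\mathcal{G}) \ge 1 - \varepsilon
\ \text{ from }\ \mathrm{val}(\mathcal{G}) \le \delta
\]
% for unique games \mathcal{G} with alphabet size k.
```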
However, nobody can say whether it is true or false. What's going on? It's a notoriously open problem; nobody knows what to do. There have been approximation algorithms for Unique Games, though, and these are some of them. You see, the first five all depend on N or on the alphabet size -- sorry, that is what has been K; I had P for prime, but this is K.
So they depend on K or N, so asymptotically they could be really bad. And what we prove in the paper -- which is joint work with (inaudible), and it is a merge of two papers, that is why there are so many people -- is an algorithm that does not depend on K or N. It depends on expansion. It basically finds a 99% satisfying assignment if the game is one minus epsilon satisfiable and the constraint graph is a spectral expander.
So we have a different kind of dependence than the rest, one which works very well for expanders, and that's what I'm going to be talking about in the rest of the talk.
But let's sit back and ask: why would one even look at expanders? Why not planar graphs, or, I don't know, something else? Well, there is this expansion and sparsest cut business that we were talking about, and also the previous table, which says sparsest cut is a very weird problem: nobody knows how to prove (inaudible) results for it.
So it is unlikely, even assuming the Unique Games Conjecture, that there is a reduction from Unique Games to sparsest cut. Not impossible, just unlikely. The reason being: if you start with a Unique Games instance that does have a sparse cut, then all the (inaudible) reductions that have been used in inapproximability are going to create a sparsest cut instance that also has a sparse cut, and that is not going to depend on whether the Unique Games instance is a yes or a no instance to begin with.
So there is no way, with the kind of techniques that we have, to create such a good reduction -- unless the instance has expansion, which was also observed in a (inaudible) manuscript. There is actually a reduction from an expanding Unique Games instance to sparsest cut, the reason being that the situation above cannot happen if the graph is expanding, so the only good cuts in the sparsest cut instance would correspond somehow to good labelings.
So there was this (inaudible) that expanding graphs were, if not the hardest instances, at least hard instances. That's what was going on: when I was talking to people while I was working on this problem, they were surprised that I thought it was easy on expander (inaudible). So that's the picture.
And now what we show, basically, as I said before, is: when the unique game is a yes instance, a one minus epsilon satisfiable instance, and the algebraic connectivity of the constraint graph is large, then we have a good algorithm that recovers a 99% satisfying assignment.
>> Question: (Inaudible) -- the title, because if you say the Unique Games Conjecture is false in some cases, it means it is (inaudible) false.
>> Alexandra Kolla: Okay.
>> Question: Unique Games are easy on expanders, right?
>> Alexandra Kolla: As I explained before, yeah -- so I should have (inaudible) them, you're right. I said that it's not false on expanders -- yeah, okay.
So why? Because before, I said expanders were believed to be hard. So what is happening? Well, for intuition, let's look at a perfectly satisfiable game. A perfectly satisfiable game is easy to detect: it's easy to determine whether a Unique Games instance is 100% satisfiable. And let's look at that weird graph and pretend it is an expander -- it's not, but pretend it is. As we observed, this is a perfectly satisfiable game: you can pick gray, white, white, or silver, gray, silver, and so on -- there are exactly three perfect labelings. And if the graph were an expander, a cut that corresponds to a cut in the original graph would cut a lot of edges, because it would correspond to an expander cut, which is large.
In particular, the only good cuts -- the cuts that cut exactly zero edges -- are those where you pick out the correct labels for each node. That's why this is a nice picture of a unique game: you can see it in that cut sense.
So you see that if you take a satisfying labeling and put it on one side and the rest on the other side, you cut zero edges. This instance of linear equations has exactly three perfect labelings, and the label-extended graph is disconnected into three copies of the original graph. Okay. So that's a very nice picture, but we already know perfectly satisfiable games are easy to determine.
Well, a one minus epsilon game is an almost perfectly satisfiable one. I'm going to talk about what that means, but intuitively, one would expect that now we are in good shape and could do something about it.
Okay. But then -- I was talking about all these SDPs and eigenvalues, and sparse cuts are hard to determine. I just said sparsest cut is an NP-hard problem, so what do we do? How can you find a cut that corresponds to a good labeling, or something? There's no way.
Well, let's go back to my max cut slide. Maybe it should or shouldn't have been there, but we'll see. Let's look at what we can do now.
Remember, for max cut I showed you a reduction -- well, not a reduction, a duality -- between the SDP value and an eigenvalue. What happens for Unique Games? Well, it is a combinatorial optimization problem; it can be cast as an SDP. And what happens if you look at the dual? Let's see. It turns out that we again have an eigenvalue bound: the dual objective is bounded by some eigenvalue. But which eigenvalue? Well, remember the label-extended graph you were asking me about before? It is an eigenvalue of the label-extended graph.
The matrix of the label-extended graph is just the original adjacency matrix where each entry that was a one, corresponding to an edge, is replaced by the permutation matrix sitting on that edge. So now we have a great tool: SDP duality gives you an eigenvalue bound on that nice matrix, the matrix of the label-extended graph. So now let's go back to the previous slide and look at the game again. We had all these cuts that correspond to expander cuts -- those were big -- and the other cuts, the good ones, that correspond to labelings. So how do we relate cuts with spectra? The eigenvectors of the matrix I was describing before, the matrix of the label-extended graph, that have large eigenvalues are exactly the ones that correspond to perfect labelings. And the other eigenvalues, for expanders, are comparable to λ₂ of the original constraint graph, or to the expansion.
So basically these top eigenvalues, the ones that correspond to assignments, are large if and only if the game is satisfiable -- one for each satisfying assignment it has -- at least for expanders.
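Concretely, in my notation (a reconstruction of the matrix being described, indexing rows and columns by vertex-label pairs):

```latex
% The label-extended graph's matrix:
\[
\hat{A}_{(u,i),(v,j)} =
\begin{cases}
1 & \text{if } (u,v) \in E \text{ and } j = \pi_{uv}(i), \\
0 & \text{otherwise.}
\end{cases}
\]
% For a perfectly satisfying labeling \ell of a d-regular game, the 0/1
% characteristic vector x_\ell (a single 1 per block, at position \ell(u))
% satisfies \hat{A}\, x_\ell = d\, x_\ell: an eigenvector of eigenvalue d.
```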
So now we're in good shape: we have a tool. Let's do some reverse engineering, because I promised before that we'd look at perfect games and then something magic would happen for one minus epsilon games. Well, let's start with the one minus epsilon game. This edge here is a problematic edge; it's not satisfiable. But we can -- what did I do? Sorry.
But we can think of it as coming from a perfect game where some adversary has chosen an epsilon fraction of the edges and adversarially perturbed those edges to make them bad.
So now my original game is a perturbation of the perfect game, or vice versa. We look again at that matrix I talked about before, and we observe, as I said, that there are as many eigenvectors with large eigenvalue as there are satisfying assignments, and a basis for that eigenspace is the characteristic vectors of the assignments: each block has a single one and zeros elsewhere, the one corresponding to the flag that lights up for white, gray or silver. So that's the eigenspace -- well, that's supposed to be the eigenspace. And for expanders, what we prove in the paper is that the remaining eigenvectors have really small eigenvalues. Now let's see what that means. Look again at the one minus epsilon game. That game can now be seen, in a matrix sense, as an epsilon perturbation of the matrix that was here -- the matrix it should have been.
So it is an epsilon-perturbed matrix, an epsilon perturbation of the eigenspace. Well, that by itself doesn't say anything; but the fact that, for expanders, the assignment eigenspace has a large gap from the rest does. There is a K-dimensional, large, good eigenspace, and from K plus one on, it's really bad eigenspace. So there is a big spectral gap. And perturbation theory for matrices tells us that when that happens, the perturbation does not change things by much in the spectral sense.
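The kind of matrix perturbation statement this argument needs is, for instance, the Davis-Kahan sin-theta theorem (one standard form; the paper may use a different variant):

```latex
% Davis--Kahan: if U spans the top-K eigenspace of M, \tilde{U} the top-K
% eigenspace of M + E, and \gamma is the spectral gap separating the top K
% eigenvalues from the rest, then
\[
\|\sin\Theta(U,\tilde{U})\| \;\le\; \frac{\|E\|}{\gamma},
\]
% so an epsilon-size perturbation moves a well-separated eigenspace only
% by about epsilon over the gap.
```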
So eigenvectors of the perturbed matrix are close to eigenvectors of the unperturbed matrix. And what does that mean in particular? It means that for every vector here you can find a vector there that is close in norm. And remember what the vectors here were: they were the characteristic vectors of satisfying assignments. So, in particular, you could, with one more extra step, find those vectors. And here is the extra step: we have a labeling algorithm that (inaudible) eigenvectors of the (inaudible) game and (inaudible). Basically, since K is (inaudible) in the application we're looking at, this is (inaudible) many points, and, as I said before, there is a vector in this epsilon-net that is close to an assignment vector.
And what does close mean? Well, it basically means that this vector is going to have most blocks with a single big entry, and some blocks that are really bad. But in a block with a single big entry, the big entry corresponds to the correct label. So we just read the labels off that vector, and there you go: a 99% satisfying assignment.
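The final reading-off step can be sketched in a few lines (mine, not the paper's code): given a vector close to the characteristic vector of a good assignment, take the largest entry in each block as the label.

```python
# Rounding a near-assignment vector to a labeling (illustrative sketch).
import numpy as np

def round_to_assignment(x, n, k):
    """x has n blocks of size k; return the argmax label per block."""
    return [int(np.argmax(x[v * k:(v + 1) * k])) for v in range(n)]

# Example: a noisy version of the assignment (0, 2, 1) with k = 3.
x = np.array([0.9, 0.1, 0.0,   # block 0: label 0
              0.0, 0.2, 0.8,   # block 1: label 2
              0.1, 0.7, 0.2])  # block 2: label 1
print(round_to_assignment(x, n=3, k=3))   # [0, 2, 1]
```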
In particular, I want to say one more thing, because we have two proofs of that theorem. The previous proof works for linear equations mod P; the other works for the general case. There is yet another connection between spectra and SDPs, which is greatly demonstrated in that proof; I'm not going to talk about it, but it's in the paper if you -- yeah. Anyway, what the other proof does is yet another novel rounding of the SDP solution for Unique Games, the SDP that I showed you before.
You basically take a vector solution for the Unique Games SDP and create a vector solution for the sparsest cut SDP without (inaudible), and if the expansion is big -- constant -- then basically, if you have local correlation between edges, meaning the SDP vectors align across edges, it somehow implies that the vectors are globally correlated, because in an expander a random edge behaves like a random pair of vertices -- which I'd need another half-hour to explain, and that's why I'm not going to explain it.
Okay. So that's basically the end of the Unique Games part of my talk, and if you want to ask any questions, you should do it now.
Okay. This is just one slide about a couple of other things that I have done. One of them is with Lee: we have a construction of an (inaudible) gap for uniform sparsest cut. That (inaudible) hardness and gives some intuition, and probably some way toward proving tighter bounds, which I'm probably going to send to CCC. And then -- for those of you who know SDP hierarchies, we have been looking at the Lasserre hierarchy for a while.
It turns out it is harder than we hoped it would be, but we do have a one-additional-round gap: we developed a whole new technique, basically, to get an integrality gap for one additional round for vertex cover. To my (inaudible), that is with Satiem Collin (phonetic), and we're sending that to CCC as well, I suppose.
And now, if you don't have any more questions, I'd like to talk about what I want to work on in the future and what I've been working on.
>> Question: Well, we'll have six hours of questions later.
>> Alexandra Kolla: Yeah, no, I'm with you. So I've already been thinking about some very exciting problems, mostly related to my work, and I want to share them with you. One of them is resolving the Unique Games Conjecture using spectral techniques, because we know that SDPs fail -- we have bounds; (inaudible) showed SDPs cannot do more than that. And I wonder, since there is another way around it, can we use spectral techniques to solve Unique Games on graphs where the SDP fails? Nobody knows. I've worked on it with people in spectral graph theory, with Joel Friedman, an expert in expanders and such, but we couldn't -- for a day I thought I had solved Unique Games, but then I realized that I hadn't, so that was disappointing. Then, uniform sparsest cut: as I said before, I really want to work on that problem more, and see if I can use the techniques we developed in the paper with Lee to close the gap. Basically, I believe there is a lower bound of square root log N that matches the (inaudible).
Maybe we're going to have to use heavier Fourier analysis, the (inaudible) conjecture and stuff like that, but we still don't know how to put all these things together. And, you know, investigating the power of these notorious SDP hierarchies, for which very, very few lower bounds and very few algorithms are known -- either way, I'm going to be very happy to say whether these strong, powerful (inaudible) SDPs can give you better algorithms or not. Those are some things I've been thinking about, and then there are some exciting problems I plan to think about, which I haven't had time for yet. One came as a natural consequence of our work on sparsifiers.
The question is the following. Give me a graph and describe it with its effective resistance map -- for those of you who don't know it, I can explain one-on-one. So I can describe the effective resistance map: is that enough to specify the whole spectrum? Is it, say, a one-to-one function? Or what properties of the graph can be revealed by that map?
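For those who want the definition concretely, a minimal sketch (mine, not from the talk) of the effective resistance via the pseudoinverse of the Laplacian:

```python
# Effective resistance between u and v, treating edges as unit resistors.
import numpy as np

def effective_resistance(adj, u, v):
    L = np.diag(adj.sum(axis=1)) - adj
    Lp = np.linalg.pinv(L)            # Moore-Penrose pseudoinverse of L
    e = np.zeros(len(adj))
    e[u], e[v] = 1.0, -1.0
    return float(e @ Lp @ e)          # R_eff(u,v) = (e_u - e_v)^T L^+ (e_u - e_v)

# On a path of 3 vertices, R_eff between the endpoints is 2 (two resistors in series).
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(effective_resistance(adj, 0, 2))   # 2.0
```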
Then I'm interested in random behavior -- random graphs' behavior under SDPs and hierarchies, and not hierarchies, and whatever else. I've been looking at the Clique problem, and it turns out it's much harder than we thought, as well. I mean, we have some half-finished work on that, but I don't think I even want to start digging into that right now. And then something that you have all suggested in the past, which I have been telling myself to work on, but I've been busy with all this other stuff: regarding the majority function, and using techniques from (inaudible) combinatorics (inaudible) to get intuition for random matrices, basically -- like little (inaudible) problems and stuff like that. I tried to work on that a couple of years back, but it turns out these two people are better than me. That's a joke.
But, yeah, I would like to explore combinatorial techniques for random matrices and see what information I can get from that.
>> Yuval Peres: Okay.
(applause)