>> Mohit Singh: Hi everyone. We’re glad to have Shayan in the area and also giving a talk over
here now. Again, he has some very, very exciting work on the Traveling Salesman problem and
he'll tell us about effective-resistance-reducing flows, spectrally thin trees, and ATSP.
>> Shayan Oveis-Gharan: Hello. So this is joint work with Nima Anari, who is a PhD student at UC
Berkeley. Thanks for inviting me here. I want to tell you about TSP. So basically the first half
of the talk will be more or less an overview, nothing too deep, which I hope all of you will follow.
Then we'll make it deeper. I'll try to keep it simple for this talk, but there will be a
more detailed talk right after this. So let's start.
So I want to talk about Asymmetric TSP. So we are given a set of cities and their pairwise
distances, which are nonnegative and satisfy the triangle inequality. The goal is to find the
shortest tour that visits all cities exactly once. And we say that this is Asymmetric if the cost
function is not necessarily symmetric. The cost of going from here to UDub may be different
from the cost of going from UDub to here because of the traffic or whatever. So this is
Asymmetric TSP.
So there is a natural LP relaxation for this problem by Held and Karp from '72, and you don't need
to understand this LP. I'm not going to use it; I just put it up for the sake of the talk. The nice
thing is that it's a basically simple LP, and for a very long time people thought that it should be
very useful: it should give us a very good estimate of the value of the optimal tour, and we should
be able to use it to design algorithms. But on the other hand we didn't know much about it for
many years. It's also interesting to study the integrality gap of this LP, which is the worst-case
ratio of the optimum integral solution to the optimum fractional solution.
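For reference, the Held-Karp relaxation in its standard form looks like this (a sketch in standard notation; the slide's notation may differ):

\[
\min \sum_{u\neq v} c(u,v)\,x_{uv}
\quad\text{s.t.}\quad
x(\delta^{+}(v)) = x(\delta^{-}(v)) \ \ \forall v\in V,\qquad
x(\delta^{+}(S)) \ge 1 \ \ \forall\,\emptyset \ne S \subsetneq V,\qquad
x \ge 0.
\]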
So let me just say very briefly what this LP says: in the TSP tour, whenever you enter a vertex
you have to leave that vertex, and for every cut you have to leave that cut at least once. That's
what it says, but you don't need to understand that.
And like I said, here are some previous works for ATSP. As you can see there are quite a lot of
works. There was a log(n) approximation by Frieze-Galbiati-Maffioli and this has been improved;
some of the people in the audience have improved it. Then recently, like five years ago, this was
improved to log(n) over loglog(n), so finally asymptotically better than log(n). This was joint
work I had with Asadpour, Goemans, Madry, and Saberi; and we also know that if the graph is, say,
planar or bounded genus then the problem is really easy, like we can get a constant factor
approximation.
So the belief is that we should be able to do constant in general. So what do we know for the
integrality gap of the LP? We know that it is at least two and it's at most log(n) over loglog(n).
>>: So for planar case what does this mean?
>> Shayan Oveis-Gharan: It means that when you drop the directions it's planar. So basically
you can-
>>: [inaudible]?
>> Shayan Oveis-Gharan: Yeah, you can assume that you have a cost function only on some pairs of
vertices, and for the rest of the pairs the cost is the shortest-path metric. But another way
of seeing it is: forget the cost function and compute the LP solution. If the LP solution is planar
then this thing works. So for whatever cost function, if the LP solution is planar this works.
You get a constant.
So that's it. So here is the result I want to tell you about: we show that for every cost
function the integrality gap is polyloglog(n). So from log(n) over loglog(n) you can basically get
down to polyloglog(n). So this is one of the instances of problems where right now we know how
to bound the integrality gap but we still don't have an algorithm. We're thinking about it, but we
still don't have one. If you're interested to know the exponent of the polyloglog, it's probably
about 10 or something. Is this clear?
So here's the outline of the rest of the talk. I'll start by talking about thin spanning trees and
I'll tell you how they are related to Asymmetric TSP, then I'll tell you about spectrally thin
spanning trees, and then I'll tell you about our approach, the algorithm, and something about the
proof. So let's start with thin spanning trees.
So in this slide I'm going to define it and then I'll tell you how it's related to ATSP. So what is a
thin spanning tree? For the sake of this slide, suppose we have a K-edge-connected graph.
The graph is now undirected and unweighted; you can have parallel edges, but otherwise it's a
simple graph. You have a K-edge-connected graph, and you say a spanning tree is alpha-thin with
respect to G if for every cut you have this: the tree has few edges in the cut relative to G.
>>: [inaudible]?
>> Shayan Oveis-Gharan: I'll say it. So by K-connected what I mean is that there are multiple
ways you can see it. One way is that every cut of the graph has at least K edges; another way is
that the graph has at least K over two edge-disjoint spanning trees, so it's kind of equivalent up
to a factor of 2. Then another way of defining it is that for every pair of vertices there is a max
flow of value K from one to the other. So all of these are properties of K-edge-connected graphs.
So you have a K-edge-connected graph, and you say a spanning tree T is alpha-thin with respect to G
if for every cut it has few edges in the cut: alpha fraction of the edges of the cut. So here is a
simple example. If you have a complete graph and you take this Hamiltonian path then I claim
it's a two over N thin tree. And why is that the case? Look at every cut in the graph and the
corresponding cut in the tree. Note that the cuts are not [inaudible]. I can, for example,
choose every other vertex, so for every cut. Now for every cut in the graph, if I have K vertices
on one side and N minus K vertices on the other, then there are K times N minus K edges in the
cut but the tree has at most two K edges because the degree of each vertex is two. So you get
two over [inaudible].
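Here is a quick numerical sketch of that calculation (my own illustration, not from the talk): sample random cuts of the complete graph and check the thinness of a Hamiltonian path. The 2/N in the talk is the bound for unbalanced cuts; over all cuts the ratio stays O(1/N), at most about 4/N.

```python
# Thinness of a Hamiltonian path in K_n, checked over random cuts.
import random

n = 10
path_edges = [(i, i + 1) for i in range(n - 1)]  # Hamiltonian path 0-1-...-(n-1)

worst = 0.0
for _ in range(5000):
    S = set(random.sample(range(n), random.randint(1, n - 1)))
    tree_cut = sum((u in S) != (v in S) for u, v in path_edges)
    graph_cut = len(S) * (n - len(S))      # K_n has k(n-k) edges in this cut
    worst = max(worst, tree_cut / graph_cut)

print(worst, "<=", 4 / n)  # observed thinness never exceeds ~4/n
```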
So it's easy in the complete graph. In fact, for a complete graph every bounded-degree spanning
tree is a thin tree; it's like a constant over N thin tree. So if you think it's a simple problem,
you can try to find a one over K thin tree in a hypercube. I'll try to get at it in the rest of the
talk. One other thing: there are a couple of things which are interesting here. One is that the
tree is not something special here; you can define this for any subgraph. Now the thing is, if you
have a subgraph which is alpha-thin, any subgraph of your subgraph is alpha-thin, so in fact I can
look at alpha-thin spanning connected subgraphs. This is also interesting, or I can look for
alpha-thin matchings, edge covers, any sort of set of edges.
Another thing is that this object is also closely related to cut sparsifiers. A cut sparsifier of a
graph is a subgraph that sort of approximates the size of the cuts. So you can think of this object
in a sense as a one-sided cut sparsifier. If here we also had a lower bound, say alpha times
(1 minus epsilon) times the size of the cut (S, S-complement), then this would be exactly a cut
sparsifier, because the tree would approximate the value of every cut within a one plus or minus
epsilon factor. Here we don't have that, so we can think of it as a one-sided cut sparsifier. And
for those of you who are familiar with the literature on sparsifiers, the reason that this is not an
easy problem is that the tree must be unweighted. If you are allowed to make the tree weighted then
the problem becomes very easy: you make the tree weighted, and instead of the number of tree edges
in the cut you just add up the weights of the tree edges in the cut, and the problem becomes easy.
Is this clear? Good.
Now let me tell you why I'm saying all this and how it is related to ATSP. So here is something
that we proved with Asadpour, Goemans, Madry, and Saberi: if log(n)-connected graphs have
alpha-thin trees with alpha at most f(n) over log(n), you get an order f(n) approximation. If you
can just prove that such a tree exists, you get an order f(n) bound on the integrality gap; if you
can also find the tree, you get an order f(n) approximation algorithm. So basically, again for this
problem, something which is important is that you need to assume that K is really small, like
log(n) or even smaller. And this is the interesting regime of the problem, especially K equal to
log(n), which is what we have for ATSP.
>>: [inaudible]?
>> Shayan Oveis-Gharan: N is the size of the graph. Yes. Now let me tell you what we know. So
here is what we know, some conjectures and some results from previous work. There was this
conjecture by Goddyn; he came up with this conjecture because of applications of this problem to
nowhere-zero flows. If you don't know what those are, you don't need to know. So he conjectured
that there is a number K, maybe two to the 1000, such that any K-edge-connected graph of any size
N, arbitrarily large, has a .99-thin tree. So basically what you should think is that this
conjecture says even one minus epsilon, .99-thin trees, is something nontrivial. And we still
don't know the answer to this conjecture. So for the rest of the talk, whenever I say a thin tree
without specifying the alpha I mean something of this kind, something less than one.
And the reason this is interesting is that if you have this, you can use it to say that
log(n)-connected graphs have log(n) to the minus epsilon thin trees, so you get better than log(n)
approximation for ATSP. Whenever you want to go from thin trees to ATSP you need to multiply by
log(n), so you get log(n) to the one minus epsilon for ATSP.
>>: [inaudible]?
>> Shayan Oveis-Gharan: Yeah. This doesn't directly imply that; you need to do something. But it
implies it: if you repeatedly apply the same thing you'll get that. And here are some other
results. In the 2009 paper we showed that any log(n)-connected graph has a one over loglog(n) thin
tree, and this gives the log(n) over loglog(n) approximation. This work showed that planar and
bounded-genus graphs have a one over K thin tree; and this last work, which I'll say much more
about for the rest of the talk, says that if your graph is edge-transitive, meaning, think of it
as a symmetric graph like a hypercube, then it has a one over K thin tree. One over K is the
optimal thing; if we could get one over K, one over log(n) for log(n)-connected graphs, we would
get a constant. So I'll say more about this last result later. Any questions here?
>>: [inaudible] having thin tree [inaudible]?
>> Shayan Oveis-Gharan: The proof is just a max-flow min-cut argument. What you do is, if the tree
exists, you set up a flow problem with a lower bound of one on the edges of the tree, and then you
use the solution of the LP to show that this flow can be routed in the graph. So you put a lower
bound of one on the edges of the tree and some upper bound on the rest of the edges using the LP,
and then you use the max-flow min-cut theorem to say that this flow can be routed. It's not that
complicated, once you know the connections.
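Roughly, the constraints look like this (my sketch of the argument, with c some constant and x* the Held-Karp solution; not the exact statement from the paper):

\[
f \text{ a circulation},\qquad f(e)\ \ge\ 1 \ \ \text{for } e\in T,\qquad f(e)\ \le\ c\cdot x^{*}(e)\ \ \text{for all } e,
\]

and Hoffman's circulation theorem says such an f exists exactly when every cut has enough capacity, which is what the alpha-thinness of T against the LP solution guarantees.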
So what do we do here? Here is our result, basically. It says that every log(n)-connected graph
has a polyloglog(n) over log(n) thin tree. In fact, more generally, we show that any K-connected
graph, as long as K is at least log(n), has a polyloglog(K) over K thin tree.
>>: [inaudible]?
>> Shayan Oveis-Gharan: No. I mean yeah. For all we know, every K-connected graph could have a
one over K-
>>: [inaudible]?
>> Shayan Oveis-Gharan: It would get very interesting.
>>: [inaudible]?
>> Shayan Oveis-Gharan: Yeah. [inaudible] doesn't work. So is this clear?
>>: [inaudible]?
>> Shayan Oveis-Gharan: Two over K probably right, but not more than two.
>>: So it’s very possible that you have [inaudible] over K [inaudible]?
>> Shayan Oveis-Gharan: Okay. So is this clear? So for the rest of the talk I'm only going to
tell you about thin trees; I'm not going to tell you about TSP. Forget about TSP. It's basically a
graph theory question, and I just want to tell you about that. Actually, if you ask the computer
about a thin tree, here are some answers I found; I think the one on the right is Google's answer,
the one on the left is Bing's. Anyhow, let's think about thin trees. Who here knows about
Laplacians of graphs?
>>: [inaudible]?
>> Shayan Oveis-Gharan: So this is a very fast slide about Laplacians. I need this to define a
generalization of thin trees, so bear with me for one minute. The Laplacian of a graph is, you can
think of it as a normalization of the adjacency matrix. This is a Laplacian: basically the
Laplacian is the degree matrix minus the adjacency matrix. So this is the Laplacian of this graph.
On the diagonal you have the degrees; off the diagonal you have minus one if there is an edge from
[inaudible]. And then you can write the Laplacian this way: if you let chi_uv be the vector which
is plus one and minus one on the endpoints of the edge uv, then the Laplacian is the sum of
chi_e chi_e-transpose over all the edges. And then there is a natural quadratic form you can
assign to the Laplacian. It says the following: for any vector y, y-transpose L y is equal to the
summation of (y_u minus y_v) squared over all edges. So these are some basic facts about
Laplacians. If anybody wants more details I'd be happy to tell you.
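As a concrete check, here is a minimal numpy sketch of those two facts, L = sum of chi_e chi_e-transpose and the quadratic form (my own illustration, on an arbitrary small graph):

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # a small example graph
n = 4
L = np.zeros((n, n))
for u, v in edges:
    chi = np.zeros(n)
    chi[u], chi[v] = 1.0, -1.0        # chi_uv: +1 and -1 on the endpoints
    L += np.outer(chi, chi)           # L = sum_e chi_e chi_e^T = D - A

y = np.random.randn(n)
quad = y @ L @ y                      # the quadratic form y^T L y
print(np.isclose(quad, sum((y[u] - y[v]) ** 2 for u, v in edges)))  # True
```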
So I'm going to use this quadratic form in the next slide. So what do I want to do? The thing is
that thin trees are nice because they are very graph-theoretic: you don't need to worry about
directions of the edges, costs of the edges, nothing. You just have a graph you want to do
something with. So they're nice in this sense, but they are not nice in another sense, which is
that if I give you a graph and a tree, it's not easy to check, to prove, that the tree is thin with
respect to the graph. If you think about it, it's basically an instance of the sparsest cut
problem; so it's something we can approximate, but it's not an easy object to deal with.
So because of that we want to work with a generalization of thinness which is called spectral
thinness. So what is this? Again, we have a K-connected graph G. Now we say a tree T is
alpha-spectrally thin with respect to G if this is true: the Laplacian of T is at most alpha times
the Laplacian of G, and this is in the PSD sense, in the sense that this matrix minus this matrix
is positive semidefinite. The Laplacian of a tree is just the summation of chi_e chi_e-transpose
over the edges of the tree. So it's what it should be.
So why are we defining this? There are two reasons. One is that it's a generalization of thinness:
if a tree is alpha-spectrally thin it's also alpha-combinatorially thin. Why? The proof is really
easy. Look at any set S of vertices and let 1_S be the indicator vector of the set S. Then I can
write, for every set S, 1_S-transpose L_T 1_S is less than alpha times 1_S-transpose L_G 1_S. And
the left side of this is exactly the number of edges of the tree in the cut (S, S-complement), and
the right side is alpha times the number of edges of the graph in that cut. Just remember the
quadratic form: it is the summation over edges of (1_S of U minus 1_S of V) squared, which counts
exactly the edges whose endpoints are on different sides of the cut. So this is the exact quadratic
form, and then this is true.
So again, spectral thinness implies combinatorial thinness; it's a stronger notion. The other
property is that it's computable in polynomial time: basically all you need to do is compute the
maximum eigenvalue of this object. If I multiply both sides by L_G to the minus one half, on the
right I'm going to get alpha times the identity and on the left I'm going to get this object, L_G
to the minus one half times L_T times L_G to the minus one half. So the max eigenvalue of this is
exactly the spectral thinness. So for every tree it's just an eigenvalue problem. I can compute
it, really understand it, and [inaudible] it's a very good object.
>>: [inaudible].
>> Shayan Oveis-Gharan: Right. So one thing is that L of G is always singular: the all-ones vector
is an eigenvector with eigenvalue zero. But by this I mean: forget about that first eigenvalue and
eigenvector, invert all of the other eigenvalues, and take the square root. So basically, if you
forget the first eigenvalue and eigenvector and project onto the rest of the linear space, you can
take the square root of it there. But you don't need to get into this in the talk. Just assume
that it is not singular.
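Putting that together, here is a sketch of how one could compute the spectral thinness of a tree numerically, using exactly the pseudoinverse square root just described (my own illustration):

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

n = 5
G = [(i, j) for i in range(n) for j in range(i + 1, n)]   # K_5
T = [(i, i + 1) for i in range(n - 1)]                    # a Hamiltonian path

LG, LT = laplacian(n, G), laplacian(n, T)
# Pseudo inverse square root: invert only the nonzero eigenvalues.
w, U = np.linalg.eigh(LG)
inv_sqrt = U @ np.diag([0 if lam < 1e-9 else lam ** -0.5 for lam in w]) @ U.T
# Spectral thinness = max eigenvalue of LG^{-1/2} LT LG^{-1/2}.
thinness = np.linalg.eigvalsh(inv_sqrt @ LT @ inv_sqrt).max()
print(thinness)
```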
So now we have this, and it seems good: we have an object which we can understand really well. Now
let's see how we can use it and whether there are some barriers. The thing is that there are some
barriers with this new definition. There are graphs with no spectrally thin tree. So there are
graphs that have combinatorially thin trees but no spectrally thin trees. So what do I want to do
in this slide? I want to give you a necessary condition for the spectral thinness of a tree, and
then I want to use that to show that there are graphs with no spectrally thin tree. So here is a
lemma. It says that if you have a graph and you have a tree, the spectral thinness of the tree is
at least this object, the max effective-resistance over the edges of the tree. So what is the
effective-resistance? The effective-resistance of an edge UV, you can think of it as a leverage
score of the Laplacian matrix if you're familiar with leverage scores. If you're not, it's just
this object: chi_UV-transpose times L_G inverse times chi_UV. And you remember that chi_UV is just
the vector with one and minus one on the endpoints of the edge. So again, the maximum of this over
the tree edges is a lower bound on the thinness. It's very simple, just two lines. Here's the
proof.
So suppose you have a tree which is alpha-spectrally thin. This means that L of T is less than
alpha times L of G; that I know. Now, on the other hand, for every edge in the tree this is true:
chi_E chi_E-transpose is less than L of T, because L of T is a summation of chi_E chi_E-transpose
over the edges of the tree and these objects are all positive semidefinite, so just one of them is
less than the whole sum. So you have this first inequality. Now all you want to do is multiply
both sides by L_G to the minus one half, and then you get this: on this side you get alpha, on that
side you get [inaudible]. I did a little bit more tricks; the thing is you get alpha times I on
this side, but the left side is rank one, so if you do it correctly you get exactly this. But
essentially, yeah, just multiply by L_G to the minus one half on both sides and the right side will
become alpha times I, and essentially the left side will become the effective-resistance.
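In symbols, the two lines are (my rendering of the slide; Pi is the projection away from the all-ones vector):

\[
\chi_e\chi_e^{\top} \preceq L_T \preceq \alpha\,L_G
\;\Longrightarrow\;
L_G^{+/2}\,\chi_e\chi_e^{\top}\,L_G^{+/2} \preceq \alpha\,\Pi,
\]

and the left-hand side is rank one with its single nonzero eigenvalue equal to \(\chi_e^{\top}L_G^{+}\chi_e = \mathrm{Reff}(e)\), so \(\mathrm{Reff}(e)\le\alpha\) for every edge \(e\in T\).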
So again, what did we prove? We proved that the max effective-resistance is a lower bound on alpha,
the spectral thinness. So now what I want to do is use this to show that there are graphs with no
spectrally thin trees. In other words, I want to show that there are graphs such that in every
tree there is an edge with large effective-resistance. So here is the graph. This is a graph where
every tree has an edge with large effective-resistance. How do I prove that? I show that in this
graph there is a cut where every edge in the cut has large effective-resistance, and because any
tree must choose at least one edge of every cut, it says what I want it to say.
So look at this graph. What is this graph? It's like K parallel paths at the bottom and at the
top, and then K parallel edges in this cut; the distance between each pair of these vertical edges
is N over K. So these bottom edges have effective-resistance one over K. That just follows from
the series-parallel rules for effective-resistance: K parallel edges have effective-resistance one
over K. Now these vertical edges have effective-resistance about one minus K squared over N, which
is very close to one; think of K as log(n) or something smaller. In fact, the reason is that, if
you're familiar with effective-resistance, it's closely related to electrical flows. The
effective-resistance of an edge is the energy of a unit electrical flow from one endpoint to the
other. And for this edge, for example, if you want to send one unit of flow from this side to the
other side, think of this as an electrical network. One way is to go directly; the other way is to
go all the way over and then across. And because it's so long, the energy of any path that goes
all the way to the right is so high that you wouldn't do that. You would just go straight up.
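Here is a small numerical sketch of this example (my own construction with made-up sizes, so the constants are only indicative): the vertical "rung" edges come out with effective resistance close to one, while the path edges are about 1/K.

```python
import numpy as np

k, seg = 4, 20                 # k rungs, spaced seg ~ n/k apart
m = k * seg                    # vertices per side
n = 2 * m

L = np.zeros((n, n))
def add(u, v, w):              # add a weighted edge to the Laplacian
    L[u, u] += w; L[v, v] += w; L[u, v] -= w; L[v, u] -= w

for i in range(m - 1):         # bottom and top paths; weight k = k parallel edges
    add(i, i + 1, k)
    add(m + i, m + i + 1, k)
rungs = [(i * seg, m + i * seg) for i in range(k)]
for u, v in rungs:             # k unit-weight vertical edges
    add(u, v, 1)

Lp = np.linalg.pinv(L)
def reff(u, v):
    chi = np.zeros(n); chi[u], chi[v] = 1, -1
    return chi @ Lp @ chi

print([round(reff(u, v), 3) for u, v in rungs])   # rungs: close to 1
print(round(reff(0, 1), 3), "~", 1 / k)           # bottom edges: about 1/k
```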
So again, what did we show? We showed that there is a graph with a cut where every edge has large
effective-resistance, and that says this graph doesn't have any spectrally thin trees. Now what do
I want to do? I showed you a necessary condition; now I want to show you a sufficient condition
for spectral thinness. This is a nice theorem that came out of the work of
Marcus-Spielman-Srivastava who proved the Kadison-Singer conjecture. It follows from their work
that any graph has a spectrally thin tree with spectral thinness this much: the max
effective-resistance over all edges of the graph; note that it is over all edges. So if I have a
graph where every edge has a small effective-resistance, then my graph has a good spectrally thin
tree. As an application, if you have an edge-transitive graph, remember these are the symmetric
graphs we were talking about, then because of the symmetry the effective-resistances of any pair of
edges are equal, so if it is K-regular the effective-resistance of every edge is about one over K.
So from this you get one over K spectral thinness and one over K combinatorial thinness.
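The computation behind that last claim is, I believe, Foster's theorem: the effective resistances of a connected graph always sum to n minus 1, and edge-transitivity makes them all equal, so for a K-regular graph

\[
\mathrm{Reff}(e) \;=\; \frac{n-1}{|E|} \;=\; \frac{2(n-1)}{nK} \;\approx\; \frac{2}{K}.
\]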
So let me summarize. Here is what I said so far. I talked about four things. We have
K-edge-connectivity, and you want to go from this, ideally, to one over K combinatorial thinness.
This is the thin tree conjecture that we are thinking about. Now what are these bottom ones?
These are the spectral versions. Something that we know is that if the effective-resistance of
every edge is at most one over K, then we can go from here to one over K spectral thinness, and
from that to one over K combinatorial thinness. So if the effective-resistance of every edge is
small, I'm done. But the bad news is that K-edge-connectivity doesn't imply small
effective-resistance; it's the other way around: small effective-resistance implies
K-edge-connectivity. If the implication went the other way, I could go all the way across.
So what I want to show next is our method. We're going to replace this box with something much
more complicated, but something such that we can go from here to there, and from there to
something similar to this, and then onward. So any questions?
So next I'm going to give you sort of a high-level overview of our approach; if we have time then
I'll dig more into it. Before the high-level overview, let me say one more thing. I told you that
in this proof of MSS, if the effective-resistance of every edge is small then we get a thin tree.
But something even stronger is also true. If the average effective-resistance of every cut is
small, so there may be edges with big effective-resistance, but if the average
effective-resistance of the edges in each cut is small, say one over 10, then there's a
.99-spectrally thin tree. So we don't need something that strong; something much weaker also
suffices. This is something that we observed, and it follows from some generalization of MSS.
So with this in mind, now let me tell you how we think about this problem. Here's the general
idea. What's the problem? The problem is that we are given a graph with edges of large
effective-resistance and we want to make it good; we want to somehow reduce the
effective-resistances. So the general idea is to try to symmetrize the graph in the L2 sense, to
reduce the effective-resistances, while preserving the structure of the cuts. Don't disturb the
cuts; cuts are the essential objects for the thin tree, and we don't want to ruin them. Preserve
them, but maybe add some edges, remove some edges, do something to make the graph symmetric in the
L2 sense, in the effective-resistance sense. Before giving you more detail I want to give you one
example. Remember this bad graph that we had. I want to tell you what we can do with it,
[inaudible] the start of our idea. What can we do for this graph?
So here's one idea. We're going to add these red edges. They all have weight K, and each of them
goes from one endpoint of a vertical edge to an endpoint of the next vertical edge, like K of them.
The thing is, when you add these you can make sure that all of the black edges will have small
effective-resistance, because now when you want to route the flow from the bottom to the top,
instead of going through all these long paths you can just jump on this edge, jump up and jump
back, and that saves you a lot.
And the thing is, this edge has weight K; it's much easier to go along this edge than those edges
because its resistance is so small: the resistance of this edge is one over K, the conductance is
K. So you can easily go here and back, and so on and so forth, like three hops, four hops up and
back. Because of that we can show the following; let me call this red-edge graph D. We can show
that in this new graph G plus D, the effective-resistance of every black edge is at most one over
root K. So we are only adding these red edges, which decrease the effective-resistances, and it's
also not hard to see that the value of every cut is preserved: I boost the value of every cut by at
most a factor of two, no more than that, because the red edges can be routed as a flow in the
graph. Is that clear?
So now in this graph every black edge has small effective-resistance and I can hope to do something
interesting. The red edges may have big effective-resistance, but the black edges have small
effective-resistance. The general idea is to do something similar in general. We want to find a
graph D such that the effective-resistance of every edge of the original graph, every black edge,
with respect to D plus G is small, one over 10 or one over K, something small, and such that D is
not much larger than G in the cut sense: the value of every cut in D is less than the value of that
cut in G. By this notation, which I'm going to use for the rest of the talk, I mean that the value
of every cut for the left matrix is less than the value of that cut for the right matrix; that is,
1_S-transpose L_D 1_S is less than 1_S-transpose L_G 1_S for every S. So D has two properties: it
reduces the effective-resistances, and it preserves the value of the cuts.
Now let's see what's going to happen. If this is true, if I add D to G, then I can make sure that
the effective-resistance of every black edge, every edge of G, is good: one over 10, or whatever
the factor at the top is, something small. So the effective-resistance of every edge of G is
small. Now, because D has few edges with respect to G in every cut, this brings down the average
effective-resistance of every cut: although the edges of D may have big effective-resistance, they
are not so numerous in any cut, so the average effective-resistance in every cut is small, and I
can say G plus D has a spectrally thin tree. And when I have a spectrally thin tree for G plus D,
this tree would be combinatorially thin with respect to G. Why? Again, because D is small with
respect to G in every cut. So we just lose a factor of two: if you had a one-tenth spectrally thin
tree for G plus D, you get a one-fifth combinatorially thin tree for G.
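So, schematically, the plan of this slide is (in the cut notation from above):

\[
L_D \preceq_{\mathrm{cut}} L_G
\ \text{ and }\
\max_{e\in E(G)} \mathrm{Reff}_{G+D}(e)\ \text{small}
\;\Longrightarrow\;
G+D\ \text{has an }\alpha\text{-spectrally thin tree } T\subseteq G
\;\Longrightarrow\;
|T\cap\delta(S)| \,\le\, \alpha\,\mathbf{1}_S^{\top}(L_G+L_D)\mathbf{1}_S \,\le\, 2\alpha\,|\delta_G(S)|,
\]

so T is 2-alpha combinatorially thin with respect to G.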
>>: [inaudible]?
>> Shayan Oveis-Gharan: Right. So it may not happen for all edges; I can apply the
Marcus-Spielman-Srivastava, MSS, proof to the small effective-resistance edges. So basically what
this does is allow me to bypass the spectral thinness barrier: there are graphs with big
effective-resistances where we can add something to decrease the effective-resistances and then use
this machinery. So this feels so good as a general idea; basically all you need to do is show that
this matrix D exists for every graph. And if you think about it, this is in fact a convex
optimization problem; I'll say later, if I have time, how you can write the convex program that
gives you the best D. It feels so good, right? When we were at this point we were thinking it
would probably take no more than a month and we should be done. And this was last Christmas, when
we were here. But the thing is, it doesn't work this way: there are graphs with no good D.
So here is a bad graph. There are K parallel paths at the bottom and then these other paths that
jump: the first path jumps like two hops, the next jumps four hops, and so on and so forth, so
there are log(n) levels of paths. And the thing is, although the first-level edges all have small
effective-resistance, these other guys have big effective-resistance. So for every D that you add
to this graph, there is no way to decrease the effective-resistance of these long edges. I'm not
going to tell you the proof of this now, probably in the next talk I'll tell you about it, so this
approach doesn't work. But the good news is that something a little bit weaker works. Remember,
here I wanted to find a D to decrease the max effective-resistance over all edges of G. What this
graph shows is that you cannot do it for all edges, but maybe you can do it for some of the edges.
And that's what we do. So here is the overview of the proof.
So this big object, I'll parse it for you, but this big object plays the role of what we had
before. Before, it was that every edge of G has effective-resistance one over K. We replace it
with this object. What does it say? It says the following: there is a matrix D, so first of all D
is not a graph, it's a matrix, but you can think of it as a graph; it doesn't matter for our
application. So there is a graph D, or matrix D, whatever, and a set F of edges such that F is
K-connected, or omega of K, K over 10, connected, and the max effective-resistance over F with
respect to D plus G is small. Again, here it doesn't matter whether I write D plus G or D; they
are the same for this purpose. So we relax in two senses. In one sense we are not looking for a
graph, we look for a matrix, but that's not all that difficult. The main thing is that instead of
bringing down the effective-resistance of every edge, you bring down the effective-resistance of a
subset of edges that induces a K-connected subgraph of our graph.
Note that the graph F that we get, there could be cuts of the original graph with like N edges
where F has only K edges. So F is not, in a sense, tied to the original graph: it's K-connected,
but it may have much fewer edges in some cuts, or as many as the graph in some other cuts. D is
not a graph, I told you. Now the bad thing here, one other problem that we face when we prove
this, is that this F, as I said, can be very sparse with respect to the graph. Because of that, we
cannot show from this that the average effective-resistance in every cut is small. If F were not
sparse, if F had half of the edges of every cut, then the average effective-resistance in every cut
would be good and we could use the MSS proof. But unfortunately F can be very sparse in some cuts;
it can have much fewer edges. So because of that you have to extend these MSS results and prove
that even with this weaker condition, D plus G has a one over K-tilde spectrally thin tree, where
the tilde means that there's a polylog(K) hidden here. And note that K must be at least log(n) for
this to be true. And then from here to there it's obvious: any spectrally thin tree of this guy
would be combinatorially thin with respect to G, because L of D is less than L of G in the cut
sense.
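In shorthand, the box says something like this (my paraphrase of the slide, with the exact bound on the resistances left qualitative):

\[
\exists\, D\succeq 0,\ F\subseteq E(G):\quad
L_D \preceq_{\mathrm{cut}} L_G,\quad
F\ \Omega(K)\text{-edge-connected},\quad
\max_{e\in F}\ \mathrm{Reff}_{G+D}(e)\ \text{small}
\;\Longrightarrow\;
G\ \text{has an }\tilde{O}(1/K)\text{-thin tree}.
\]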
>>: [inaudible] for a thin tree to be a tree? So what kind of edges would we have?
>> Shayan Oveis-Gharan: Say it again?
>>: So in the bottom right, D plus G has a spectrally thin tree. So in one sense-
>> Shayan Oveis-Gharan: The edges of G-
>>: [inaudible]?
>> Shayan Oveis-Gharan: So the tree is only from G. I choose some edges of G such that the tree I
get is spectrally thin, not with respect to L of G, but with respect to L of D plus L of G, this
other matrix.
>>: [inaudible]?
>> Shayan Oveis-Gharan: It's a tree in G itself, yes. So what I'll do, I want to tell you more
about this, maybe in the next 10 minutes. In the next talk I'll also tell you about the bottom
direction, this one. Before that I want to tell you something weaker. Note that I can go from the
bottom to the top; from here I can also go to there, because basically if I have a one over K thin
tree then the graph is K-connected. So all four of these are equivalent. What I want to tell you
about, in the next slide, is how to go from here to there. This is something much, much easier to
prove than the main theorem; it's just a one-slide proof. And this was our motivation for coming
up with this box: it's not a priori clear why something like this should be true. The thing is,
it's very easy to prove this assuming the strong thin tree conjecture, and then you may hope that
under some weaker assumptions this thing could also be true.
So here is what I want to show. Assume that every K-connected graph has a one over K thin tree. I
want to show that there is a subset F that is root K-connected and a matrix D such that this is
true: the effective-resistance of every edge of F with respect to D is one over root K. So again,
if every K-connected graph has a one over K thin tree, then there is a set F that is root
K-connected and a matrix D, less than L of G in the cut sense, such that the effective-resistance
of every edge of F with respect to D is small. This is weaker than what we prove, right? We prove
essentially one over K; this is one over root K. Still, it's interesting, and the proof is very
easy.
So the thing is, if every K-connected graph has a one over K thin tree, it must have a bunch of
thin trees: if you have a K-connected graph and a one over K thin tree, and you extract it, the
remaining graph is still essentially K-connected, maybe K minus 10 connected. So we can find
another thin tree and extract it, and so on and so forth. We can keep doing this and find root K
disjoint one over K thin trees, maybe two over K thin trees, something like that. So let's call
these trees T1 up to T-root-K. Maybe this is the graph, this is the first thin tree, this is the
second thin tree, and so on. Now I'm going to let F be the union of these trees that I found, and
I'll let D be the following. What is D? Basically, for every edge that I've chosen I want to add
an edge to D with weight root K. So D is root K times the sum of the Laplacians of the trees that
I chose. Here I add these red edges parallel to every edge that I've chosen, and the weight of
every red edge is root K.
Now we can see that D is less than G in the cut sense: I have root K trees, each of them is one
over K thin, so their sum is one over root K thin; when you multiply by root K it is less than G.
So it's easy to see that this D is less than L of G in the cut sense. The other property, that the
effective-resistance of every edge of F with respect to D is small, holds because every edge of F
is parallel to an edge of weight root K, so the effective-resistance is at most one over root K.
It's very easy, right? If you have the thin tree conjecture, the easiest way to find the D is to
just choose many thin trees, add parallel edges to the trees that we've chosen with some big
weight, root K or whatever, and the edges of the trees are the set F that you're looking for.
So I've shown you this direction; now let me tell you, in five to ten minutes, a little bit about
this one. I don't want to tell you exactly this, this is complicated; I want to tell you something
[inaudible]. I want to tell you about the following: instead of finding a set F that is
K-connected, I want to give you a set F that has K over two edges incident to every vertex. So I
just want to make sure that the degree cuts are fine; forget every other cut. So I want to find a
matrix D and a set F such that, basically, the average effective-resistance of the edges adjacent
to every vertex V is small. If this is true then you can simply set F to be the edges of small
effective-resistance adjacent to every vertex V, and then F is good at least in the degree cuts.
Just think that I only care about degree cuts. And this already gives something highly
nontrivial, because combined with this MSS proof it implies that there are one over K thin edge
covers: subsets of edges with at least one edge incident to every vertex. This is already
[inaudible]. Sounds good?
>>: [inaudible]?
>> Shayan Oveis-Gharan: [inaudible]. You just want to make sure that the average
effective-resistances of the degree cuts are good, and then you let F be the small
effective-resistance edges. So now the claim is that this thing is exactly a convex program, and I
will use its dual to analyze it. So it's the following convex program. Note that I don't
necessarily have to solve it; basically all I have to do is find a matrix D that's good, but I
don't know an easy way of doing that. That's why we look at the dual of the convex program, to
show that the optimum D is good. So this is the primal of the convex program. The objective is
the following: it's the minimum of the max over vertices of the average effective-resistance of the
degree cut. So for every vertex we look at the average effective-resistance of its edges, and we
minimize the max. And then the constraint is that the matrix D that we get preserves the cuts: it's
less than G in the cut sense. So let me say two things. First of all, this is a convex
optimization problem: effective-resistance is given by a matrix inverse, which makes it a convex
function of D.
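My best reconstruction of the primal from this description (the slide's exact normalization may differ):

\[
\min_{D\,\succeq\, 0}\ \max_{v\in V}\ \frac{1}{\deg_G(v)}\sum_{e\,\ni\, v}\mathrm{Reff}_{G+D}(e)
\qquad\text{s.t.}\qquad
\mathbf{1}_S^{\top} L_D\,\mathbf{1}_S\ \le\ \mathbf{1}_S^{\top} L_G\,\mathbf{1}_S\quad\text{for every cut } S.
\]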
The second thing to note is that, although this looks small, it has exponentially many
constraints: you need to check this for every cut. So you cannot run this convex program in
polynomial time, but there are variants of it that you can run in polynomial time. I'm not going
to say it today, but there are variants of this which are weaker in a sense but which you can run
in polynomial time.
The other interesting thing to note is that if I drop this C here, so that the inequality becomes a
PSD inequality rather than a cut inequality, then the optimum is obviously L of G: the best thing
you can do is just set D to L of G. The whole point is that this cut inequality lets you choose a
matrix that preserves the cuts but changes the L2 structure of the graph. And the claim is that
for every K-connected graph the optimum of this is little-oh of one. In fact it's one over K, but
I'm not going to prove one over K. So let me not go through the proof of this; we'll see it later.
Let me just finish the talk.
One idea would be to try to do all of that and then analyze it; if you're interested you can come
to the next talk. So this was the main result. Again, we showed that any log(n)-connected graph
has a polyloglog(n) over log(n) thin tree, and that says that the integrality gap of the LP is
polyloglog(n). The main idea was to try to symmetrize the L2 structure of the graph while
preserving its L1 structure, the cut structure. There are lots of tools that we used that I didn't
have time to tell you about: we used this method of interlacing families of polynomials with real
stable polynomials, which I'll tell you about in the next talk, and there are some graph
partitioning tools and high-dimensional geometry that I'll probably also tell you about in the next
talk.
And here are some open problems. One thing that we're thinking about is whether the proof of these
guys, Marcus, Spielman, Srivastava, can be made algorithmic; we would then have a really
interesting application, I think, in approximation algorithms. We are thinking about it, so it
would be interesting. The other thing is that if we can prove C over K thin trees, constant over K
thin trees, that would give a constant factor approximation. We expect that our methods can do
this but we still cannot prove it, so we'll see.
It's also interesting to see other implications of this. These kinds of SDPs that we have are
related to the sparsest cut SDP, and we can get many results out of this; we can, for example, get
bounded-degree spanning trees out of these kinds of techniques. It's interesting to see if there
are some other new implications of these kinds of things.
>>: Questions? Comments?
>>: I had a comment [inaudible]. It was somewhat weird because when you said, if you assume
[inaudible] the constant [inaudible] possible K, then it gives only log(n) to the [inaudible]
approximation.
>> Shayan Oveis-Gharan: Yeah. As far as I know.
>>: But now you’ve proven a bigger result because you said that one is [inaudible] weaker or
something but [inaudible] we still don't know [inaudible] proven sparser [inaudible].
>> Shayan Oveis-Gharan: So the assumption that I have is that you just prove that for some K, for
one K, there is a .99-thin tree. If you prove that for every K there's a one over K thin tree, of
course that gives a constant factor. You don't even need to do that; you just need to say that a
log(n)-connected graph has a constant over log(n) thin tree. Even that is enough.
>>: [inaudible].
>> Shayan Oveis-Gharan: The problem isn't the constant. If you prove it for one constant K, then
the best I know is log(n) to the one minus epsilon. But if you do it for K equal to log(n), then
you get a constant factor.
>>: So Marcus-Spielman-Srivastava gives you a two-sided approximation spectrally, but you seem to
only kind of need one side of this. Is it possible that there's a weaker result that maybe is
easier to [inaudible] is enough?
>> Shayan Oveis-Gharan: So one reason that they give a two-sided approximation is that they have a
very strong assumption: that the effective-resistance of every edge is small. That's very strong.
And if you read their proof, the two-sided approximation is just sort of a byproduct. The main
proof is a one-sided approximation, and there's a simple trick that makes it two-sided. In fact,
the reason that we had to extend it is that it's not two-sided; the main proof is essentially
one-sided. So I'll tell you a little bit more in the next talk, but I don't know if there is
another way.
>>: Thanks, Shayan.