>> Yuval Peres: Okay. Good morning. We're happy to have with us Mohit Singh from
MSR New England, who will tell us about iterative methods in combinatorial
optimization.
>> Mohit Singh: Thanks, Yuval, for the introduction. So in this talk, I'll talk
about iterative methods in combinatorial optimization and essentially try to give
a brief overview of how these iterative methods are a general technique and try
to convince you that they work for lots and lots of problems.
A lot of this work was done jointly with a lot of people, and I'll try to point
that out as and when I talk about the work, okay?
So to begin with, in combinatorial optimization, my view is essentially that
there's a basic dichotomy of problems. There are two kinds of problems. There
are the easy problems and there are the hard problems. The easy problems are the
ones which are polynomially solvable, which are in P, and some examples are problems
like spanning trees, matchings and [unintelligible].
On the other hand, there are the hard problems, which are NP hard problems, and these
are problems like [unintelligible] design, location and scheduling problems,
right? And there is one general technique which has worked very well for both of
these kinds of problems, and that is linear programming.
And usually, how does this technique work? You formulate a linear program for your
problem, and if the problem is solvable in polynomial time, like it's in P, then
the linear program itself is integral. If you solve the linear program, you'll
get an integral solution back, hence you'll be able to solve the problem in polynomial
time. That's one way to show the polynomial solvability of these problems.
And this technique has been very, very powerful.
On the other hand, for NP hard problems, you try to do the same thing. You formulate
a linear program, but you don't expect the linear program to be integral this time,
right? Otherwise, you would solve an NP hard problem in polynomial time. So what you
try to do is then use the fractional solution and try to obtain an approximation
algorithm.
So let me try to briefly define what an approximation algorithm is. An [unintelligible]
approximation for a minimization problem, an alpha approximation to be more
precise, is an algorithm which returns a solution with cost at most [unintelligible],
for a minimization problem. And there have been some very general techniques which
have been developed to carry out this kind of procedure, which takes a fractional
solution and tries to obtain an approximation algorithm.
Some of them include randomized rounding, and [unintelligible], which is not
[unintelligible] but [unintelligible] a general technique, again, to obtain
approximation algorithms. And then there is iterative rounding. And in this
talk, essentially I'll try to show that iterative rounding is a very general
technique which gives approximation algorithms for NP hard problems.
Moreover, actually, I'll try to show you that not only does it give approximation
algorithms for NP hard problems, it's also a very natural way to analyze linear
programming formulations for problems which are in P.
And this analysis actually acts as basic groundwork for attacking
problems which are NP hard. So we'll use this basic groundwork to attack problems
which are NP hard and obtain very strong approximation algorithms. And in some cases,
you even get additive approximation algorithms, which I'll actually tell you about later.
So before I tell you exactly how we extend the iterative rounding framework, let me just
give you what basic iterative rounding is. Actually, even before that, let me
tell you exactly how a typical rounding algorithm works. How do they work?
What the typical rounding algorithm does is you take a problem, you give it to an
LP solver. The LP solver gives you a fractional solution, and
then you have some nice rounding procedure and you get an integral solution
out.
This is how a typical rounding algorithm works.
Okay?
What iterative rounding, which was introduced by Kamal Jain in '98, does is
essentially initially the same thing. You take the problem instance, you give
it to an LP solver, you get the fractional solution. But at this point in time, you
actually figure out that in the LP solution there is some good part and there is some
part which is not so good.
So you just try to work with the good part of the fractional solution and try to round
that. For the rest of the part, you try to ask what part of the problem it is trying
to solve, recurse on [unintelligible] whatever problem is left, and do the whole thing
again, right?
So, for example, there is a very nice result for a covering problem, like [unintelligible],
that you can always find an element on which the linear program places
a value of at least a half. Then that would imply a two approximation,
if in each iteration you can find such an element. So that's the tough part: you have
to show that in each iteration, you can always find an element which is at least
a half. Then you round this up, so in your integral solution you pick [unintelligible].
The linear program was paying for a value of a half, so you pay double the cost.
So that's why this naturally implies a two approximation algorithm, right?
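As an illustration of this pick-an-element-of-value-at-least-a-half loop, here is a small sketch on vertex cover, a covering problem whose LP vertex solutions are half-integral, so such an element always exists. The instance and the use of scipy's HiGHS solver are my own illustrative assumptions, not from the talk.

```python
# Iterative rounding sketch for vertex cover: solve the LP, commit to a
# vertex with x_v >= 1/2, recurse on the residual problem. The rounded
# value at most doubles the LP cost, giving a 2-approximation.
from scipy.optimize import linprog

def iterative_rounding_vertex_cover(n, edges, cost):
    cover = set()
    remaining = list(edges)
    while remaining:
        # LP: minimize sum c_v x_v s.t. x_u + x_v >= 1 per edge, 0 <= x <= 1.
        A_ub = []
        for (u, v) in remaining:
            row = [0.0] * n
            row[u] = row[v] = -1.0      # linprog wants A_ub @ x <= b_ub
            A_ub.append(row)
        res = linprog(cost, A_ub=A_ub, b_ub=[-1.0] * len(remaining),
                      bounds=[(0, 1)] * n, method="highs")
        x = res.x
        # Every remaining edge forces max(x_u, x_v) >= 1/2, so the largest
        # coordinate is at least a half; round it up to 1.
        v = max(range(n), key=lambda i: x[i])
        cover.add(v)
        # Residual problem: edges touching v are now covered.
        remaining = [e for e in remaining if v not in e]
    return cover
```

Each committed vertex pays at most twice what the LP paid for it, which is where the factor two comes from.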
So this is a basic iterative rounding approach. It was nicely applied to a bunch
of problems, especially [unintelligible] network design, which I'll try to mention
later in the talk as well.
So the kind of problems that we work on are actually [unintelligible]
optimization problems. So let me define them by example. Suppose you want to
design a spanning tree, right? And you want to design a good spanning tree. So
what could be a good measure of a good spanning tree?
One could be low cost. The spanning tree has to have low cost. So a low cost
spanning tree: for example, in this graph, let's say the red edges cost much
less than the white edges.
So hence, the [unintelligible] would look like this, right? Because these edges
cost much less. But there could be another notion of a spanning tree which is good,
depending on what measure you're trying to optimize. Like one measure, say, for
you is that you want [unintelligible]. Like this would be a good notion if you're
trying to minimize congestion, right?
So then this is a very bad spanning tree, because this [unintelligible] has very high
degree; rather, you want the spanning tree which looks like this. Right? This is
a much better spanning tree.
And in our work, we try to solve problems with not just
one criterion, but both criteria simultaneously. So that's why they're called
multi-criteria or bicriteria optimization problems.
And formally, the way to work with these problems is that I will give a bound on
one of the criteria and try to optimize the other one. Okay? For example, for
this problem, the way to formalize it would be: I want to find the tree which is the
cheapest tree. It has to be a spanning tree, which is the cheapest tree, and the
degree of each node must be at most a given [unintelligible], right?
That's one way to formulate this problem. And the kind of solutions which we'll
see, because this problem will turn out to be NP hard, are bicriteria approximation
algorithms. So let me just define it.
An alpha, beta multiplicative bicriteria approximation algorithm would return a tree
whose cost is within a factor of alpha and whose degrees are violated by a factor of
beta. So I'll violate both of the criteria, the cost as well as
the degree. So it would be alpha, beta: alpha for the cost and beta for the degree,
for example, for this.
So one more notion which we'll see more of later in the talk would be
additive approximation algorithms. What are additive? Yeah?
>>: To be this parameter [unintelligible].
>> Mohit Singh: Yes, yes, which is specified.
>>: [inaudible].
>> Mohit Singh: So the thing is, what will you be comparing yourself to? For
example, for the cost, you're comparing yourself -- so the alpha is actually against
the solution without the beta. This point will come up later in the talk as well. But
essentially, what are you comparing yourself to, yes.
So what we'll actually also see later are additive approximation algorithms. So for
example, additive in the degree bound would be an algorithm which violates it by
an additive factor, not by a multiplicative factor. So, for example, an
approximation algorithm for this problem which is additive in the degree would
look like this. It will have an additive violation in the degree.
>>: [inaudible] is not so good.
>> Mohit Singh: Yes, yes. Actually, you're seeing why I have a beta plus beta now.
So yes, this is an abuse of notation. Because later on,
we'll also see algorithms which have not only a multiplicative but also an
additive guarantee.
So later on in the talk, for the degree [unintelligible], I'll actually write
down the specific bound that is obtained. And for the cost, I'll write
down the factor. Okay?
>>: [inaudible].
>> Mohit Singh: Yes, that's even more complicated, yes.
>>: Would that have been [inaudible]?
>> Mohit Singh: Yes. Okay. So just to get ahead of myself, one result, for example,
that I'll show in this talk is that essentially this is nearly optimal for this
problem. With the cost, the alpha would be one, so the cost would be the
optimal cost, and the degree bound will be violated by the [unintelligible]. And
we'll see that essentially, because this problem is NP hard, you cannot do better than
that.
Okay? So how do we essentially do this? The main contribution is
that we introduce a really nice technique we call iterative relaxation. The
basic idea is very simple. You relax the degree constraints, in the previous example,
or, in a sense, in your problem you try to relax constraints at each step of the
iterative procedure.
Okay? Relax means just delete them. The thing is, if you delete a constraint, that
is good. It makes the problem simpler. But the hard part is that now this constraint
might be violated, because it's no longer present. Okay?
So then the basic idea, the technical part, would be to actually convince you that it
is not violated by a large amount. It has a low violation. Okay?
And then we show this simple technique is actually very broadly
applicable, and the way we will apply it is that we'll look at problems which are
polynomially solvable, and we'll give new iterative proofs for many classical
integrality results that say those problems are polynomially solvable, and then we'll
extend those integrality results to the multi-criteria optimization
problems.
We'll actually show that it fits in very nicely with the iterative rounding procedure
as well, the iterative relaxation procedure. Instead of the integral
[unintelligible], we can use these iterative rounding algorithms as well. And in
combination with the relaxation procedure, it gives us very strong guarantees for
a bunch of problems.
So broadly, we'll try to apply this to problems which have this kind of form. We will
have some base problem, and we'll add some side constraints, and this is the kind
of problem that we solve, the constrained problem. Right?
For example, the spanning tree problem, the bounded degree spanning tree problem
which you saw here, the base problem would be the spanning tree, and then the side
constraints would be the degree constraints.
And essentially, this kind of picture will come up again and again in the talk. For
most of the problems, I'll try to cast [unintelligible] in this form, and the
technique would be essentially applicable if the problems look like this.
You'll have some nice base problem, and the base problem should have a very nice
structure; and then if the side constraints also have some nice structure, then we
would be able to apply this technique. Okay?
So one of the problem classes that we apply this technique to is degree bounded network
design problems. One of the examples we saw earlier was the spanning tree.
But there could be more general networks we could try to design: [unintelligible] a
tree, [unintelligible] a forest, or [unintelligible] network design, which is a
general class, or directed classes like [unintelligible] subgraph.
You may not know all of these, but that is not very important; I will
mainly talk about only a couple of the problems from this list.
But the main point is, actually, you can try to study these problems with a single
criterion, which could be either the cost or the maximum degree, right? Like we
defined for the spanning tree: either the cost or the degree. Or you could
try to study both of these criteria simultaneously.
For example, if you study just the cost, the spanning tree problem is polynomially
solvable, so that's a factor of one; this problem is polynomially solvable.
The rest of the problems have very good approximation algorithms with a small
cost factor of two. A very nice work is the [unintelligible] network design
result given by Kamal Jain, and this actually is the first application of the
iterative rounding algorithm, right? A lot of later work [unintelligible] built on this
work.
If you now study just the degree, when you don't worry about the cost, then
also there are some very nice works, especially the work of [unintelligible], which
works for spanning trees and [unintelligible], and which showed that actually you can
get an additive [unintelligible] approximation.
Again, this is the actual degree of the tree which is returned, the
[unintelligible] or the spanning tree, right? So, [unintelligible] notation,
these are approximation algorithms. But the main point is that for the
degree, there were only very special cases where results were known which were
quite tight. Otherwise, the results were very, very weak for more general network
design problems.
And if you try to [unintelligible] both criteria, the results would get even
worse, right. And the only major result that was known was a result of Goemans,
who essentially showed that there is an approximation algorithm which
[unintelligible] of optimal cost. The cost factor is one, and the degree of the
tree returned is B plus 2. So there's only an additive violation of two.
Okay? And for more general problems, the results were not at all very good. And in
our work, essentially, we give a very general technique, so that this single technique
applies to essentially all network design problems with degree constraints.
So for the spanning tree, we'll prove this result, and I'll talk about the simple proof
of this result later in the talk.
For more general network design problems, we show, for example, for
survivable network design, that essentially we can get the same cost factor
as you can get without the degree constraints. And for the degree constraints, we
can actually improve it.
And for the problem, the [unintelligible] get the [unintelligible] factor
[unintelligible] criteria. This was first obtained jointly with
[unintelligible]. Okay?
And in more recent work, we could actually show, for at least some of these, that you can
even improve them to an additive factor for the degree. Right? So the cost factor
would be the same. Obviously, you cannot hope that the cost factor can improve
without improving these first. So the cost factor is essentially the same as what you
would get without the degree constraints, and for the degree constraints, you get
the smaller additive violation.
This is jointly with [unintelligible].
The situation for [unintelligible] is a bit different. Maybe at the end of
the talk, I'll try to give a brief overview of that as well.
One thing to note is that these results actually also improve the approximation
algorithms for the problem where you just worry about the degree and don't even worry
about the cost. Even then, we actually give some of the first additive
approximation algorithms for these problems. Okay? Any questions?
>>: Are there any lower bounds on how well you can do? Would it be possible to
get B plus 1?
>> Mohit Singh: Yeah, so it is possible to get B plus 1. We do show, for example -- so
yes. So we do show, for example, that if you just worry about the degree bound,
you do get B plus 1 or half [unintelligible] tree. But there is a reason
why our technique will not give it -- the linear program we work with, actually you
cannot. It has an [unintelligible] B plus 2.
Using that, we could possibly get even a B plus 2, but this essentially becomes very
tedious. Like, instead, we could easily argue B plus 9 in four pages, but to go
from B plus 9 to B plus 3, it took us about 15 more pages and it was a lot
more tedious. But we certainly cannot get B plus 1 using our approach.
So we cannot match this result, for example. But in more general cases, I do not
believe you can avoid a factor of K. You have to have some factor of K over here.
So I cannot prove it, but that's what I believe.
>>: So [inaudible].
>> Mohit Singh: [Unintelligible] algorithm, yes. There should be
[unintelligible], of course. Of course, you can always look at all the feasible
solutions and select the best one, yeah, okay?
So this kind of technique is not only applicable to degree bounded network
design problems, and I'll try to show you that we can also apply it to some more general
structures. For example, taking the base problem to be the spanning tree again,
instead of adding degree constraints, you could add more cost constraints.
Then we have a multi-criteria network design problem, the multi-criteria spanning tree
problem, where we improve the result of Ravi and Goemans. Similarly, we
could have more general structures like [unintelligible] and [unintelligible]; if
we add extra constraints to them, we'll get a constrained [unintelligible]
problem and a constrained [unintelligible] problem, and we will get approximation
algorithms for these problems as well.
For example, via bipartite matching, we show a simple iterative proof that
bipartite matching is polynomially solvable, and that directly gives us the
[unintelligible] result of [unintelligible] about scheduling on unrelated
parallel machines, which you can think of in the same framework, as [unintelligible]
multi-criteria bipartite matching, right?
I'll try to talk about this result possibly at the end.
Okay?
So in most of this talk, essentially, I'll try to give a new proof of this
result, a very simple algorithm and a very simple proof of this result, which will
illustrate the method, and then I'll talk about the extensions of these problems,
and some of these other problems as well, and how you obtain them, okay?
So let's come back to the first problem that we started with, the minimum bounded
degree spanning tree problem. So remember, the problem is that you are given a graph
with costs and some [unintelligible] B, and the task is to find a tree which has
minimum cost and maximum degree at most [unintelligible], okay?
For example, if B [unintelligible] two, then this black tree is not a feasible tree,
because the degree bound is violated, right? While this black tree is a feasible
tree. As you can see, for B [unintelligible] 2, essentially the trees I'm searching
for are [unintelligible] paths. The only spanning tree which has maximum degree 2 is a
[unintelligible] path. So I'm searching for a minimum cost [unintelligible]. And
this problem is NP complete, so hence the basic problem that I'm studying is actually
NP complete.
So, actually, one more thing: I cannot approximate this problem if I don't violate
the degree constraints, because the minimum cost [unintelligible] problem is not
approximable, right?
This is because I do not assume anything, any structure, about the costs on the
edges. I do not assume the [unintelligible] inequality or that they come from a
metric, right? So it's a general cost function on the graph.
So this problem has actually been very well studied, and there has been a series of
works. I'll not go into detail on all of them, but one work which I'll point out is
the work of [unintelligible], which studied the case when the graph does not have
costs, so you're just looking for a spanning tree which has the smallest maximum
degree. And they gave a plus one additive approximation algorithm. This work was
building on an earlier work of [unintelligible], which gave a very nice
min/max condition, but didn't give a polynomial algorithm. So they converted this
min/max condition into a polynomial algorithm.
There has been a series of works and, more interesting than the guarantees, it was
the kind of techniques: there were a number of different techniques being applied to
these problems, starting from [unintelligible], like [unintelligible]
relaxation and some flow based techniques.
And a series of works went on, but the main result that came was actually the
work of Goemans, who showed that there is a polynomial algorithm which gives
optimal cost and a degree bound which is only two more than what was required,
right.
So it's only an additive violation of two. And here he also made a conjecture that his
algorithm could possibly be improved: there should be a polynomial algorithm which
returns a tree of [unintelligible] optimal cost and maximum degree at most B plus 1.
And in joint work with [unintelligible], we actually showed that this conjecture
is true: there exists a polynomial algorithm for this problem which returns
a tree of optimal cost, so the cost is no more than the cost of the optimal, and the
maximum degree is at most B plus 1, okay?
Actually, I put the inequality because my cost could actually be strictly less than the
[unintelligible]. Mainly because the [unintelligible] I am comparing myself to is
the tree, the cheapest tree whose maximum degree is at most B. So I'm violating the
degree, but my cost I am comparing to the previous one, the tree whose maximum
degree is at most [unintelligible]. In that sense, I am cheating, but this is
essentially the best result that you can get. If you want the maximum degree to be
B, you cannot get any guarantee on the cost.
So essentially for the rest of my talk, I'll try to give a very sharp proof of
the integrality of the spanning tree problem: the spanning tree problem itself
can be solved in polynomial time by a linear program. I'll give you a proof of
that, and then we'll just show how that implies a very simple B plus 1 result. And
then I'll conclude with the extensions.
So let's come to the basic picture we have in mind. We have the spanning tree,
which is our base problem, and we have extra constraints, which are the
[unintelligible] constraints, right? This is the constrained version of the
problem.
So how do we -- let's try the linear program. Let's first try an integer program for
this problem. We'll introduce a variable for each edge, saying whether it's going to be
in our spanning tree or not, and we're going to minimize the cost of this tree. That's
the objective function.
And one constraint we are going to add is that the total number of edges you pick
is exactly the number of vertices minus one, right? This holds clearly for
any tree. Any tree has exactly N minus one edges. N is the number of [unintelligible],
right?
There's another set of constraints that I'm going to add. For every subset of vertices
[unintelligible], we're going to say the number of edges you pick inside the subset
is no more than the size of the subset minus one. These are called [unintelligible]
elimination constraints, because this constraint is [unintelligible] also in
the [unintelligible] problem.
And these are clearly true for any spanning tree, because the [unintelligible] of the
spanning tree is a forest, right. And so this inequality must be satisfied, and
this holds for every subset. And we [unintelligible] this linear program.
Indeed, right now it's an integer program, because we have integrality
constraints. We remove these integrality constraints and get this linear program,
the linear programming relaxation.
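The formulation just described can be written down concretely for a small graph. This is an illustrative sketch, not code from the talk: the 4-vertex instance, the edge costs, and the use of scipy's HiGHS solver are my own choices.

```python
# Spanning tree LP: minimize c.x subject to
#   x(E) = n - 1                              (pick n-1 edges in total)
#   x(E(S)) <= |S| - 1  for every S, |S| >= 2 (subset constraints)
#   0 <= x_e <= 1
from itertools import combinations
from scipy.optimize import linprog

n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (0, 2)]
cost = [1.0, 2.0, 3.0, 10.0, 4.0]

# One inequality row per subset S of vertices with |S| >= 2.
A_ub, b_ub = [], []
for k in range(2, n + 1):
    for S in combinations(range(n), k):
        A_ub.append([1.0 if (u in S and v in S) else 0.0 for (u, v) in edges])
        b_ub.append(len(S) - 1.0)

# Cardinality constraint x(E) = n - 1.
A_eq, b_eq = [[1.0] * len(edges)], [n - 1.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(edges), method="highs")
# res.x is an optimal extreme point; for this polytope it is integral
# (Edmonds' theorem) and selects the minimum spanning tree.
```

With these costs the minimum spanning tree is the path 0-1-2-3 of cost 6, and the LP optimum agrees with it.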
The amazing thing, actually, which Edmonds showed, is that if you just solve this linear
program, you will always get an integral solution. You do not need to place
integrality constraints; you will always get an integral solution if you
solve this. The optimal solution, you can always assume, is an integral
solution.
Each variable would be zero or one at one of the optimal solutions.
>>: [inaudible].
>> Mohit Singh: There would be an optimal solution. Vertex solutions would
[unintelligible], but there would always be some optimal solution which will
be integral. The vertex solutions will always be integral, yes.
Okay? But before that, you might ask, since this linear program is actually
exponential in size, how do you even solve it in polynomial time? There are a couple
of ways. One is that you can write a [unintelligible] compact formulation; that was
done by Wong. And there is a [unintelligible] separation oracle, that is, you can
separate over these constraints in polynomial time; that was done by Cunningham.
And the [unintelligible] of separation and optimization would imply that you can solve
this linear program in polynomial time, okay?
What I am going to show you briefly is a new proof that this
linear program is integral. The proof is actually not really new, but the basic
idea behind it will let us get [unintelligible] new iterative proofs
for the constrained versions of these problems.
So what is it going to be? It's going to be a very simple rounding proof, actually.
What are you going to do? Just solve the LP to an optimal extreme point.
So let's not worry about what an extreme point is for now. We're just going to
solve the LP and get the optimal solution, right. If you find any edge whose
variable has a value of zero, that means the
linear program is telling you not to pick this edge. So you should just not
pick this edge and throw this edge out, right?
And then I claim that you just return all the edges which are left, okay? All
the non-zero edges, and I claim this set of edges would be a minimum spanning tree.
And why should that be true? Firstly, I want to say that if the number of edges
returned is [unintelligible] minus one, then it must be an MST.
Why should that be true? Firstly, because I know the sum of the X values on all these
edges is the number of vertices minus one, because this is a constraint you placed.
Hence, each of these edges must have a value of one, because the total value on
these edges is the number of vertices minus one, and each one cannot have
more than one.
So each of these edges must have a value of one. So the cost of this solution, E,
right, is actually the same as the cost of the MST.
So as long as I can say the number of edges is the number of vertices minus one,
the solution must be a tree. It's a feasible integral tree.
The only thing we need to prove is that the support is actually small: the number of
non-zero edges which you have is only the number of vertices minus one.
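Spelled out, the counting step just made is a one-line calculation (my notation: $E^*$ for the support of $x$, $n$ for the number of vertices):

```latex
n - 1 \;=\; \sum_{e \in E^*} x_e \;\le\; \sum_{e \in E^*} 1 \;=\; |E^*| \;=\; n - 1,
```

so every inequality is tight and $x_e = 1$ for each $e \in E^*$ once $|E^*| = n - 1$.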
And that's a very simple argument once you use the fact that it's an extreme
point, and the basic idea you use is the uncrossing technique, which is a very nice
technique.
So what you want to prove is that X being extreme implies the number of
edges is at most N minus one, okay?
So what does extreme mean? It is essentially a corner point of the
[unintelligible]. It's an extreme optimal solution, right? And one basic fact is
that if X is extreme, then there must be as many linearly independent tight constraints
as the dimension of your [unintelligible], okay?
For us, the dimension is actually the number of variables. The dimension of the
[unintelligible] is the number of variables, which is the number of edges.
Remember, the linearly independent [unintelligible] constraints for us are the
subset constraints; we enforce a constraint for each subset, that's the
[unintelligible] constraints, and tight would mean that they hold at equality,
right?
And a standard [unintelligible] argument would imply that this linearly independent set
of tight constraints can be chosen to be a laminar family, right. And what does a
laminar family mean? Remember, there is a constraint for each set, right?
And if I take these sets and look at the inclusion properties, a family
would be laminar if any two intersecting sets in the family are such that one
of them must be contained in the other. Okay?
So there's essentially a very trivial inclusion relationship between all the sets in
the family. Such a family is laminar, and the basic fact from uncrossing you
can assume is that the linearly independent set of tight constraints can be chosen to
be a laminar family, okay?
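For completeness, the uncrossing step behind this fact is the standard supermodularity calculation (not spelled out in the talk): for two intersecting tight sets $A$ and $B$, using $x \ge 0$,

```latex
(|A|-1) + (|B|-1) \;=\; x(E(A)) + x(E(B))
  \;\le\; x(E(A \cap B)) + x(E(A \cup B))
  \;\le\; (|A \cap B| - 1) + (|A \cup B| - 1),
```

and since $|A \cap B| + |A \cup B| = |A| + |B|$, both inequalities hold with equality, so $A \cap B$ and $A \cup B$ are also tight. Repeating this lets one trade crossing tight sets for nested ones.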
And there's a very natural tree corresponding to every laminar family, right? I won't
go into more detail about this tree, but now the whole proof is quite
straightforward. The number of variables you have, which is the same as the dimension,
is the number of edges, and the number of linearly independent tight constraints is the
size of your laminar family.
And you know, because this extreme point solution is the unique solution to these
tight constraints, that these two quantities must be equal. And now a very simple
inductive argument shows, because your family is laminar, that the
number of sets in it is actually bounded by N minus one.
And that actually proves it; this is the worst case, right? You have a set of
size two, then a set of size three, and so on. Okay?
That exactly proves what you wanted to prove: that the number of edges is at most
N minus 1, and that gives you the proof that the linear program is actually integral.
And that proves Edmonds' theorem that the spanning tree
[unintelligible] is integral.
Now we'll try to use this proof to actually give the B plus 1 result, okay? And
the proof is going to be quite straightforward. Remember the same picture
again; now we have the extra constraints coming in. I'll generalize the problem
a bit further. I have essentially assumed that the degree bounds can possibly be
distinct for each vertex, right. It's a Bv. It's only a generalization: I can
always take Bv to be the same for each vertex.
There's one more way I generalize this problem: I'm assuming that the degree
constraints are not given for every vertex, but only for a subset of them, W. I can
always take W to be V if I want them for every vertex, right?
So let's write the linear program for this problem. The linear program is quite
straightforward. We have the same linear program as for the spanning tree; we only
introduce these new degree constraints. The new degree constraints say the
number of edges you pick at the vertex v is at most Bv.
And you impose these constraints for the set W. The W are the [unintelligible]
at which you want to impose degree bounds.
>>: Is it possible to [inaudible] compact presentation for this?
>> Mohit Singh: Yes. For the compact representation, you essentially want to think
of the spanning tree as follows: the spanning tree is the smallest connected
subgraph in which you can send a unit of flow from every vertex to every other vertex,
right. So you would introduce flow variables — those are the new variables. You'll
have flow variables for every pair, saying that x can support a unit of flow from
every vertex to every other vertex.
Actually, you can make it more compact by observing that you don't need to send flow
between every pair: you can fix one root and just send flow from every other vertex
to this root.
And that would introduce about N cubed variables, and you'd have about N cubed
constraints. The cut formulation only has about N squared variables. So the variables
blow up, but the number of constraints goes down quite a lot.
So let's try to write the B plus 1 algorithm in the same way as we did
for the spanning tree. Remember, the way we did it was: we solve the LP to get the
optimal solution, and we remove all the edges that have a value of zero, right? That's
the natural thing to do, because the linear program is at least telling us not to
pick these edges — they already have a value of zero — so we should never pick them.
And then we return this set of edges E. Last time we argued that this would be a
spanning tree, but this time we know it need not be a spanning tree, right?
So then comes the crucial step — this is the basic relaxation step which we
introduce — and it is the following. Suppose you have a vertex which has a degree
constraint present on it, and its degree — not the LP values, just the number of
edges currently incident to it, after removing a bunch of edges — is no more than
one more than the degree bound, right? Then you should remove this degree constraint.
Because in the worst case, what might happen is that all of these edges are included
in your solution. Right. But even then, the number of edges you would include is
at most one more than your degree bound, right? And you just keep doing
this step whenever it applies, right?
And that is the whole algorithm. You just repeat while the set of vertices for which
you still have a degree constraint is not empty. You keep doing this, and at the
end, you return this set of edges, E. This is the whole algorithm.
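As a sketch in code — my own rendering, not from the talk: `solve_extreme_point` stands in for a black-box LP solver returning an extreme point of the degree-constrained spanning tree LP, and the mock solver and toy instance below exist only to make the loop runnable.

```python
def iterative_relaxation(solve_extreme_point, edges, b):
    """Iterative relaxation for degree-bounded spanning tree (sketch).

    edges: iterable of (u, v) pairs; b: dict mapping vertex -> degree bound.
    Returns an edge set whose degree at v exceeds b[v] by at most an additive 1.
    """
    E = set(edges)
    W = set(b)  # vertices whose degree constraint is still imposed
    while W:
        x = solve_extreme_point(E, {v: b[v] for v in W})
        E = {e for e in E if x[e] > 1e-9}  # drop LP-zero edges
        # relax the constraint at any vertex with at most b[v] + 1 incident edges
        relaxable = {v for v in W
                     if sum(1 for e in E if v in e) <= b[v] + 1}
        if not relaxable:  # the counting argument rules this out for extreme points
            raise RuntimeError("stuck: no vertex is safe to relax")
        W -= relaxable
    return E


# Toy illustration: a path 0-1-2-3 whose unique spanning tree is already integral,
# so the mock "LP solver" just sets every surviving edge to 1.
def mock_solver(E, bounds):
    return {e: 1.0 for e in E}

edges = [(0, 1), (1, 2), (2, 3)]
b = {0: 1, 1: 2, 2: 2, 3: 1}
tree = iterative_relaxation(mock_solver, edges, b)
```

On a real instance the LP would be re-solved each round, with the support shrinking as constraints are removed.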
Last time, the algorithm I presented was a lot more complicated, and I could
only prove B plus 2. But in more recent work, we realized this is the right
algorithm — once you look at it, it actually looks like the right algorithm for this
problem as well. It's a very simple algorithm: you're just removing a degree
constraint at each step, right? But you make sure that you only remove the
constraint at vertices for which you can guarantee that the degree
bound will be violated by at most an additive one, right?
The only things I have to show you are that the guarantees of this algorithm are
correct — that it returns a tree of optimal cost whose degree bounds are violated
by at most an additive one. But more importantly, I'll have to convince
you that this algorithm doesn't get stuck. It should not happen that W is not
empty and I cannot actually carry out the third step, right? That is the
more crucial thing to prove.
So let's first see why the solution returned is a tree. Suppose the
algorithm ran correctly, and W is empty, right? If W is empty, essentially
what I've got is the linear program for the minimum spanning tree, because I do not
have any of the degree constraints left. Hence, essentially, every extreme point
solution of that linear program is integral.
Hence, if W becomes empty, the set of edges would be a spanning tree, right? So it
would be a tree.
Secondly, why is it of optimal cost? What am I doing at each step? Essentially,
I'm removing zero-valued edges, but that doesn't hurt the cost at all. The other thing
I'm doing is removing a bunch of constraints, right? That can only improve the
cost of the final solution — I'm only reducing my cost at each step. So this is okay.
The third guarantee is the degree bound. Why would the degree at each
vertex be at most B plus 1? That's quite clear, right? The degree bound
only gets violated once I remove the degree constraint, and I remove it only when I'm
sure it will not be violated by more than an additive one, right?
So the guarantees are quite okay. So the main thing to prove is actually that this
algorithm will actually work. Right? So what does that mean? I have to show
you that if W is not empty, then there exists a vertex in W whose degree in the
support is at most BV plus one. This is quite important.
The important thing is: why am I making progress each time? I can give you some
intuition. The reason I'm making progress each time is that as I remove
constraints, the number of constraints defining my LP extreme point is going down,
so I know that the next time the support will be smaller.
A smaller number of constraints means a smaller number of non-zero variables
in my solution.
So slowly, the number of variables which have a value of zero will go down, and
those edges will just be removed one by one.
>>:
[inaudible].
>> Mohit Singh: Sorry — the number of variables which have a value of zero will go
up, sorry, yes. And those edges will be removed one by one.
>>:
[inaudible]?
>> Mohit Singh: Yes. So the basic [unintelligible] will go up, because I'm
removing some constraints, right? Hopefully. That's the [unintelligible]. So,
of course, I'll --

>>: [inaudible] I don't see [inaudible].
>> Mohit Singh: Well, intuitively, the basic idea is this. If you work out the set
of tight constraints — let's just go back to the linear program. What constraints
are defining it? There are the spanning tree constraints, which we've already
seen are a very simple set of constraints: there's a laminar family sitting over
here. And there are the degree constraints.
As the degree constraints get removed, the only constraints left to define
the linear program in your basis are the constraints coming from the laminar
family. So slowly, these constraints will start defining your optimum solution.
And as the degree constraints are reduced, more and more of the laminar constraints
will start kicking in. So that's the basic intuition, and we'll see
exactly why this works in the formal argument, okay?
So this is essentially the basic lemma we need to prove, and the proof
will go as follows. You have a set of tight constraints. Because we assume
there are no zero variables, the constraints of the form x_e equals zero will not
be tight — if one of them were tight, you would have thrown that edge out. Right?
So the constraints which remain are, first, the spanning tree
constraints, and you know an independent set of them defines a laminar family, as
before.
And then there are the degree constraints. But the degree constraints are not there
for all vertices. They're only there for the subset W, right? And that's very
important: there are fewer of them than what you actually started with.
And you know the number of tight constraints has to equal the number of variables.
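In symbols — my rendering of the extreme-point counting being set up here: at an extreme point $x$ with $x_e > 0$ for every remaining edge, the tight, linearly independent constraints number exactly $|E|$, and they split into a laminar family $\mathcal{L}$ of tight spanning-tree constraints and a set $T \subseteq W$ of tight degree constraints:

```latex
|E| \;=\; |\mathcal{L}| + |T| \;\le\; |\mathcal{L}| + |W|.
```

The contradiction to come is a counting argument showing $|E| > |\mathcal{L}| + |W|$.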
>>:
[inaudible].
>> Mohit Singh: Because you started with W equal to V. And in your iteration --

>>: [inaudible].
>> Mohit Singh: Okay, let's say originally, but that's what we want to show. In
the iterative algorithm, right, you start with W equal to V, and slowly your W goes
down. So right now, you have some W.
>>:
Correct.
>> Mohit Singh:
Yes, yes, and this is that [unintelligible].
19
>>:
Sorry.
>> Mohit Singh: That's okay. Yes. So right now you have degree constraints only
for the subset W. You don't have them for all the vertices.
>>:
Maybe, maybe not.
>> Mohit Singh: Maybe, maybe not, yes. And the variables will be the -- you
have a variable for each edge. Right. And you know the number of tight constraints
must equal the number of variables.
Okay. And basically, the contradiction will be this: we'll actually show the number
of edges is more than the size of the laminar family plus the size of W — the
number of degree constraints. And that will be the main contradiction.
>>:
When you say W, are you talking only about the tight ones now?
>> Mohit Singh: Okay. So I am actually cheating a bit, yes. Right now I'm
assuming W contains only the tight ones — I'm assuming they are all tight. But the
argument goes through even if you don't assume all of them are tight.
So let me, for simplicity, assume all of the degree constraints are
tight, okay? The argument goes through in general, but it gets a bit more complicated —
not much, just one case which is different, where some vertices in W are not tight
at their degree constraint, okay?
And we're going to show the number of edges is strictly more than the size
of the laminar family plus the number of vertices in W, if this condition is not
satisfied, right?
And here's another intuition for why this is true. Suppose this condition is actually
not satisfied. That means this inequality is strict at every vertex — the degree
of each vertex is large. Then there should be a lot of edges in your graph, and
that's exactly what you want to prove.
Right, we will actually prove that the number of edges is much larger, because
these conditions are not satisfied, and the bound we are aiming for is this
bound. We are going to prove that the number of edges is larger than this bound.
Okay? And the basic idea is a counting argument. Most of these proofs go by some
counting argument, and the basic idea of this counting argument is the following.
We'll give one token to each edge, so the number of tokens we give out is the number
of edges. Then we'll redistribute those tokens, and we'll collect them so that we
collect one token for each member of the laminar family and one token for each vertex
in W, and we'll show that some tokens are still left over — that gives
us the strict inequality.
That's the basic idea of all of these proofs: some counting argument, a
charging argument. This was also true of Jain's work on iterative rounding, okay?
So I want to give you a simple argument — a lot of this builds
on the work of [unintelligible], who simplified our proof. Let me briefly describe
how the argument works. There is one token for each edge. How is this token
distributed?
Say this edge is UV, with an LP value of XUV. We give (1 minus XUV)/2 to vertex U
and (1 minus XUV)/2 to vertex V. Giving to a vertex means giving to its degree
constraint, right? Finally, we have to give out tokens from the edges and collect
them for the degree constraints — the vertices in W — and for the laminar family.
So this is how the tokens are distributed from the edges to the vertices in W, okay?
So this uses up 1 minus XUV of the token, and I had one, so XUV is left,
and that I give to the smallest set in my laminar family containing both
endpoints. Okay?
And this is how the token is distributed. I have to show you the two properties:
that we can collect one token for each member of L and one for each vertex in W.
And the basic claim is for each vertex in W — and this is the basic condition why
it should work. I got (1 minus XE)/2 tokens for each of
these edges. If you sum this up, the number of tokens you get is the degree of this
vertex minus the sum of XE over all of these edges, divided by two, and that sum is
at most the degree bound, because the constraint is present.
And you know this numerator must be at least two. Because if it were smaller than
two, then you would have removed the degree constraint — that's exactly your
condition for removing the degree constraint, right? That's exactly the condition
you're looking for.
So if the removal condition is not satisfied, the numerator must be at least two,
and the number of tokens for this vertex is at least one. And that is very crucial.
That's the basic reason why this works, right? And so now you get one
token for each degree constraint.
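Written out — my rendering of the calculation just described, for a vertex $v \in W$ whose degree constraint is still present:

```latex
\sum_{e \in \delta(v)} \frac{1 - x_e}{2}
  \;=\; \frac{\deg_E(v) - x(\delta(v))}{2}
  \;\ge\; \frac{\deg_E(v) - b_v}{2}
  \;\ge\; \frac{2}{2} \;=\; 1,
```

using $x(\delta(v)) \le b_v$ (the constraint is present) and $\deg_E(v) \ge b_v + 2$ (otherwise the relaxation step would already have removed the constraint).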
The argument for the laminar family is actually not very difficult, but I'll
omit it, because it's a very simple argument. Essentially, you write the constraint
for the set, look at the constraints for each of its children, and combine them,
and you get that the number of tokens for this set is equal to some integer, which
has to be non-negative. And it can't be zero, by a short argument.
Most of this, as you can see, really does not use the fact that all the vertices
in W are tight or not, right? That's the point I glossed over earlier. Where that
point mainly comes in is that finally, I have to not only give one token to each
constraint, but find some extra token as well. Right.
And for this extra token: if there were some vertex with an incident edge which was
not in W, then I would get some tokens at that vertex which I haven't used for any
degree constraint, right. So the only problem arises when W is equal to the whole
vertex set, and all the constraints are tight. Right? And then you argue that the
laminar family constraints and the degree constraints cannot all be tight, right,
and you get a contradiction from there.
So there is one simple case I'm glossing over, but that's not a very tough
case. It's a very simple case. So that proves this result, okay. So
basically, the recipe is essentially this: we had the base formulation
and we added some side constraints. That's how we applied it to the spanning tree:
the side constraints we added were the degree constraints, okay?
And that's how we got this result. We could try to apply the same methodology
to the spanning tree but with a different set of side constraints. A different
set of constraints, for example: you're not only given a cost function which
you're trying to minimize, but also many other length functions on
the edges, and you want the length of your tree under the i-th length function to
be no more than a bound L_i.
Okay? And there can be many i's, but a constant number of
them. In joint work with [unintelligible] and Ravi, we actually showed that
this technique can be used to give a polynomial-time approximation scheme:
for any fixed epsilon, we can get a one plus epsilon approximation for
this problem as well.
Previously, this was known only for two costs — one cost function and one
length function. This was work of Goemans and Ravi. So this generalizes that, and
it gives a very simple proof of this fact.

>>: [inaudible].
>> Mohit Singh: Okay, yes. No, actually, that is a very nice problem. In
some sense, a diameter problem, right? You want a minimum-cost, minimum-diameter
tree.
>>:
[inaudible].
>> Mohit Singh: Yes. So the thing is, right, essentially, I cannot write a
small set of linear constraints for bounding the depth. I would have to introduce
extra variables — how do you actually measure the depth using just your current
set of variables? It's not easy to do that.
So that is why it doesn't fit in this framework very nicely. Yes.
The other problem I mentioned, for example, is the Steiner tree, where we could add
degree constraints as well. The problem is that the Steiner tree problem is not
as easy as the spanning tree problem.
The spanning tree problem, without the degree bounds, was polynomial-time solvable;
it was quite easy. For the Steiner tree problem, that is not true:
the Steiner tree problem, even without degree constraints, is NP-hard. So this
is exactly where we use the iterative rounding algorithm. Instead of using the fact
that the solution will always be integral, we use the fact
that there is always a half edge.
If you solve the LP and take the optimum solution as an extreme point
solution, there will always be an edge which has an LP value of at least a half.
That is used to give a 2-approximation, and combining it with the iterative relaxation
procedure for the degree constraints — this was actually the first work in
this line of work, with [unintelligible] and Mohammad Salavatipour — we obtained the
same factor-two approximation for the cost, and we got a 2B plus 3 guarantee for the
degree, right.
And that's quite intuitive to see. The cost argument is essentially the same: we
are only picking half edges, so it remains the same. For the degree bound, right,
when you pick a half edge, the LP only uses up half a unit of the degree, but you
pick that edge completely, so you're paying double in the degree bound compared to
what the LP is paying.
And finally, you just drop the degree constraints when only three or four edges are
left. Right. And that's where you get the 2B plus 3 bound, right. So this is
quite natural to see; the main thing is then to work out the details, and
it works out.
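My rendering of the arithmetic behind these two guarantees (a sketch, not the talk's own notation): every rounded edge $e$ has $x_e \ge 1/2$, so each unit of cost and of degree charges at most two units of what the LP pays:

```latex
c(F) \;\le\; \sum_{e \in F} 2\,x_e\, c_e \;\le\; 2\cdot \mathrm{OPT}_{LP},
\qquad
\deg_F(v) \;\le\; 2\, x(\delta(v)) + 3 \;\le\; 2 b_v + 3,
```

where the additive 3 comes from the final relaxation step, which drops the constraint at $v$ once only three or four edges remain there.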
In more recent work with Lap Chi Lau, we improved this result: we still get
the factor two in the cost, which is the same, but on the degree bound we only
get an additive violation.
So this becomes more tricky now, because you cannot just pick any half edge: once
you start picking a bunch of half edges, you will violate the degree bound
by a factor of two as well. So you have to be very careful, and we prove essentially
a result about the structure of the extreme points of this Steiner tree LP: you
can actually make sure that the half edges you pick are only incident to
vertices which have very low degree bounds.
Once your degree bound is very low, you're only doubling a small
number, right. Initially, your degree bound could be a hundred, and you'd be
paying two times a hundred plus three; instead, you only round once the degree bound
has gone down to about two, so you're only doubling two rather than a hundred. That's
the basic intuition, but then you have to show that this kind of property holds
for every extreme point of the linear program.
Okay. We applied the same kind of technique to bipartite matching. Bipartite
matching is also polynomial-time solvable, and we give an iterative proof
of that. And again, we can add multiple criteria: not just a cost function, but
multiple length functions, and you want the minimum-cost matching whose length
under the first length function is at most capital L1, whose length under the second
length function, L2, is at most capital L2, and so on.
>>:
[inaudible].
>> Mohit Singh: So essentially it's the same: you just have a linear length function
on the edges, just like the cost function, right. You have some cost on the
edges; but instead of minimizing with respect to a single cost function, you're
dealing with many of them. You distinguish the cost from the others and minimize
the cost subject to each length being at most some given bound.
And again, you can show that there is a polynomial-time approximation scheme
for this problem as well.
And we also studied a scheduling problem that fits in
this framework very nicely, because the scheduling problem can be thought of as
a kind of bipartite matching. I have some jobs, I have some machines, and I want
to find a schedule — essentially, I'm matching jobs to machines. In some sense,
it's a matching problem with two sides. But it's not purely a matching problem:
I also have makespan constraints — the time I take on each machine is bounded.
And applying the same procedure, we recover the result that there is a
2-approximation for scheduling on unrelated machines.
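My rendering of the assignment LP behind this kind of result — the talk doesn't write it out, so take the exact form as an assumption: with processing times $p_{ij}$ and a makespan guess $T$,

```latex
\sum_{j} x_{ij} = 1 \ \ \forall \text{ jobs } i, \qquad
\sum_{i} p_{ij}\, x_{ij} \le T \ \ \forall \text{ machines } j, \qquad
x_{ij} = 0 \ \text{ if } p_{ij} > T, \qquad x \ge 0.
```

Iteratively rounding an extreme point of this LP assigns every job while overloading each machine by at most one extra job, giving makespan at most $2T$.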
So this recovers the whole result from [unintelligible]. Okay? And you can apply
the same kind of techniques to more general structures, like matroids. For
matroids, you can add degree constraints — this will generalize the bounded-degree
spanning tree algorithm, actually. And --

>>: [inaudible]?

>> Mohit Singh: Huh?
>>:
[inaudible].
>> Mohit Singh: Yes, so there you have to be very careful about how you define
the degree for a matroid. The way we define it: we have a matroid, and on the
ground set of the matroid we're given a hypergraph, and for each hyperedge we're
given a bound — from this hyperedge you cannot pick more than two elements, from
this other hyperedge you cannot pick more than that many elements.
For the spanning tree, each hyperedge corresponds to the set of edges incident
at a vertex — there would be one hyperedge for each vertex and so
on. In that case, it generalizes this bounded-degree spanning tree problem.
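In symbols — my own rendering of the problem as described: given a matroid $M$ on ground set $E$, a hypergraph $H$ on $E$, and bounds $b_h$, one asks for a minimum-cost base with bounded overlap with every hyperedge:

```latex
\min\ \Big\{\, c(B) \;:\; B \text{ a base of } M,\ \ |B \cap h| \le b_h \ \ \forall h \in H \,\Big\},
```

and the bounded-degree spanning tree problem is the case where $M$ is the graphic matroid and the hyperedges are the edge sets $\delta(v)$.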
Interestingly, we found that nearly the same kind of algorithms do work, but there
is a very real distinction between them.
I'll not go into the details of that, but you cannot generalize all the results
from spanning trees to matroids. Spanning trees are very special matroids, and they
do have some very special properties we could use. For general matroids,
those properties do not hold, so we do not quite get everything we were
initially aiming for.
Similarly, we can try to add degree constraints to a submodular flow
problem, and the same kind of results work as well. Okay? I'll not go into the
details of that work; this is work with [unintelligible] and Lap Chi Lau.
Okay. So, as I showed you earlier, these are most of the results.
This is what I talked about, and the same kind of techniques actually work
for more general problems as well. Right? For directed [unintelligible]
problems, we cannot get additive approximation algorithms, and there are reasons to
believe you will not be able to get a bicriteria additive approximation
or an additive [unintelligible]. But in more recent work, [unintelligible] actually
have shown that when you only worry about degree bounds, you can improve those
results, and they do get an additive factor over here.
But they show that you cannot improve these results; you can only improve
those. Right? Okay. So most of these results appeared in the first paper, the one
with Lap Chi Lau and Mohammad Salavatipour, and there's a lot with Lap Chi Lau.
Currently, a survey with Lap Chi Lau and Ravi, which is essentially almost complete,
is going to come out very soon, explaining all the basic problems
that we worked on.
Okay? There have been some further applications of the iterative method by
others. For example, [unintelligible] showed very nice results: iterative
rounding again works for a kind of generalized assignment problem.
I'll not define this problem, but they show that this kind of iterative algorithm
gives very nice approximation guarantees for those.
And the idea of relaxation also works for integral multiflow [unintelligible].
This was shown by [unintelligible] very recently as well.
And in much more recent work — one thing I want to say is that the matching problem
somehow turned out to be much more difficult to fit in this framework. The general
matching, not the bipartite matching. The integrality proof turned out to be
much, much harder than what I initially thought. For the
matching problem, the linear program again is integral — that was shown by Edmonds
in '65 — but he showed it's integral by giving a polynomial-time algorithm, not by
a direct argument.
Trying to show it directly turned out to be much harder, and I believe this
result possibly has some implications for the conjecture by Goldberg and Seymour
on edge coloring in multigraphs. There is a conjecture there in a similar
spirit to the Goemans conjecture for the spanning tree problem:
that the linear program actually gives a plus-one guarantee, and that
is quite open. And I believe this iterative proof might have some
implications for this conjecture as well. That's something that I'm currently
investigating.
Okay. To conclude: essentially, we developed a relaxation technique
in a quite broad sense, and the way we applied it, we obtained iterative
proofs for many classical integrality results, we extended these integrality
results to multicriteria optimization problems, and we often obtained additive
approximation algorithms.
We also extended the iterative rounding method by using this iterative relaxation
idea simultaneously with it. And hopefully, I've convinced you that this
is a more general algorithmic technique than it was believed to be
earlier, and hopefully it will have many further applications as well, okay?
So I'll end my talk and take some questions.
>>: Go back one slide.

>> Mohit Singh: Sure. This one?

>>: Yeah. So [inaudible].
>> Mohit Singh: So this is actually the generalized assignment — the
maximum generalized assignment problem, for [unintelligible]. They were trying to
maximize the [unintelligible] you have. I'm not absolutely sure about all the
details, but you're trying to assign, let's say, jobs to machines, and you're trying
to maximize some [unintelligible] function rather than minimize. And they show this
kind of iterative rounding procedure works for these kinds of problems as well, to
get much better guarantees than before.
>>:
[inaudible] conjecture?
>> Mohit Singh: Of this conjecture? So there are actually very strong indications
that this conjecture is true. The closest result: you can actually
show there is a coloring within a one plus little-o-of-one factor of the
lower bound.
So the claim is that -- actually, let me tell you this problem as well. The
basic starting point is the edge coloring theorem of Vizing from '65: if
you're given a graph G which is simple, the edge chromatic number chi prime
of G is at most delta of G plus one — the maximum degree plus one. You clearly need
at least the maximum degree many colors, right, and it's at most one more than that.
So in that sense, it's a plus-one approximation: you know chi prime of G is at least
delta of G and at most delta of G plus one.
The conjecture is, in some sense, to generalize this
to multigraphs. I hope you can see this. So let me just write this over here.
The conjecture is to generalize this to multigraphs, right. But the thing is,
as you can see, delta of G is no longer a good bound there, and that can be shown:
this bound is tight for the triangle. If I have a triangle, delta of G is two, but I
need three colors, right — each edge must get a different color.
But if I have K parallel copies of each edge — a multigraph — then
delta of G goes to 2K, because that is what the maximum degree is, but
chi prime of G goes to 3K, right.
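As a sanity check on this triangle example — my own brute force, not anything from the talk — one can compute the edge chromatic number exactly for tiny multigraphs:

```python
from itertools import product

def chromatic_index(edges):
    """Exact edge chromatic number by brute force (tiny multigraphs only).

    edges: list of (u, v) pairs; parallel edges appear as repeated pairs.
    """
    m = len(edges)
    for k in range(1, m + 1):
        for colors in product(range(k), repeat=m):
            # a coloring is proper if edges sharing an endpoint get distinct colors
            if all(colors[i] != colors[j]
                   for i in range(m) for j in range(i + 1, m)
                   if set(edges[i]) & set(edges[j])):
                return k
    return m

triangle = [(0, 1), (1, 2), (0, 2)]
assert chromatic_index(triangle) == 3        # Delta = 2, needs Delta + 1
assert chromatic_index(triangle * 2) == 6    # K = 2 copies: Delta = 4, needs 3K
```

For the simple triangle this returns 3 = Delta + 1, and for two parallel copies of each edge it returns 6 = 3K while Delta is only 4 — exactly the gap that forces the conjecture to use a stronger lower bound than Delta.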
So for multigraphs, the delta of G plus one bound simply fails. But there is
another lower bound, which you can get by solving a linear
program — I'll not go into detail about that bound. Let me just call this bound
Z of G, and the claim is that chi prime of G is at most Z
of G plus one. And it's at least Z of G, because Z of G is a lower bound, for all
multigraphs G.
>>:
[inaudible].
>> Mohit Singh: No, it can actually be more. For the triangle, for example,
it is three — Z of the triangle is actually three. So yes, it essentially
comes from the odd-set bound. That's why there is a connection to the matching
problem as well: it essentially comes from the same kind of bound as for the
matching problem, the odd-set inequalities.
>>: [Inaudible] G plus one.

>> Mohit Singh: Yes. It cannot be more than delta G plus one for simple graphs, yes.

>>: [Inaudible] G, then you know it's delta G plus 1?
>> Mohit Singh: Yes, yes. But the problem is still NP-hard. This problem is
NP-hard — it's not the case that whenever chi prime of G is delta G plus 1, Z of G
will be delta G plus 1 as well.
But here is the theorem that is known. This is work by [unintelligible] — his proof
was non-algorithmic, but this result was later made algorithmic — and he showed
that chi prime of G is at most one plus little-o-of-one times
Z of G, okay? So when Z of G is large, essentially you can get awfully close to
it.
What he essentially showed is something like Z of G plus square
root of Z of G. So it's not quite an additive-one violation — he gets a violation
which is a lower-order term, lower-order than this. But
this gives a very strong indication that this conjecture is actually
true. Pardon?
>>:
Why this is a very strong indication that it's true?
>> Mohit Singh: So the thing is, essentially, it's some additive bound, and —
okay, yes, okay. You can take it as you like, but I would rather take
it as a positive. It's not a multiplicative bound: you don't expect to
lose a factor of 1.1 or anything like this. It's near the linear bound, right —
it's an additive bound. The problem is that this additive bound is not the tightest
bound you can think of, but this is just one result.
For example, before this, there was another series of results which tried to
show that chi prime of G --

>>: [inaudible].
[inaudible].
>> Mohit Singh: Hm? So there were earlier results which tried to show,
like, chi prime of G is at most 1.1 times Z of G, and they slowly improved it
even further — 1.05, then 1.04.
>>:
[inaudible].
>> Mohit Singh: There is a series of people who tried to show this, and each of
these results actually uses very different techniques. None of them gets as
close, but they essentially give some indication — it's very hard to believe
that the result would not be true.
>>: So [inaudible] a problem that you want to apply this to, this [unintelligible]
recording where you want low degree, because you want less computation and complexing
[unintelligible] but you [unintelligible] and you can actually use it to try to find
a [unintelligible].
>> Mohit Singh: Um-hmm.

>>: And you want something -- [inaudible] that you're able to broadcast your
message, as that's a complex [unintelligible] as possible.
>> Mohit Singh: That is something I should also mention. Some of this work has actually been implemented -- interestingly, not by me or [unintelligible], but there were some people at one of the companies who were interested in applying these algorithms, and they did implement at least two of them. They implemented the spanning tree algorithm and the [unintelligible] subgraph algorithm, and they're still exploring whether these can be applied to their work in real life or not.
So they were trying to solve a network design problem which has some condition constraints -- the degree bounds are essentially used to model these conditions in real life -- and it was [unintelligible] to see that at least they really coded up all these algorithms and ended up solving them, yeah.
>>:
[inaudible].
>> Mohit Singh: So the thing is, yeah, this is actually quite open. Essentially, ever since Kamal Jain's algorithm came ten years back, a lot of people did try to make these iterative algorithms combinatorial as well. But I believe the simplest algorithm to make combinatorial would be the spanning tree algorithm -- there is far more complexity in the Steiner tree or the Steiner network case -- and it might be the first place to make this approach combinatorial.
I believe there is a combinatorial algorithm which will at least achieve this bound -- the same guarantee as Goemans -- though I cannot prove the guarantee in general. You can prove that it achieves this guarantee under certain conditions, but the conditions are not needed, and there's more, like --
>>:
[inaudible].
>> Mohit Singh:
We believe that the conditions are not needed.
>>: [inaudible] combinatorial -- is it, like, the concept of [unintelligible] strongly polynomial, and you can make these algorithms strongly polynomial? Combinatorial means you don't use a linear program, okay? You don't [unintelligible] using addition and multiplication. It's a statement of taste or flavor.
>>:
[inaudible].
>>: That's what I'm saying, strongly polynomial is a [unintelligible] statement, which is objective. The running time depends on the number of objects in your problem. It's not dependent upon the size of the --
>> Mohit Singh: So if you solve this linear program, it would run in polynomial time, but polynomial in the [unintelligible] of the costs. If the costs are huge, that would affect your running time.
>>:
Different people have different tastes.
>>: That's right. But in terms of the power of the polynomial involved?
>> Mohit Singh: The linear program takes a power of four or something. If you have a combinatorial algorithm, they do it much faster. But the thing is, for some of them -- for example matching -- you can exploit much more structure. They have much more structure than a typical linear program, right. So that structure is exploited for flow problems or matching problems, and you can actually solve them much faster.
>>: [inaudible] you can make it a little more precise in different cases, and it says more about the structure of the problem.
>>:
[inaudible] of the problem.
>> Mohit Singh:
Yes, actually, that is, I think --
>>:
I'm not trying to --
>>:
Yeah.
>>:
I mean, this is fantastic, but --
>>:
[inaudible].
>>:
[inaudible] case for low exponents in the power.
>>: That makes sense, yes. Yeah, that makes sense. And for the flow problem also, I think the power is huge. You indicated the [unintelligible] programs, they're not that slow or as --
>>:
Just one power.
>>:
Just one power.
>> Mohit Singh: So the other reason, actually, why it is harder to make this combinatorial -- for example, that's why I believe Edmonds' proof for matching is quite beautiful, right. Because the main structure it is using is essentially linear algebra. Like, if you take a matrix, the number of rows [unintelligible] is equal to [unintelligible]. There's a basic linear-algebraic idea behind all of these proofs.
And that's what you're using when you solve a linear program: the basis you get is a square submatrix of the [unintelligible] matrix. And trying to keep track of this matrix, of this linear algebra, in a combinatorial algorithm gets pretty tough. Tougher and tougher.
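The linear-algebra fact mentioned here -- that a basic (vertex) solution of a linear program is pinned down by a square, invertible subsystem of tight constraints -- can be illustrated on a toy two-variable LP. This is my own minimal sketch, not an example from the talk:

```python
# Minimal sketch (my own toy example, not from the talk): a vertex of an
# LP is the unique solution of a square, invertible system of tight
# constraints -- the linear-algebra fact behind these proofs.
import itertools
import numpy as np

# Feasible region: x + 2y <= 4, 3x + y <= 6, x >= 0, y >= 0,
# written as rows of  M v <= d.  Objective: minimize -x - y.
M = np.array([[1.0, 2.0], [3.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
d = np.array([4.0, 6.0, 0.0, 0.0])
c = np.array([-1.0, -1.0])

best, best_val = None, np.inf
for i, j in itertools.combinations(range(4), 2):
    S, rhs = M[[i, j]], d[[i, j]]
    if abs(np.linalg.det(S)) < 1e-12:
        continue                    # rows dependent: no vertex here
    v = np.linalg.solve(S, rhs)     # candidate vertex: two tight rows
    if np.all(M @ v <= d + 1e-9) and c @ v < best_val:
        best, best_val = v, c @ v   # keep the best feasible vertex

print(np.round(best, 6), round(best_val, 6))   # -> [1.6 1.2] -2.8
```

The optimum here is determined by exactly two independent tight rows (a 2x2 invertible system, matching the two variables); iterative rounding methods repeatedly exploit precisely this counting of tight constraints at a basic solution.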
For example, Edmonds' algorithm uses this very nicely. He has these blossoms and so on, which contract and expand, and he could show that you can always work with just these blossoms, contracting and expanding as your algorithm goes, and still get the optimum solution.
Because a much simpler thing would be to say: there's [unintelligible] this linear program, and somehow you prove that the linear program always gives you the optimum solution. That's one algorithm, right, and it would be a much simpler algorithm to state than Edmonds' algorithm. His algorithm is very, very complicated. I'm not even sure it's taught in any of the algorithms classes -- it's a very complicated algorithm, and it's also one of the first algorithms, which also raises questions of [unintelligible] and so on. But even then, it's not taught. It's a beautiful algorithm, but it's very complicated. It's not so easy to keep track of this linear algebra as your algorithm proceeds.
Okay.
Any other questions?
[applause]