Document 17864518

>> Mohit Singh: It's good to have Ola Svensson with us. It's been a
while since he came; we've been trying to get him for a long time.
He's going to talk about a very new result -- I believe this is the
first time he's given a talk on this, on K-median. So...
>> Ola Svensson: Thank you very much. It's nice to be here. So this
is very recent work, joint work with Shi Li, who is a final-year
graduate student at Princeton. I'm Ola Svensson from Switzerland. I
want to spend the first slide convincing you why K-median is a very
important problem for Microsoft Research in Seattle. And that is:
where should you locate your research centers to minimize the
traveling time? My traveling time was, of course, very long to get
here. We'll see some optimization on this.
So to see this, we have a map of America and western Europe, and you
can plot the concentration of researchers. We have Seattle, Silicon
Valley, Chicago, the East Coast, and maybe Sweden, England, Lausanne,
of course, and Germany, of course, and so on.
It looks like you did your homework. If we plot Microsoft Research, we
have Seattle, Silicon Valley, New England, and now a new one in New
York. But then there's Cambridge; obviously we should have moved this
one down. But more seriously, when you want to place your Microsoft
Research locations, you can think of different objective functions. In
theory, we often consider minimizing the average distance or
minimizing the maximum distance.
These problems are known as K-median and K-center. When I started to
look at these problems, it was not clear to me why they are called
that, so let me spend one slide motivating the names. So let's look at
the median problem, and let me formulate it differently than we
usually do. We have clients on a line, and I want to place one point
on this line so as to minimize the sum of distances. First I put a
point here on the line. If I push it to the left, then I will decrease
the distance for these three clients, but I will increase it for these
six clients. So it looks like a bad move to move it to the left.
On the other hand, if I move it to the right, then I will decrease
this, and so on. So this is the green direction. Then I move the point
along the green direction until no move improves, and this coincides
with the median, as you know it. That's why we call it the median.
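The push-left/push-right argument above can be checked mechanically; a
minimal sketch (my code, with hypothetical client positions, not from
the talk), confirming that the best single point coincides with the
median:

```python
def total_distance(point, clients):
    """Sum of distances from `point` to every client on the line."""
    return sum(abs(point - c) for c in clients)

def best_location(clients):
    """Brute-force the best location among the client positions.

    Moving the point toward the side with more clients always helps,
    so an optimal location can be taken at a client position.
    """
    return min(clients, key=lambda p: total_distance(p, clients))

clients = [1, 2, 3, 5, 8, 9, 13, 20, 21]  # hypothetical positions
opt = best_location(clients)
median = sorted(clients)[len(clients) // 2]
assert total_distance(opt, clients) == total_distance(median, clients)
```

With an odd number of distinct clients the minimizer is exactly the
middle client, matching the talk's green-direction argument.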
Then for the center, we simply want to find a center: we look at the
two extreme guys and we place our guy in the middle.
I want to say this is a little bit of an insensitive objective
function, because once I have my point here, no matter where I place
the clients in between, they will not affect the objective. So in
K-median we now open K points instead of one, and it's not a real line
but a metric space: the distances are symmetric and satisfy the
triangle inequality. So for K-median I think it makes sense to open
these; for K-center you could open the same ones, but it also makes
sense to open this outlier here, because you only care about the
extreme cost.
So any questions on the problems?
>>: The last part?
>> Ola Svensson: I mean, intuitively, you know this guy here has bad
luck, so we should open one facility in each cluster. For K-center
that's not really captured, because you only measure the maximum
distance; maybe this is also an optimum solution, as well as one here.
I'm considering K-median, so I want to motivate the objective of
K-median -- that's the important thing.
Another point in favor of K-median: if we look at the state of the art
for the K-center problem, we have a very good understanding. There's a
greedy, very simple 2-approximation algorithm, and one can also show
this is the best we can do. So we have a complete understanding of the
worst-case behavior. Some people don't like worst-case behavior, so I
have this image from Geneva. In the winter it gets cold, and it always
gets very humid next to the lake. If you go for dinner and park your
car next to the lake in the cold air, when you get back you will find
it covered with ice, and you cannot open it until it gets warmer.
That's to say that sometimes the worst case happens, so it makes sense
to keep it in mind.
So we know everything about K-center. What about K-median? Can we do
as well? Can we understand K-median as well as we understand K-center?
So what do you think? Do you think we can do better? You have no idea?
No guess?
>>: Should.
>> Ola Svensson: Unfortunately, we don't know. But at least the
hardness result says we cannot do better than 1 plus 2 over e. This is
by Jain et al. in 2002. And we don't know how to do better than three.
That's quite a big gap. My guess would be that 1 plus 2 over e is
tight -- but this is a guess, underlined. So I don't know. It feels
like we should be able to do better, but current techniques don't
allow us.
So what we're going to talk about today is the best approach so far
for this problem.
So, what are the known techniques? Then I will state our results.
There are really two independent parts, and I think the first part is
the most interesting. It concerns the fact that K is a hard
constraint: we can open at most K facilities. What we show is that we
can turn this hard constraint into a soft one. We are allowed to
violate it slightly, but how much we violate the constraint will
affect our running time. As long as we violate it by a constant, it
remains a polynomial algorithm. Once we have this result, we design a
pseudo-approximation, and combining the two gives a better
approximation.
Just before I start with the known techniques, let me make some
comments. It will be convenient to distinguish the points in the
metric space where we can open a facility from the clients. I will
always draw points where I can open a facility as squares and clients
as circles. The goal is to open K facilities. So if we could open
three facilities, maybe we would open these. And once we know which
facilities to open, it's trivial to assign the clients: each client
goes to its closest open facility.
So what is the first algorithm? As with the median on the real line,
we start with any solution and greedily improve it. Let's try that for
K-median. We start by opening any K facilities, and while there exists
a swap that decreases the value, we do it. A swap means that if I
start with these two guys, I might close one of them and open another
one. I do this as long as there is a swap that decreases the
objective. So maybe I swap these.
And this actually gives you a 5-approximation algorithm; this was
proved by Arya et al. in 2001. And once you have single swaps, the
next thing you would do is to look at p-swaps: instead of closing just
one facility, you close p and open p, and so on. This gives you a 3
plus 2 over p approximation, so if you let p be sufficiently large,
you get a 3 plus epsilon approximation. This is the best that we have.
So that's what we had.
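The single-swap local search just described is easy to write down; a
minimal sketch (my code, not the speaker's, on a hypothetical
real-line instance):

```python
import itertools
import random

def kmedian_cost(centers, clients, dist):
    """Total connection cost: each client goes to its closest open center."""
    return sum(min(dist(c, f) for f in centers) for c in clients)

def local_search(facilities, clients, dist, k):
    """Single-swap local search for K-median, in the spirit of the
    Arya et al. algorithm described in the talk: start with any k
    facilities and, while some swap (close one center, open another)
    decreases the cost, perform it.  As the talk notes, a real
    implementation only accepts swaps improving by a small slack so
    the iteration count is polynomial; this sketch accepts any
    improvement."""
    centers = set(random.sample(sorted(facilities), k))
    improved = True
    while improved:
        improved = False
        for out_f, in_f in itertools.product(sorted(centers),
                                             sorted(set(facilities) - centers)):
            candidate = (centers - {out_f}) | {in_f}
            if kmedian_cost(candidate, clients, dist) < kmedian_cost(centers, clients, dist):
                centers, improved = candidate, True
                break
    return centers

# Hypothetical instance on the real line.
line_dist = lambda a, b: abs(a - b)
centers = local_search([0, 10, 20], [0, 1, 9, 11, 19, 20], line_dist, k=2)
```

On this toy instance every starting set converges to the optimum
{0, 20}; in general, local optima are only guaranteed to be within a
factor 5 (or 3 + 2/p with p-swaps) of the optimum.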
>>: Are there examples where it doesn't do better than this?
>> Ola Svensson: This is a tight analysis.
>>: If you do the smartest swaps -- [inaudible].
>>: How long does it take?
>> Ola Svensson: That depends on -- it's not clear, right? That's
where this epsilon comes into play. So this is not exactly 3 plus 2
over p; you need some slack in your inequalities. You say that in each
step I want to improve by at least this much, and if there's no swap
that improves this much, I stop.
>>: The question is, are there bad examples where it could take many,
many steps?
>> Ola Svensson: Yes. I don't know if there is a bad example matching
the upper bound. That's not the main focus of the talk. Anyway, this
was in 2001, and since then we have tried a lot to do better than
three. One candidate for doing better than three is linear
programming.
>>: Clearly polynomial, N to the K?
>> Ola Svensson: That's why I said you need slack in your improvement.
>>: Typical 9 polynomial.
>>: Optimizing.
>> Ola Svensson: Okay. So I know you don't like -- probably most
people don't like linear programs, but you will not need to understand
the linear program in this talk to understand the theorem. To
understand the approach, what you need to know is that there is a
variable y_i, which you can think of as taking value one if we should
open facility i, and a variable x_ij that takes value one if client j
is connected to facility i. The objective function is the sum of the
connection costs; this constraint says we can open at most K
facilities; here, each client must be connected; and here we say that
if a client is connected to facility i, then facility i has to be
open. So don't worry if I went through this quickly. Now, we like
randomization here, so what we do is to interpret y_i as the
probability of opening facility i. Once you have this intuition, the
first algorithm you would try is to just open each facility
independently at random with probability y_i. This is not a good idea.
Why? First, we have to satisfy the constraint that we can only open K
facilities, and then there's a real problem: in the neighborhood of
some client we might not open any facility. There might not be any
open facility at all close to the client.
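For reference, the standard K-median LP relaxation being described is
usually written as follows (my notation, not the speaker's slide: F is
the set of facility locations, C the clients, d the metric):

```latex
\begin{align*}
\text{minimize}\quad & \sum_{i \in F}\sum_{j \in C} d(i,j)\, x_{ij} \\
\text{subject to}\quad
  & \sum_{i \in F} x_{ij} = 1 \quad \forall j \in C
    && \text{(each client is connected)} \\
  & x_{ij} \le y_i \quad \forall i \in F,\ j \in C
    && \text{(only to open facilities)} \\
  & \sum_{i \in F} y_i \le k
    && \text{(at most $k$ facilities)} \\
  & x, y \ge 0
\end{align*}
```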
So maybe that's the most serious problem. So what people have done is
they very carefully grow balls. They take any client and they close a
ball there, and they grow this ball until, inside this ball, we should
open at least one facility.
Then we grow another ball, and so on, and they do this very carefully.
Now they do randomized rounding to ensure that they open at most K
facilities, at least one facility is open in each ball, and some more
properties are satisfied. So maybe we open three facilities in this.
And this was the first analysis of this kind: a careful
6.15-approximation from '99. Since then people have worked on this
quite a lot, and the best one, from this year, is a
3.25-approximation -- an extremely careful randomized algorithm. One
might think it can be pushed below three, but the analysis is very
long and so on, because it's not clear how to grow these balls. By the
way, here we see a clear difference from the related facility location
problem: for facility location these techniques give nearly tight
results, whereas here we have the hard constraint that we can open at
most K facilities, and this makes things problematic.
>>: The constraint there?
>> Ola Svensson: That's the next slide. That's good. So facility
location is the same as K-median, but there is no hard constraint that
K facilities are open; instead we have opening costs, so each facility
can be opened by paying some cost.
But there's a very nice relationship with K-median, coming from
[inaudible] economics, that I now want to talk about and that we will
use. We can set a price on the facilities: if you set the price to be
very cheap, then we will open many of them; if you set the price to be
very expensive, few facilities will open.
So what do you do? You try to find a price such that the best known
algorithm for facility location gives a solution that opens exactly K
facilities; then this will be a solution to K-median. Unfortunately,
this function is not continuous, so you cannot be sure you open
exactly K facilities. What you do instead is find a price such that,
if you change it slightly, you get one solution that opens slightly
fewer than K facilities and one that opens slightly more than K.
And that's the intuition behind the bipoint solutions introduced in
two very nice papers. What they prove is -- so let me explain here. By
losing a factor of two, we can find two solutions. Think of K equal
to 3. In one solution I open fewer than K facilities, the blue
solution, and in the other solution I open more than K facilities --
say four facilities here, in the green solution.
Each of them defines a pseudo-solution where I connect each client to
the closest facility in the green solution and to the closest in the
blue. So there are two solutions, and what they prove is that, by
losing a factor of 2, we can think of the optimum as a convex
combination of these two solutions.
So I will just take the combination and get a very structured solution.
>>: Both are examples where the cost of opening K facilities is a
convex function of K. Did people ever look at more general cost
functions?
>> Ola Svensson: I don't know. That would be my result.
>>: K [inaudible].
>> Ola Svensson: That's true. I don't know. But you understand: here I
have two solutions, and the LP solution is a convex combination of
them. So I just draw it like this: I can order them like that, with
all the clients in the middle, and they will be connected. So one blue
solution and one green solution, and the LP is a convex combination of
the two.
And then what they show is quite easy: once you have this structured
LP solution, you can round it. Losing another factor of 2, you can
convert the LP solution to a solution that opens at most K facilities.
So if you lose a factor of 2 here and a factor of 2 here, you get a
4-approximation. First they lost a factor of 3, then they improved it
to a factor of 2.
And this factor of 2 is tight. Consider a star: here we have K plus 1
facility-client pairs. These clients have distance 0 to their
facilities, and they have distance 1 to the topmost facility. So in
the green solution we have K plus 1 facilities open, and the cost is
0. In the blue solution we have one guy open, and its cost is K plus
1, because everybody has distance 1. So there is an extreme
difference: if we open one more facility than allowed, we get a
solution of cost 0, but if we open K facilities, you can see that the
optimum is 2. And the LP will tell you to open the green solution with
probability K minus 1 over K. This is the worst case for the LP: an
integrality gap of 2. Our algorithm was obtained by staring at this
graph long enough -- it's a very simple graph, but it gives you a lot
of insight.
So you understand this instance: we have K plus 1 facilities here, and
if we take the green solution, we have cost 0. If you take the blue
one, you have K plus 1. Therefore the LP has value 1 plus 1 over K,
because it takes the blue one with probability 1 over K and the green
one with probability K minus 1 over K.
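The arithmetic behind this gap instance is easy to check; a small
sketch (my code, not from the talk):

```python
from fractions import Fraction

def star_instance(k):
    """Integrality-gap star from the talk: k+1 facility/client pairs,
    each client at distance 0 from its own leaf facility and at
    distance 1 from a single hub facility.

    Returns (lp_value, integral_optimum) when at most k facilities
    may be opened.
    """
    green_cost = 0        # open all k+1 leaves: every client pays 0
    blue_cost = k + 1     # open only the hub: all k+1 clients pay 1
    # The LP mixes the two: blue with weight 1/k, green with (k-1)/k,
    # which opens 1/k * 1 + (k-1)/k * (k+1) = k facilities on average.
    lp_value = Fraction(1, k) * blue_cost + Fraction(k - 1, k) * green_cost
    # Best with k open facilities: hub plus k-1 leaves, so the two
    # clients whose leaf stayed closed each pay distance 1.
    integral_optimum = 2
    return lp_value, integral_optimum
```

For k = 3 this gives LP value 4/3 against an integral optimum of 2,
and the ratio 2 / (1 + 1/k) tends to 2 as k grows.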
Okay. So to summarize: the best algorithm is local-search based, 3
plus epsilon, and this comes from the hard constraint that at most K
facilities can be opened. For facility location we have almost tight
results.
So what are our results? As we said, we're going to stare at this
graph for a while. What we prove is that we can get a 1 plus square
root of 3 plus epsilon approximation in time N to the O of 1 over
epsilon. That's an improved factor of about 2.73. But I think the
coolest result is this: if you have an r pseudo-approximation
algorithm that opens K plus c facilities, then it can be turned into
an r plus epsilon approximation algorithm opening K facilities, paying
a factor N to the O of c over epsilon in the running time. This shows,
for example, that the integrality gap of the LP disappears, because I
can open all the green facilities and apply this theorem as a black
box to get a good approximation algorithm.
>>: Just [inaudible].
>> Ola Svensson: Yeah; it's not a multiplicative factor.
>>: [inaudible].
>> Ola Svensson: So [inaudible] -- it would mean a multiplicative
violation in the number of facilities opened, and that would kill our
result here, because then the running time would be N to the K. And
then we just design a pseudo-approximation with factor 1 plus square
root of 3 that opens K plus order 1 over epsilon facilities. So I
think this is good, but one can probably improve it.
Okay. So let's prove this theorem. It is very simple. Look here: we
have the clients, the students, and then we have the universities. If
the clients are not too concentrated around a university, then if we
just remove one university, we will make some guys angry, but they are
only a few compared to the total number of students. The K-median
objective allows us to make some clients very angry; it will not
affect the overall objective very much. So that's the idea. There are
two extremes: a very sparse instance, where no clients are too
concentrated, and a dense instance, as you could see in the
integrality gap instance, where we have clients with connection cost 0
to facilities.
Clearly, if we can open four facilities here, the cost is much lower
than if we can only open three. In the sparse case, removing a
facility increases the cost only a little, maybe by a 1 over K
fraction, since there is not too much concentration around it; some
guys will be unlucky and be rerouted. If we remove one here, in the
dense case, then we increase the cost by a large factor. So the
question seems an impossible thing to prove: if I take a solution that
opens K plus 1 centers, how can I turn it into a solution that opens K
centers without increasing the cost much?
And you want to remove this kind of instance; you want to pre-process
these instances away by removing a constant number of dense parts. The
proof is based on the fact that if we can remove a facility here, then
the cost increases only a little, and we will now argue that we can
concentrate on sparse instances without loss of generality. So let me
take some time. What do I mean by a sparse instance? I say an instance
is A-sparse if the following holds for every facility. Look at the
facility here and at all the clients around it, where the radius of
the ball is one third of the distance from i to the closest facility
open in the optimum solution. Say we're only allowed to open three
facilities in the optimum solution, maybe the green facilities. Now I
draw a ball around this facility i with radius d_i^opt over three. So
here I have d_i^opt, and the radius here is one third of it.
Okay. And all the clients inside this ball should have connection cost
in the optimum that is at most A for the instance to be A-sparse. Does
the definition make sense? I look at the small ball around the
facility, I look at all the clients, and I ask: what is their
connection cost in the optimum? I don't want it to be very big
compared to opt, because that would mean that if I remove this
facility, I increase the cost a lot compared to opt.
>>: The average connection cost, compared to d_i opt?
>> Ola Svensson: No, d_i^opt is the distance from facility i to the
closest facility in the optimum solution. But the connection cost of a
client in here, in the optimum solution, would be at least two-thirds
of this.
>>: This instance, it's an instance of --
>> Ola Svensson: Yes, for simplicity, let's think of K plus one.
>>: By every facility, you don't mean every facility that's included
in every solution, you mean every potential one?
>> Ola Svensson: Every potential facility, yes, any potential facility
location. Right. Good. So, you see, this guy is not sparse, because
the clients in this ball actually carry the whole cost of the optimum
solution. If I have these clients here, these guys have cost 0 in the
optimum solution, but these ones carry all the cost of opt. So these
guys are dense -- a dense instance. Of course, I don't know opt.
>>: Going back to those three clusters -- if you have those three
clusters, you would still call it sparse?
>> Ola Svensson: Yes. Yeah, it's a little bit confusing in that way.
If the dense regions are around facilities that are opened in the
optimum solution, then I still call the instance sparse. I call it
sparse because the problem is when the LP tells me that these guys
open with high probability: then the LP fools these clients, and
that's a problem.
>>: So you're fixing one particular optimum solution.
>> Ola Svensson: Yes, I assume the optimum solution to be unique. You
can also do it with LPs, via a hierarchy. So, not all instances are
sparse, but how can we assume sparsity without loss of generality?
What would you do to obtain a sparse instance? Suppose we knew the
optimum. The first thing would be: let's remove all the locations that
are not part of the optimum. Then we would have a sparse instance. But
that's a little bit too good to be true, so let's try something
weaker.
So while there exists a dense facility, what I do is remove this
facility and any facility that is closer to it than the closest
facility in opt. So I remove this facility location, and I remove
anyone that is closer to this one than the optimum facility, the
facility in opt. So I remove that one and that one.
>>: Are you going to do --
>> Ola Svensson: So I didn't draw the clients here.
>>: Is this practical, or an algorithm?
>> Ola Svensson: This is the algorithm -- we will see that we can turn
it into one. It is of course not polynomial time, because we don't
know opt. But we'll see it doesn't matter; you will not implement this
algorithm. No, no, you will see that it doesn't matter that we don't
know opt. That's why I said: if we knew opt, it would be trivial, we
could remove all the locations not in opt. But that's too good, so now
I try something weaker that I can turn into a polynomial-time
procedure. And the weaker thing is that I am looking for an
opt-over-T-sparse instance. If I have an opt-over-T-dense facility i,
then I identify this facility i here, and I remove it because I know
that I'm not using it in opt. And I remove any facility that's closer
to this one than the optimal one, because I know they're not used in
opt either.
So I remove that one and that one, and I iteratively do this. I find
these balls, and the key thing is that these balls are disjoint, by
the way we constructed this iterative procedure. How many balls can we
have? Opt-over-T-dense means that the connection cost of the clients
in such a ball is at least opt over T. Because the balls are disjoint,
we can have at most T balls, because otherwise opt would not be opt.
But now we can guess these balls even if we don't know opt, because
there are only at most T of them; we can enumerate all possibilities
in N to the T time. So that's why it was a thought experiment, but you
can easily turn it into an algorithm by paying a large polynomial
factor.
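The thought experiment can be written down directly. This sketch (my
code, with hypothetical toy inputs) assumes we are handed the
quantities we would only know if we knew the optimum, which is exactly
why the talk replaces it with enumeration:

```python
def sparsify(facilities, ball_cost, close_to, t, opt_value):
    """Thought-experiment sparsification from the talk.

    ball_cost[i]: optimum connection cost of the clients in the small
        ball around facility i (radius d_i^opt / 3) -- known only if
        we know the optimum.
    close_to[i]: facilities closer to i than i's nearest optimum
        facility; none of them can be used by the optimum.

    While some facility is opt/t-dense, remove it together with
    close_to[i].  The dense balls are disjoint, so at most t removals
    happen, which is what makes guessing them in n^t time possible.
    """
    removed = set()
    for i in sorted(facilities):
        if i in removed:
            continue
        if ball_cost[i] > opt_value / t:   # facility i is opt/t-dense
            removed |= {i} | set(close_to[i])
    return set(facilities) - removed

# Hypothetical toy data: facility 1 is dense (6 > 10/2) and drags
# facility 2, which is closer to 1 than 1's optimum facility, with it.
remaining = sparsify({1, 2, 3}, {1: 6, 2: 1, 3: 2},
                     {1: [2], 2: [], 3: []}, t=2, opt_value=10)
```

Since at most t dense balls exist, a polynomial algorithm can simply
enumerate which balls to remove instead of consulting the optimum.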
So that's why we can restrict ourselves to sparse instances. And now
maybe you want to believe me, as we saw before, that if we have a
sparse instance, then it's quite easy to turn a solution into one that
only uses K facilities. There are some complications, but now let's
look at this guy and try to remove him, and let's analyze how much the
distance of the clients increases. Sorry -- if I have a solution, it
naturally partitions my space into the Voronoi diagram: all the
clients here are connected here, and so on. Now let's look at what
happens if I remove facility i. Well, if this client here is far away
from the facility -- say at least a 1 over T fraction of d_i, the
distance to the other one -- then its distance increases by at most a
constant factor, just by the triangle inequality. If it's too close,
then since this is a sparse instance, these clients cannot contribute
too much when we remove the facility. And that's it, provided the
optimum facilities are far away from the blue guys. There's a
technicality: if all these guys are very close, then we're in trouble.
But if they're close, then we have basically found the optimum
solution already, and we can guess the symmetric difference. But
that's a technicality. Okay.
So for a fixed T, we obtain in time N to the T a sparse instance; then
we run the pseudo-approximation that we'll define later and massage
its solution into one that opens K facilities. So the T comes up in
the running time; that's why I have this C over epsilon in the
exponent.
>>: [inaudible].
>> Ola Svensson: Pardon me?
>>: Square root of three where does that come?
>> Ola Svensson: That comes now. So we call it the star algorithm.
This is the star we have in Sweden for Christmas: it's very dark, so
we have a lot of lights in the rooms. Okay, so it's not called the
star algorithm because it's the best algorithm; it's called the star
algorithm because we use stars. So how do we round one star, now that
we are allowed to open a little bit more facilities? The LP says we
should open the center with probability 1 over K and the green
solution with probability K minus 1 over K. So the algorithm simply
opens the center with probability a, or otherwise opens all the
leaves, with probability b. So either we open the center or all the
leaves. In the worst case we open K plus 1 facilities, and in
expectation it's the same as the LP. Of course, let's think about the
case of not too many leaves; if you have too many leaves, then you
cannot do it.
But it's enough to think of this case. So how do you round several
stars?
>>: Are you opening -- in the worst case you open all the K plus 1?
>> Ola Svensson: If you have way too many leaves, what you would do is
always open the center and then open a b fraction of the leaves.
>>: Why do you do the same thing as the previous papers, where you
have these two solutions -- [inaudible]?
>> Ola Svensson: That's why I said I think one can improve it. We
start by losing a factor of two to get this kind of structure; that
was the integrality gap part. So, how would you round several stars?
Pick a fraction of the stars randomly. Maybe I pick this fraction. For
those, I open the center, and for the other ones I open all the
leaves.
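A sketch of this rounding step (my code, not the speaker's, under the
simplifying assumptions stated in the talk; the star contents are
hypothetical):

```python
import random

def round_stars(stars, a):
    """Round several stars as described in the talk, assuming stars of
    the same size and not too many leaves: pick an `a` fraction of the
    stars uniformly at random and open their centers; for every other
    star, open all of its leaves.

    `stars` maps each star's center to the list of its leaves.
    """
    centers = list(stars)
    random.shuffle(centers)            # a uniformly random a-fraction
    cutoff = round(a * len(centers))
    opened = set()
    for idx, center in enumerate(centers):
        if idx < cutoff:
            opened.add(center)         # open the center of this star
        else:
            opened.update(stars[center])  # open all leaves instead
    return opened

# Hypothetical stars: two centers with two leaves each, a = 1/2.
opened = round_stars({'c1': ['l1', 'l2'], 'c2': ['l3', 'l4']}, a=0.5)
```

With two equal stars and a = 1/2, exactly one center and the other
star's two leaves are opened, mirroring the expected-opening
calculation in the talk.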
Now, maybe I open K plus a constant facilities because of some
rounding issues. But this assumes stars of the same size and not too
many leaves; it gets more complicated if you have a different number
of leaves for every star, and so on. But let's not worry about that.
>>: Constant many stars.
>> Ola Svensson: No, we might have many stars.
>>: So then the fraction of the stars we pick randomly has to be -- we
have to be careful.
>> Ola Svensson: Yeah, we pick an a fraction of the stars uniformly at
random. That's what I want: the expected connection cost works out
because the expected opening here is a and the expected opening here
is b.
>>: So that's not a factor when you say pick?
>> Ola Svensson: No. I'm sorry. It's difficult to write A. Okay.
So now the square root of 3 comes into play. How does the whole
algorithm look? First we obtain the bipoint solution, losing a factor
of 2 -- something we can get from Jain and Vazirani. Then we obtain
stars from this instance. The way to obtain stars is to look at each
facility in green and take the cheapest edge to the blue: each edge
here has a green distance and a blue distance, and if I sum them up I
get the length of the whole edge.
For each green facility, I connect it to the blue one via the cheapest
edge. This gives me some stars, plus some clients that go between
stars. I cannot remove those clients, but I will define my stars like
that.
>>: [inaudible].
>> Ola Svensson: So for each green guy, I pick the cheapest edge,
where the cost of an edge is the green distance plus the blue distance.
>>: Edge to the blue?
>> Ola Svensson: To the blue guy. I see this as an edge with a green
part and a blue part.
>>: How do you end up with a green guy with two blue -- [inaudible]?
>> Ola Svensson: Yeah, because here I have many clients, right? So
this guy -- this is part of the stars, these are the ones I picked,
but I have to keep the other clients. So let me redo it. For each
green guy, I pick a representative, but I have to keep all the
clients. The representative I pick is the cheapest one, where the cost
is the length of the green part plus the length of the blue part. I
have a unique representative for each green guy, and these will form
stars around the blue ones, maybe of different sizes and so on.
>>: Only some of the clients.
>> Ola Svensson: Some of the clients, exactly. This one is one of the
other ones that I didn't pick as a representative. But what's
important now is: if we look at the distance from this client to that
one, what is it? It's this green part plus this green part plus this
blue part. And because we picked the representative here to be the
cheapest possible, the distance here is at most the distance here.
So the distance here is at most 2 d plus 2 d_1. This says that every
client will be pretty close to the center of some star. And why is
that good? Because remember, the algorithm either opens the center or
all the leaves. So either the leaf is open and we are very happy, or
the center is open, or none of them is open and then we can connect
the client here, because we know this guy is open if that one is not.
And now you write down the expected connection cost and you get 1 plus
square root of 3, all over 2. So that's the whole thing. The plus
epsilon and so on comes in because you have to deal with stars of
different sizes; if you just have the same star sizes, you get it
directly.
Okay. So I think the most interesting part is that we managed to turn
the hard constraint into a soft one. This gives a new point of view.
And what is very nice is that this standard LP, which we thought was
not good enough to achieve what we think is the right answer, 1 plus 2
over e, might now be good enough, since we can open a couple more
facilities. If we look at the standard LP and try to round it while
opening a little bit more facilities -- even just one more facility --
I don't know of any example worse than 1 plus 2 over e, so it might be
possible to use the standard LP.
The result was first obtained with an LP hierarchy, and then we could
do it without the hierarchy, which is usually easier. Similar things
may hold for other problems with hard constraints. In particular,
capacitated K-median: if you have capacities on the facilities, then
no constant-factor approximation is known. If one could relax this
constraint, the problem might get easier.
So thank you.
>>: Questions? In between this, the K-median, one is [inaudible] and
facilities -- there are all these different LPs, and L2 would be the
easiest. But --
>> Ola Svensson: There was one more practical. This here, exactly.
That was the question. So I don't know. I think we are looking at this
one because it's easier; it's an LP. And we haven't understood this
yet. But based on the 3-approximation local search for K-median,
there's a 9-approximation for K-means.
>>: But if you need a version [inaudible].
>> Ola Svensson: Yeah, that's K-means. So there's a 9-approximation
based on the same technique. Maybe you could get something better
here; we haven't looked.
>>: [inaudible] easier.
>> Ola Svensson: I don't have that intuition.
>>: [inaudible].
>> Ola Svensson: But you don't have it --
>>: I guess of all things, LP with L1.
>> Ola Svensson: You can solve the convex problem. So maybe we should
actually look at these questions, because people seem to be more
interested in K-means in practice.
>>: Right. Start presenting --
>>: [inaudible] have standard -- [inaudible] with L1 and points on the
line, median.
>> Ola Svensson: You should be able to do any Lp norm.
>>: Lp should be easiest.
>> Ola Svensson: I agree. Not -- maybe easiest.
>>: [inaudible].
>>: That's sort of related to the question I had. I was wondering if
there is a general [inaudible] -- like, is there a form of lower bound
which says, given --
>> Ola Svensson: [inaudible]
>>: No, LP as in the norm Lp.
>> Ola Svensson: I don't know. Do you know? I think we like this
[inaudible] because there is [inaudible] in terms of LP or --
>>: You mentioned you had [inaudible] based on some hierarchy. What
was the hierarchy, and was it for both parts or just for the second?
>> Ola Svensson: No, no, it was only for the first part, to relax the
constraint. So instead of doing the enumeration to get a sparse
instance, you can just condition, in Sherali-Adams. So if this is a
dense facility, you can condition on these events, and you have to
condition on at most that many events. [inaudible] we added one
constraint.
>>: [inaudible].
>> Ola Svensson: Yeah.
>>: So you condition on those centers not being in the opened set?
>> Ola Svensson: I condition them on -- so here we had sparsity
without conditioning. Here I condition on this guy. What I do is: if
this facility is dense, then I condition on this one being the closest
one opened. And this will force all these to be closed, so all their
LP values will be 0 and I can remove them.
>>: All equals --
>> Ola Svensson: No, I reinterpret x_ij: it stands not for the
facility that j is connected to, but for the closest open facility,
which is what you would expect.
>>: [inaudible] use the fact that after it's sparse --
>> Ola Svensson: You don't need to.
>>: So if I just understand -- for that problem, basically you do a 1
plus square root of 3 over 2 approximation?
>> Ola Svensson: This shows that these are not the toughest extreme
points now, since you can open more than K --
>>: [inaudible].
>> Ola Svensson: It's better. So this shows that the hard integrality
gaps are not of this form.
>>: [inaudible].
>> Ola Svensson: We don't know. No. It's probably not -- our analysis
is probably tight, like an edge-by-edge analysis. But you can probably
round this solution differently -- you don't want to lose a factor
of 2.
>> Mohit Singh: Thank you.