>> Lev Nachmansen: Welcome to the first session of this morning. Our first speaker is
Andreas Gemsa. And I was asked to ask him --
>> Andreas Gemsa: No, no. [laughter].
>> Lev Nachmansen: To explain what this talk has to do with dragons -- or maybe it
was turtles. So Andreas, go ahead.
>> Andreas Gemsa: So that was kind of an inside joke, and I really, really hoped that it
wouldn't come up here.
Anyway, so I will be talking about column-based graph layouts. And this is joint work
with Gregor Betz, Christof Doll, Ignaz Rutter, and Dorothea Wagner. And maybe it has
something to do with drawing graphs orthogonally in columns. And maybe you can see
now the dragon thing. And it will be nicely complemented in the fifth talk. So you can
pay attention to that, maybe.
All right. So what are we doing? We were approached by a philosopher who is a
co-author of ours, Gregor Betz. He and his colleagues analyze texts, extract arguments,
and want to somehow visualize that. And they do that with a program called
Argunet.
And the result of such a visualization looks like this. As of now they have no real way
to lay out all those vertices and edges of the graph, so they do that
manually. And you can see that it does not look really nice; it is actually really confusing.
So we wanted to look into that and find somehow a way to do that much, much nicer.
So what are we looking at exactly? Well, we have boxes of uniform width. And in those
boxes there's text, basically the arguments of the text they analyzed.
We consider upward drawings, although you may notice that the arrows are pointing
down. That is only because the application requires it. But I will speak of upward
drawings because this is the established term in graph drawing.
We require that ingoing edges enter at the top of the boxes and outgoing edges leave at
the bottom. And we consider only orthogonal edges, that is, edges that consist only of
horizontal and vertical segments.
And we are looking only at directed acyclic graphs.
All right. So maybe you think, now, okay, let's try Sugiyama. I mean, that is a
well-established thing, and it produces great layouts. So here is a quick reminder of
what the Sugiyama framework is.
It consists of three steps. The first one is well, you take the graph, you take the nodes,
and you assign them to layers.
And in the second step you do something to reduce the crossings, that is, you change
the relative positions of the nodes inside one layer.
And finally you find the exact positions for the nodes and the edges. And it might look
something like this. All right. Great.
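The layer-assignment step can be illustrated with longest-path layering, a standard heuristic (the talk does not say which layering method the later yEd comparison used, so this is only a generic sketch, and all names in it are mine):

```python
from functools import lru_cache

def longest_path_layers(nodes, edges):
    """Longest-path layering for a DAG: a node with no predecessors goes
    on layer 0, every other node one layer below its deepest predecessor."""
    preds = {v: [] for v in nodes}
    for u, v in edges:
        preds[v].append(u)

    @lru_cache(maxsize=None)
    def layer(v):
        return 0 if not preds[v] else 1 + max(layer(u) for u in preds[v])

    return {v: layer(v) for v in nodes}
```

For example, on the DAG with edges a->b, a->c, b->d, c->d, node a lands on layer 0, b and c share layer 1, and d lands on layer 2.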
And there are two drawbacks. The first one is that if you choose an unfortunate
layering in the first step, then no matter what you do in the second, you may end up
with many, many unnecessary crossings.
Another problem is that if you have tall nodes, well, then you cannot really have
something like this, where you have two nodes beside them. That means basically
that if you have a tall node in a layer and the rest are only small nodes, this leads
to non-compact layouts.
So for the first one there is a technique by Chimani, et al., and that is layer-free upward
crossing minimization that works fairly well. And they have also integrated this into the
Sugiyama Framework. So that's a nice thing. And we'll come back to that later
because we used that too.
But for the second problem, well, at least to our knowledge there is not really a
technique that handles that well.
All right. So this is a graph I laid out with yEd, the graph editor, basically using the
Sugiyama framework. And I only added the highlighting for the layers. And you can see
that this is not really compact.
And if we use our approach, that is, a column-based approach, you can see that we put
the nodes into columns, and the layout of the very same graph is much, much more
compact.
So what do we provide? Well, we have some additional quality guarantees. What
we do is, for example, we align the predecessors of a node vertically. So if we look at
this node, you see that all its predecessors are aligned vertically. That's very nice; we
can easily find the predecessors of the node.
We can guarantee that you have at most four bends per edge. And this is exactly what
can happen. So an edge leaves at the bottom, then it goes to some other column, has
a vertical segment and then goes to another column to its final node. That's the only
way to get four bends in our approach.
We have something called local symmetry. And that is, if you take all the edges that
are incoming to one node, then the node will be positioned at the median of all incoming
edges.
And we also support spacing constraints. That means basically that edges that come
from the same node or go to the same node are closer together than edges that do not
share common vertices or nodes.
All right? We optimize some criteria, but these are the standard things. We want to
have few crossings, few bends, and we want to minimize the edge length.
So how do we achieve this? As I told you earlier, we will use the layer-free upward
crossing minimization. And instead of integrating it into the Sugiyama Framework, we
will integrate it into the topology-shape-metric framework.
So what is that? Well, the topology-shape-metric framework was invented by Tamassia.
And the central idea of TSM is that crossing minimization is the most important thing,
and so it's done in the first step, topology. In the first step you basically compute
the embedding.
Then in the second step you assign the bends to edges.
And finally you compute the final layout.
So how does it look in our case? Well, unfortunately minimizing the crossings is
NP-complete. But we will use the layer-free upward crossing minimization.
For the shape step, we are not sure whether minimizing the number of bends is NP-complete,
but we compute a column assignment for the nodes, and we guarantee that there are at
most four bends per edge.
And finally, in the metric step, we can show that minimizing the vertical edge length is
NP-complete. But we use some heuristic, and we also use something for the width
compaction.
All right. So topology, first step. We use essentially exactly the layer-free upward
crossing minimization. And it consists of two steps. You take the graph and
add, in our case, a super source and a super sink, so you have an sT-graph. Then
you delete edges until the remaining graph is upward planar. And for sT-graphs we can
check that in polynomial time. And then we reinsert the edges one by one such that the
remaining graph is still upward planar.
That's a bit tricky. And for details I would refer to the paper by Chimani, et al., because
we use the same technique.
But it essentially looks like this. So we have the graph. We have added this super
source and we have added the super sink. And we may have deleted some edges now.
And we need to reinsert them. And these could be those two edges. What we do then
is we just add dummy nodes at the intersections they would have. So that the result is
the planar representation of the graph. All right.
Yeah. And that is basically what we do for the first step for the topology step.
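The sT-augmentation at the start of the topology step can be sketched like this; it is a straightforward reading of the description above, with invented names for the super source and super sink:

```python
def add_super_source_sink(nodes, edges, s="SUPER_S", t="SUPER_T"):
    """Turn a DAG into an sT-graph: a new super source s gets an edge to
    every original source, and every original sink gets an edge to a new
    super sink t, so the result has a single source and a single sink."""
    has_in = {v for _, v in edges}
    has_out = {u for u, _ in edges}
    new_edges = list(edges)
    new_edges += [(s, v) for v in nodes if v not in has_in]   # original sources
    new_edges += [(u, t) for u in nodes if u not in has_out]  # original sinks
    return [s] + list(nodes) + [t], new_edges
```

On the graph with edges (a, b) and (a, c), the only source a gets an edge from the super source, and the two sinks b and c each get an edge to the super sink.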
So now we have an embedding of the graph. And now we come to the shape
step. And what we want to do there is find a column assignment. So what
we have is the graph, and it consists of those nodes. They have the
same width, but they can have arbitrary heights. And we want to assign the nodes to
columns. So, for example, like this.
And for the edges, most of the time it's completely clear in which column they must lie,
because we require that they leave at the bottom and enter at the top. So for
those two edges we see here, it's completely determined once the position of the
nodes is known.
But there is one case, this one where we have four bends: well, we have
this middle vertical segment. And so we assign this edge to exactly the column
where its middle vertical segment lies. All right.
So what we want to find is such a column assignment for basically all nodes of the
graph. And we do that with the Biedl and Kant algorithm, with a slight modification.
We start with the super source that we added at the very beginning of the
topology step. And we wanted to have this local symmetry thing, so what we do now is
we draw it at, say, column zero, and then we draw the beginnings of all outgoing
edges so that they're evenly spread out.
The nice thing is we can do that in linear time because it's more or less the same again
as the Biedl Kant algorithm.
So after we've drawn the super source, we are now somewhere in the graph and have
drawn already some of the nodes and some of the edges. And it looks like this. So this
means there is -- there is no direct connection between those two parts. And now we
want to draw a new node. And we know that it has two incoming edges. One lies in
this column and one lies in this column. So since we wanted to have this local
symmetry thing, we need to put the node at the median of the two, so that's either this
one or this one. So let's put it here in this column.
To find the position, we look at all the nodes and all the already drawn edges and find
the highest position to put the node. And then we put it there. What we then do
is draw the edges down and connect them, if we need to, by a horizontal
segment here. And then we also draw the outgoing edges.
But now we have a problem. Maybe some of you can see that. Because if we now use
the same technique again and want to connect this edge to some other node that is
down here well, then it intersects here and we have some kind of weird overlap. We
don't want to have that. So what we can do is something very simple. We just shift all
that is left from here to the left one column.
And we have the same problem here: if we want to draw this edge down, we
have a problem down here with an intersection. So what we do again is a shift. And
now that looks fine. All right.
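As a toy illustration of the bookkeeping in this walkthrough (not the actual Biedl-Kant-based algorithm; both helpers below are my own simplification), one can track per-column occupancy for the "highest free position" and model the column shift as re-indexing:

```python
def place_in_column(depths, col):
    """Place the next node at the highest free position of column `col`,
    one unit below whatever has already been drawn there."""
    depths[col] = depths.get(col, 0) + 1
    return depths[col]

def shift_left_part(columns, boundary):
    """Resolve an overlap by shifting everything that lies in a column
    left of `boundary` one further column to the left."""
    return {v: (c - 1 if c < boundary else c) for v, c in columns.items()}
```

So placing two nodes in column 0 stacks them at positions 1 and 2, and shifting at boundary 2 moves the nodes in columns 0 and 1 one column to the left while leaving column 2 untouched.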
Something I didn't mention until now, we added dummy nodes in the topology step, but
we ignore them here completely. So we kind of relax the topology from the first step.
All right. So what we guarantee now is that we have the local symmetry because we
evenly spread all those out and placed the nodes at the right positions. And we have at
most four bends per edge because, well, the edge leaves the node, then it goes maybe
to some column, and then it goes to the final position.
What we still need to do is the vertical alignment of the predecessors, the spacing
constraints, and the edge bundling. All right.
So we have finished the second step, and now we are at the third step, the
metric step.
And there we want to compute the final coordinates of all the nodes and the edges.
Unfortunately minimizing the total vertical edge length is NP-complete. But we use
some heuristic. And what we do is we want to align the predecessors. So we basically
do that greedily. We pick a vertex, for example, this green one. We look at the
successor of it, and then at all of its predecessors, and check whether we can align
those nodes with the green one.
For the right one we can, but for the left one we cannot, because there is a direct
connection between this node and this node. So the right one is okay. But the left one
is not.
And then we have some new successors we need to look at. So we have this node, and
it has a successor there. So we go back down to the successor and again to its
predecessors and check whether it's compatible, and, yeah, it is. And so we can draw
them at the same level.
So we do that greedily until there are no vertices left that are not in a group
yet, and then we draw that bottom up. And it's fairly simple: we put them at the
lowest possible position. We already draw the horizontal parts of the ingoing edges to
those nodes and then go to the next group we can draw. We complete the drawing of
the outgoing edges, as for this group here and here. And we iteratively do that until
we've drawn all nodes.
So that is what we do for vertical edge length compaction. And we can do that in
(n + b) times n time, where b is the number of columns computed in the shape step.
Then there's something else. We can have -- well, let's say empty columns. Something
easy like this, which we do not want to have, so we want to compact the horizontal
layout. It can be a bit more complicated, something like this, and we also want to
remove those. And we do something very simple. Oh, there is -- I have something
important to add. This looks like we can just remove this part. But, in fact, we cannot.
Because spacing constraints between two edges that are not going to the same node
or coming from the same node may be violated if we do that. So we need to be a bit
careful about that.
What we do is we look at compaction paths, which are basically those red dashed lines
that I showed you, and then we compact the layout along them from right to left. And
we always make sure that those spacing constraint violations cannot happen. Yeah.
And we can do that in m times (b + log n) time, and again b is
the number of columns computed in the shape step.
All right. So now we have all that. The edge bundling I will show later on, again, when
we see how it looks in practice. If we apply all of those steps, the algorithm
has an m squared times n plus b running time.
Now, a bit of experimental evaluation. As I said at the beginning, this came from the
philosophy professor who approached us with this problem. So he supplied us with
some argument maps; that is what those drawings are called. They are fairly small, but
it was nevertheless nice to see the layout for those small examples.
And here's one which we find to look very good; there is not much
improvement possible from our point of view. But sometimes there are problems. This is
also a fairly nice example, where you can also see that we have a kind of edge bundling.
You will see the difference in the next slide, I guess. Here you see the edges are very
close together. It looks kind of like one edge.
But here you see one problem. You see this large node over here, and it has outgoing
edges to this one, this one, this one, this one, and this one over here. And now what
looks kind of strange is that, well, this red one over there is way at the top. It looks like
it would make more sense to just take all those and put them down here.
But since we align the predecessors, that does not really work. What you can also
see here is the edge bundling effect: you see that those two edges are very far
apart because they do not share any common nodes. All right.
A quick word on the running time. Usually it's fairly fast, but again only for small
instances. If we have a graph with 40 nodes, it takes some time. Maybe there are some
ways to improve it. The topology step, the first one, takes the most time.
In conclusion, we use the topology-shape-metric framework and integrate layer-free
upward crossing minimization, but we relax the topology computed in the first step.
We, at least in our opinion, produce clean and structured layouts. And the running time
is reasonable.
And what we also want to do is something about incremental layouts, where nodes
and edges of the graph are added over time.
With that, I want to conclude my talk.
[applause].
>> Lev Nachmansen: Thank you, Andreas.
Time for questions. [inaudible].
>>: [inaudible].
>> Andreas Gemsa: I cannot really, really tell you that, because most of the things we
looked at are very sparse because of the way the layouts or the argument maps are
produced. Usually, when two nodes meet on an edge, it means that you
have an argument that supports maybe another argument. And usually you cannot
support too many arguments with one.
So we haven't really tested for that.
>>: [inaudible] is that people do [inaudible] topology [inaudible] it's a waste of space.
Did you measure the area [inaudible].
>> Andreas Gemsa: No, we did not measure the area. But for the layouts we produced
we had 51 maps. It looked fairly compact. So that was okay.
>>: Well, I don't know if that makes sense in your scenario, but when I first saw the
columns I thought that you allowed the edges to wrap around the column, so you have
this kind of radial layout. Do you know what I mean?
>> Andreas Gemsa: No.
>>: So that you draw the graph on the column, which is round so you can wrap the
edges around the -- [inaudible].
>> Andreas Gemsa: No.
>>: [inaudible] crossings for instance.
>> Andreas Gemsa: Okay.
>>: I don't know if this makes any sense in the [inaudible] scenario.
>> Andreas Gemsa: I don't know either. [laughter]. I cannot really tell you.
>> Lev Nachmansen: One last question.
>>: I have a conjecture which says that you get the same results if you use a
Sugiyama-style approach with an advanced algorithm, or the algorithm of [inaudible],
and adapt it. And the advantage would be that if you have bends, then you
don't get bends somewhere in the middle of the edge, but you have the bends
immediately below or above the vertices. And I think you get something as compact as
yours with no more bends.
>> Andreas Gemsa: Okay. Well, like one of the main problems we have with
Sugiyama, maybe I'll go back if we quickly have the time --
>> Lev Nachmansen: 20 seconds.
>> Andreas Gemsa: Okay. I hurry up. [laughter].
>>: [inaudible].
>> Andreas Gemsa: Later? Yeah. Okay. Later. I show you the picture and then we
will talk about where the problem is maybe. Okay.
>> Lev Nachmansen: Thank you, Andreas.
[applause].
>> Lev Nachmansen: So the next talk is given by Robert Zeranski. And it's on an
application of SAT solving to graph drawing. I think that's one of the first works where
SAT solving is applied to graph drawing.
>> Robert Zeranski: Okay. Hi. I'm talking about how to solve upward planarity using
satisfiability. This is joint work with Markus Chimani.
And what is upward planarity? Given a digraph G, we ask whether there is a planar
drawing such that all edges are drawn upward. That means that all edges are drawn
monotonically increasing, for example, in the Y coordinate.
And let us consider an example. This is a planar drawing of a graph. But it's not
upward planar, because the red edge has a different orientation than the other edges. So
we can put the target of this edge above its source, and then we obtain an
upward planar drawing of this graph. But in general, it is NP-complete to test whether a
graph admits an upward planar drawing or not.
And so let us review some known results. There are many polynomial-time solvable
classes of graphs for upward planarity. For example, single-source graphs, graphs that
have only one vertex without incoming edges, are solvable in linear time. Or if the
embedding is fixed, then the problem can be solved in polynomial time. And also for
series-parallel graphs and so on; there are many such cases.
And there are FPT algorithms for upward planarity in the number of triconnected
components. The first algorithm enumerates all embeddings and then uses the
fixed-embedding algorithm to check whether such an embedding admits an upward
planar drawing.
And the other algorithm enumerates over the R-nodes of an SPQR-tree, using the
series-parallel polynomial-time cases.
And there is also a Branch&Bound algorithm over the triconnectivity structure, which
minimizes the number of bends in a quasi-upward planar drawing. But this is a more
general problem.
And so our goal is to find a procedure solving upward planarity for general digraphs.
And our approach is to give a formula such that this formula is satisfiable if and only if
the graph is upward planar. And so we can test upward planarity using any SAT-solver
as a black box.
And what is satisfiability? I think you know it. Given a formula phi in CNF, that is, a
conjunction of disjunctions, we ask whether there is a satisfying assignment for
this formula.
And here we see such a CNF. And it's a satisfiable CNF because we can set X2 and
X3 to true and then all clauses become satisfied. And so the formula is satisfiable.
And we want to give a formula, so we need some variables for our problem. If we
consider an upward planar drawing of the graph, we can transform it into a
drawing with a spine, where all nodes have the same X coordinate and every vertex has a
unique Y coordinate. And so we obtain a vertical order of the vertices, and we also
obtain a horizontal order of the edges where they overlap.
And so we have variables for the vertical order of the vertices, where
such a variable tau(V, W) is set to true if and only if V is below W, and it's false
otherwise.
And we only build such a variable if the index of the node V is smaller than the index of
the node W, because if we had both variables tau(V, W) and tau(W, V), the latter would
just be the negation of the former. And so the order induced by these tau variables is
[inaudible] and anti-symmetric. And it's the same for the sigma variables for the
horizontal order of the edges.
And now we want this induced order to be transitive, too. And so we give rules for
the transitivity of these variables. And it's very easy: if U is below V and V is
below W, then U is below W. So if the variable tau(U, V) is set to true
and also tau(V, W) is set to true, then tau(U, W) is set to true by this implication, and
this implication is one clause.
Okay. That's very easy. And we also need an upward drawing. So if we have an
edge (X, Y) in our graph, then we add a unit clause tau(X, Y), which forces the
variable tau(X, Y) to true. And so we ensure that every edge goes from the bottom to
the top.
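The order part of the encoding described so far (one tau variable per vertex pair, transitivity clauses, and a unit clause per edge) can be sketched as follows. The brute-force satisfiability check is only for illustration on tiny graphs; in the actual approach the clauses are handed to a SAT solver:

```python
from itertools import combinations, product

def vertical_order_clauses(nodes, edges):
    """Build the tau part of the CNF: one variable per vertex pair
    (meaning 'u is below v'), transitivity clauses, and one unit
    clause per edge forcing the source below the target."""
    idx = {v: i for i, v in enumerate(nodes)}

    def tau(u, v):
        # Literal for 'u below v'; only the pair with smaller index first
        # gets a variable, the other direction is its negation.
        return ((u, v), True) if idx[u] < idx[v] else ((v, u), False)

    def neg(lit):
        pair, sign = lit
        return (pair, not sign)

    clauses = []
    # Transitivity: (u below v) and (v below w) implies (u below w).
    for u, v, w in product(nodes, repeat=3):
        if len({u, v, w}) == 3:
            clauses.append([neg(tau(u, v)), neg(tau(v, w)), tau(u, w)])
    # Upward rule: every edge (x, y) must go from bottom to top.
    for x, y in edges:
        clauses.append([tau(x, y)])
    return clauses

def brute_force_sat(nodes, clauses):
    """Try all assignments (tiny graphs only; a real run uses a SAT solver)."""
    pairs = list(combinations(nodes, 2))
    for bits in product([False, True], repeat=len(pairs)):
        value = dict(zip(pairs, bits))
        if all(any(value[p] == sign for p, sign in clause) for clause in clauses):
            return True
    return False
```

On a small DAG these clauses are satisfiable (any topological order gives a satisfying assignment), while adding an edge that closes a directed cycle makes them unsatisfiable.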
And the main key of the formulation are the planarity rules, which ensure that we have
no crossing in our drawing. And that's not so complicated: if we have a
vertex between the source and the target of an edge, then all adjacent edges of this
vertex have to be on the same side with respect to the considered edge (X, Y).
In this example here, if U is above X and below Y, then the edges E and F are both to
the left of G, or both to the right; if not, then we have a
crossing. The rule is very simple: the left-hand side of the rule says that U is
between X and Y, and the right-hand side says that the respective sigma variables have
the same value. And such a rule we can transform into two clauses. And we have to
consider such a rule for every triple of edges where E and F have a common vertex. So
that's our formulation.
The problem is there are many variables, and there are many clauses. But the first
question is: are these clauses sufficient to ensure upward planarity, or do we need more
clauses? The easy part is to see that this is a correct formulation, due to our
construction. Because if we convert such a drawing into the spine drawing, then we can
normalize the drawing, then build the order tau and the partial order sigma of
overlapping edges, and then almost all of our constraints become satisfied; we have
to extend the partial order to a full order to satisfy all transitivity rules, and then we have
a satisfying assignment. So we can easily go from a drawing to a satisfying assignment.
But the interesting part is to get a drawing from a satisfying assignment. And therefore,
we can build a rotation system from the order induced by our sigma variables, which are
set to [inaudible]; and so when one is set to true, we can order the edges at such a node.
And then we have to prove that this rotation system yields an upward planar drawing.
And this is nontrivial. There are many cases. We use the order of the tau variables
so we can order the nodes, and then we can order the edges according to their source
vertex index from high to low; and if two edges have the same source vertex, we order
them according to the rotation system we built before.
And then we can compute our drawing from top to bottom by inserting the edges in this
order, and we can prove by induction that we have no crossings, using our planarity
rules; we don't need more rules. And we also need all the cases shown before.
But there are many cases, and this is a very technical proof that the clauses are
sufficient.
But the main question is: there are many variables, many clauses. Does it work? This is
a very large formulation. And we did some experiments, for example, with the Rome and
North DAGs. This is a benchmark set generated from practice, with up to 100 nodes.
And with our formulation we can also give an ILP approach, which is very similar
to the SAT approach, because we can turn such constraints into ILP constraints and
then we have an ILP model. And we can solve 97 percent of the Rome and North
DAGs within 0.1 seconds. And we can solve all the instances within 60 seconds per
instance.
But the ILP approach can solve only 40 percent within 0.1 seconds and cannot solve 40
percent within 60 seconds per instance. So the SAT approach is much stronger than
the ILP approach. And we use MiniSat as our SAT solver; that's a well-known solver
in the SAT community. And we use CPLEX as the ILP solver.
And it was not known before whether a given Rome or North DAG is upward
planar or not. And you have heard about the layer-free upward crossing minimization
approach, which tries to minimize the number of crossings in an upward drawing.
And we can see that this approach works well for North DAGs, because up to 91
percent of the upward planar instances were solved with the right number of crossings.
But it's weak for Rome graphs, because only up to 56 percent of the upward
planar instances were solved correctly.
And now the question is how we compare to the Branch&Bound algorithm. But no
implementation of this algorithm was available, and so we had to compare to the data
we got from the paper. The paper reports needing 10 to 20 minutes per instance
for the instances used in that paper, and for these instances we need 0.36 seconds on
average, and we can solve 92 percent of the instances in under one second. But we have
to scale these running times, and therefore we used SPECint, and we found that our
machine is up to 36 times faster than the machine used in that paper. But still, this is not
a fair comparison.
And so let us come to future work. We need an experimental evaluation of this
approach. And so we want to implement the Branch&Bound algorithm [inaudible].
And also we want to restrict this algorithm to the upward planar case, because
it solves a more general problem. And if we restrict it, we can solve non-upward-planar
instances faster, because we don't have to compute the right number of bends.
And also there are FPT algorithms. But no implementation of these algorithms are
known yet, and so we want to implement these algorithms and also want to compare
the algorithms against the Branch&Bound algorithm and against our approach.
And we also want to extend our formulation to upward crossing minimization and the
maximum upward planar subgraph problem. But these are optimization problems, and
so we have to use [inaudible] satisfiability, and we guess that in this case the ILP
approach becomes stronger.
And so let us conclude. We have a simple framework using satisfiability, and so we
can use strong SAT-solvers as a black box. And there are many free
SAT-solvers on the Internet. And SAT-solvers differ a lot, so some work well on certain
kinds of instances and not so well on others. So there is much
room for experiments. And we also showed that the approach works well in
practice for graphs with up to 300 nodes.
But the main problem is that we have no guarantee for the running time, so it is possible
that we run into a dead end for graphs with only 50 or 60 nodes, and then you need days
to compute an upward planar drawing. That can happen.
And also there are many variables and clauses. So our formulation is very large. And
so we can't solve instances with 2,000 nodes. That's not possible because we have
millions of clauses. Or more than millions of clauses.
And so data reductions and kernelizations for this problem would be nice. But there is
only one kernelization approach, which is very old and not so nice to use, so that's
future work to do for upward planarity. Okay. So thank you for your attention.
[applause].
>> Lev Nachmansen: Thank you, Robert. Time for questions.
>>: Can you extend your approach to upward planarity on the sphere? If you take the
sphere and the plane, the difference is between the characterizations for upward
planarity. On the plane you can always add an edge from the source to the
[inaudible] -- only one source, one [inaudible]; on the sphere you don't have this edge.
And of course [inaudible] some kind of circular dependencies, so that you can run
around the sphere. I don't know whether you can model that with your clauses.
>> Robert Zeranski: I don't think so. I'm not sure, but --
>>: The question is do you need the left end and right end -- the left side and the right
side of the page.
>> Robert Zeranski: For our proof we need the left end and the right end. We need
space that we can draw. And this is a boundary.
>>: You quantify what many variables and many clauses means?
>> Robert Zeranski: It's cubic -- cubic in the [inaudible] edges. And for instances
with 300 nodes and 500 edges, we obtain 150 million clauses.
But that's still feasible for a SAT-solver.
>>: Yes. So in your picture you had these vertices aligned and the edges are either on
the left or on the right page. But in general, your graph may not be [inaudible] and so
the edges must cross this line. [inaudible].
>> Robert Zeranski: No, because we can produce such a drawing where we cross the
spine, but between the places where we have to cross the spine there is no edge.
And if there is an edge, then the graph is not upward planar. And our constraints
ensure that this is possible.
But the proof for that is very technical, and there are many cases; this is one
of the cases: we have to cross the spine, and then of course if there
is an edge there, then the order of the tau variables we have is just wrong, or the graph
is not upward planar.
>> Lev Nachmansen: Thank you.
>>: You said sometimes it's slow and sometimes it's fast. Can you give some hints on
the dependency on the structure?
>> Robert Zeranski: We didn't try this a lot, but, for example, we can say
we have an experiment for [inaudible], and if the graph is upward planar but we add one
edge so that the graph becomes non-upward planar, then the running time is much
worse -- the problem becomes slow, because if we are near non-upward planarity,
we have to enumerate too many embeddings. We simulate the embeddings with this
formulation, and many of these embeddings are different but they are dead ends. And
so the SAT-solver has to try too many things.
And also, can we produce instances where we need 10 or 20 minutes to solve
the problem? Yes, we can. And if we have too many sources and sinks, then the
problem also becomes hard and we need too much time.
>> Lev Nachmansen: Thank you.
[applause].
>> Lev Nachmansen: So we move to the third talk. It's given by Soroush Alamdari.
And the title of the talk is self-approaching graphs.
>> Soroush Alamdari: So I'm Soroush. This has been work with Timothy Chan, Elyot
Grant, Anna Lubiw, and Vinayak Pathak about self-approaching graphs. I'll get to what
they are in a minute.
So that's what we have. And Anna is flying back home to Waterloo this
evening, but she will be going to Vancouver first and then to Toronto
and then to Waterloo. But as you see, going from Redmond to Vancouver you are
getting further from Waterloo. And even from Vancouver to Toronto, at the end of the
flight, there is a part where you are also getting away from Waterloo. That's a waste of
time and energy. And we like time and energy. So we want to find ways to make this
better.
So one idea is greedy drawings. A greedy path is a path in which, going from S to T,
at each step you get closer to your destination. So you wouldn't go to Vancouver.
And a greedy drawing is a drawing in which for every pair you have a greedy path.
So in a greedy drawing you can find a greedy path locally, by just going to a vertex
that gets you closer to your destination. And we know that any 3-connected planar graph
has a greedy drawing, by Angelini et al., and independently by Leighton and Moitra.
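The local routing idea just described can be sketched in a few lines. This is only an illustration: the names `pos`, `adj`, and `greedy_path` are mine, not the speaker's.

```python
import math

def greedy_path(pos, adj, s, t):
    # Greedy routing: from the current vertex, move to the neighbor
    # closest to the destination t, but only if it is strictly closer
    # than the current vertex; otherwise we are stuck.
    path = [s]
    cur = s
    while cur != t:
        best = min(adj[cur], key=lambda v: math.dist(pos[v], pos[t]))
        if math.dist(pos[best], pos[t]) >= math.dist(pos[cur], pos[t]):
            return None  # stuck: no neighbor is closer to t
        cur = best
        path.append(cur)
    return path
```

In a greedy drawing this loop succeeds for every pair by definition; in an arbitrary drawing it can get stuck.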
So the downside is that the greedy path -- the blue one here -- can be quite long with
respect to the distance between source and destination. So, yeah. So we use
the notion of self-approaching curves. A curve is self-approaching
if for any three points A, B, and C along it, in that order, the distance of B
and C is smaller than the distance of A and C.
So in another sense, going along the curve, you are always getting closer to the points
ahead. So here ST is a self-approaching curve.
However, TS would not be a self-approaching curve, because taking A, B, and C here
you are violating the condition: going from A to B you are getting away
from C. So you want to be getting closer to the points ahead.
An equivalent condition is that no perpendicular line at any point of the curve intersects
the curve ahead. So we define self-approaching drawings as straight-line
drawings in which for each pair S and T there is a self-approaching path connecting
them.
So these are good things. Let's see how they compare to greedy
drawings. In a greedy drawing you are getting closer to your destination at each
step, but in a self-approaching drawing you get closer to your destination
continuously. So, for example, here, going from S to T, this is not self-approaching,
because here you start getting further away from points ahead.
So basically the perpendicular to that edge can't intersect any of the points
ahead. The good thing is that we have bounded detour. So that's very good.
The downside is that the greedy strategy does not work here. So, for
example, going from S to T you can go to A, and you will get closer to T. But
then you can't go to C, because C is exactly perpendicular to the edge from S to A.
So we worked on three questions. First, given a drawing of a graph, is it
self-approaching? We can't solve it in general, but we give some partial results. Given a
graph, does it have a self-approaching drawing? Well, again, we can't generally solve
it. We solve it for trees.
And given points in the plane, can we find a network that connects them such that the
network is self-approaching? Yes. We can find a linear-size network, but with
Steiner points.
So let's look at the first problem. Given a drawing of a graph, is it
self-approaching? A natural subproblem is: given a drawing and S and T, is there a
self-approaching path connecting S and T? We show that for drawings in 3D
[inaudible].
And if the graph is a path, we can test in linear time whether it's a self-approaching
path, and in 3D we can test it in n polylog n time, and we can show that you cannot do
better than n log n.
So I'll be showing this result. We want to test whether a path is self-approaching. One
idea would be to just check each edge, and its perpendicular strip, against all the points
ahead. Because it's a straight-line drawing, you only need to check the
vertices. The improved idea is that for each edge you only
need to compare its strip with the convex hull of the points ahead.
So what you do is you start from the destination and walk back, constructing the convex
hull incrementally, and at each step you check that the strip does not intersect the convex hull. Here it
does. So this is not a self-approaching path. Okay.
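The naive quadratic test the speaker starts from can be sketched as follows. For a straight-line path, the distance to every later vertex must be non-increasing along each edge, which amounts to checking that each later vertex lies on the forward side of the perpendicular at the edge's endpoint. This is a sketch of the brute-force check, not the authors' convex-hull implementation.

```python
def is_self_approaching_path(path):
    # path: list of (x, y) vertices of a polygonal path.
    # For each directed edge (u, v), every later vertex w must lie
    # on or ahead of the line through v perpendicular to the edge;
    # otherwise the distance to w increases somewhere on the edge.
    n = len(path)
    for i in range(n - 1):
        ux, uy = path[i]
        vx, vy = path[i + 1]
        dx, dy = vx - ux, vy - uy
        for j in range(i + 2, n):
            wx, wy = path[j]
            if dx * (wx - vx) + dy * (wy - vy) < 0:
                return False
    return True
```

The convex-hull refinement in the talk only speeds this up: walking backwards from the destination, each edge's strip is tested against the hull of the vertices already seen instead of against all of them.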
So let's look at the second problem. Given a graph, can you draw it
self-approaching? For trees of maximum degree larger than three, the only
graphs that you can draw are subdivisions of K1,4, and it's easy to see how we would
draw them: just as a cross.
For trees with maximum degree three we show that you can draw it if and only if the tree
does not have a subdivision of this graph here as a subgraph.
So let's look at the third problem, which is the most interesting. Given points, we want to
construct a network that is self-approaching. A natural candidate is the
Delaunay triangulation. With n squared edges it's easy, of course, so we want few edges.
Delaunay triangulations are natural candidates: they are already spanners, the
detour is bounded already, so you would suspect -- but no. There are examples that show
that the Delaunay triangulation is not a self-approaching graph.
What about Manhattan networks? Manhattan networks are networks in which any pair of
points is connected by an x,y-monotone path. And an x,y-monotone curve is a
self-approaching curve. So, yeah, they are self-approaching networks. They have
Steiner points, and the size is n log n, and that can't be done better.
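The x,y-monotonicity property behind Manhattan networks is easy to check for a polygonal path. This is my own small illustration; the function name is hypothetical.

```python
def is_xy_monotone(path):
    # A polygonal path is x,y-monotone if its x-coordinates and its
    # y-coordinates are each monotone (all non-decreasing or all
    # non-increasing) along the path; such a path is self-approaching.
    def monotone(seq):
        return (all(a <= b for a, b in zip(seq, seq[1:])) or
                all(a >= b for a, b in zip(seq, seq[1:])))
    return (monotone([p[0] for p in path]) and
            monotone([p[1] for p in path]))
```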
So what we do is we construct a linear-size network that connects every pair with a
self-approaching path.
Before going there, how it's done -- so what are well-separated squares? So two
squares are well-separated if their distance with respect to the diameter of the larger
one is large. So, for example, here these two squares are one over two separated.
Yeah.
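The separation ratio can be computed directly. Here is one common convention (distance between the squares divided by the larger diameter); the exact normalization varies between papers, so treat this as a hedged sketch.

```python
import math

def separation(sq1, sq2):
    # Squares are (x, y, side) with (x, y) the lower-left corner.
    # Returns dist(sq1, sq2) / diameter of the larger square;
    # the pair is s-well-separated if this ratio is at least s.
    def interval_dist(a1, a2, b1, b2):
        # distance between intervals [a1, a2] and [b1, b2]
        return max(a1 - b2, b1 - a2, 0.0)
    x1, y1, s1 = sq1
    x2, y2, s2 = sq2
    dx = interval_dist(x1, x1 + s1, x2, x2 + s2)
    dy = interval_dist(y1, y1 + s1, y2, y2 + s2)
    return math.hypot(dx, dy) / (max(s1, s2) * math.sqrt(2))
```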
So let's look at the problem. We have these four points. We want to construct a
network that connects every pair of them with a self-approaching path. First thing we
do, we construct the quad tree: we subdivide the space into squares. And then we
remove vertices, so it's a compressed quad tree. We remove the vertices of the quad
tree that are not useful to us: they just have one child, they don't separate anything.
So this will be our compressed quad tree. And that's the tree for it: the larger square
would be the root, and so on.
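A compressed quad tree along these lines can be sketched as follows. This is a toy version under my own assumptions (distinct points in a half-open bounding square, at most one point per leaf), not the authors' data structure.

```python
def build(points, box):
    # box = (x, y, size); recursively split until <= 1 point per square
    x, y, s = box
    if len(points) <= 1:
        return {"box": box, "children": []}
    h = s / 2.0
    children = []
    for cx, cy in ((x, y), (x + h, y), (x, y + h), (x + h, y + h)):
        sub = [p for p in points
               if cx <= p[0] < cx + h and cy <= p[1] < cy + h]
        if sub:
            children.append(build(sub, (cx, cy, h)))
    return {"box": box, "children": children}

def compress(node):
    # drop internal nodes with exactly one child: they separate nothing
    node["children"] = [compress(c) for c in node["children"]]
    if len(node["children"]) == 1:
        return node["children"][0]
    return node
```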
And note that each of our vertices is assumed to be a very, very tiny square. So, yeah.
We add dummy nodes, or Steiner nodes, at every crossing that we have. And then what
we do is we connect each node of the compressed quad tree to its parent by orthogonal
edges and add the required dummy Steiner points. So we have this thing.
The good property here is this: from a vertex, you can find an x,y-monotone path to any
of the four corners of any of its ancestors. So that's good.
Now, what we do is we construct a well-separated pair decomposition that is epsilon-
separated; epsilon is a constant, though. So this well-separated pair decomposition
gives us order n pairs of squares that are well-separated, and for any pair of vertices
there is a pair of squares in this decomposition such that the two vertices appear in the
two elements of that pair.
So, for example, here for these four, this is one possible well-separated pair
decomposition: A and B, B and C and D -- by C and D, I mean the [inaudible] C and
D, so the set C and D would be the smaller square on the corner. So these will be our
well-separated pairs.
Then for each pair -- note that we have a linear number of them, so we can
add one edge for each of them. For each pair you connect the two corners
of those two squares, so that you can [inaudible] from any vertex in one of them
to any vertex in the other.
So, for example, here, if we connect that corner to B, you have a self-approaching
path from B to any of those two, C and D, and vice versa,
because, as I said, you can find an x,y-monotone path from C to that corner and
from D to that corner.
C and D, again -- those are in the same square -- we connect them. Then A with C and D. So
here's where there is a little problem. If we connect A to any of the corners, because
they intersect in y-coordinate, there might be problems. So how do we handle
this? We take everything and rotate it 45 degrees. So any
vertices that were aligned are now not aligned.
So running the algorithm again, constructing the graph again, and combining them
will give us the desired network. Any pairs that were not connected in the first iteration
will be connected here.
So this will be good, except that we're taking the union of two graphs, so we won't be planar.
Open problems. We have lots of them. For the first problem -- given a graph drawing, is
it self-approaching? -- well, we don't know if it's polynomial or not. That's one of the
problems. In 2D, given S and T, is there a self-approaching path connecting them? So
the drawing and the pair are given. In 3D we proved that it's hard; for 2D we
couldn't.
Second problem: given a graph, does it have a self-approaching drawing? We did it for trees.
Whether testing is polynomial in general, we don't know. Are
3-connected planar graphs self-approaching? We don't know that. And are there
ways to find self-approaching paths with low [inaudible]? We do not know that.
Third problem -- these two are my favorite. Given points in the plane, we want to
connect them with self-approaching networks. Can you do subquadratic size without
Steiner points? We have no idea, but I think it's interesting.
And can you do better than Manhattan networks if you want to stay planar? We don't
know that either. So, yeah. Any of them would be good. Thank you for your time.
[applause].
>> Lev Nachmansen: Thank you, Soroush. Questions?
>>: [inaudible].
>> Soroush Alamdari: [inaudible]?
>>: Have the trees that can be drawn --
>> Soroush Alamdari: Yes.
>>: [inaudible]. Is this just a [inaudible].
>> Soroush Alamdari: Straight line. So things get very complicated when you are not
looking at straight-line drawings. You can have curves that
are part circles and part straight lines. And, yeah.
>>: Is your crab more [inaudible] complicated than a lobster?
>> Soroush Alamdari: That's the graph that you saw. It's just --
>>: But is it a lobster or is it --
>> Soroush Alamdari: No, no, it's not [inaudible].
>>: It's not [inaudible].
>> Soroush Alamdari: No, no, no. It has just -- it has these things that -- see, it's a
crab. It has these -- [laughter]. And these are the eyes.
>>: [inaudible].
>> Soroush Alamdari: Yeah, my English is not that good. [laughter].
>> Lev Nachmansen: Okay. Thanks again.
[applause].
>> Lev Nachmansen: And now we come to time-space maps. And the speaker is Marc
van Kreveld.
>> Marc van Kreveld: All right. This is joint work with Sandra Bies, who was my master's
student a while ago. She now works for a company. The talk is about time-space
maps, a type of map, a cartographic product.
So the idea of time-space maps is that you are going to show travel times in a country,
for instance. So not geographic locations of cities but somehow how long it takes to get
from one city to another, for instance by train. And this is a typical Dutch yellow train.
Not very relevant to the talk, though.
You could show travel times, of course, simply in a table, a cross-table. You could see
that it takes, for instance, from Arnhem to Amsterdam 70 minutes [inaudible]. But that's not
really a visualization, of course. And that's what we prefer.
So here's a map of the Netherlands. A normal map, with the train lines shown, on the
left. So remember the shape of the Netherlands; that's useful for this presentation. So
look exactly at what shape it has.
On the right you see a rectilinear schematic drawing -- which graph drawers seem to like
a lot too -- used by the Dutch railways to show the same set of train tracks.
Okay. Now suppose Amsterdam is the city you're interested in, and you want to show
travel time from Amsterdam only -- so not between every pair; that would be closer to
what [inaudible] was talking about on the first day. We're taking one central location, in
this case Amsterdam, and we want to see: can we show travel time from Amsterdam by
train?
A normal map would show distance from Amsterdam. So everything within 50
kilometers is in a circle centered at Amsterdam. And if you would take a travel time, say
60 minutes, you would get a contour with a strange shape around Amsterdam. You
would, for instance, see that Rotterdam is slightly more than 60 minutes away, because
it's just outside the contour. 's-Hertogenbosch is just inside, nearly on the contour.
And so on. Okay?
So the question now is: can we deform a map -- we squeeze and pull and do some
stuff -- such that constant travel time from Amsterdam becomes a circle on the map?
Okay?
So this contour has to become a circle. Well, we can do that. Here's the map with all
the red dots of the intercity stations. Amsterdam is the yellow dot there. And this would be the map.
Okay. We can do it. So you can see, once again -- see the laser pointer --
's-Hertogenbosch is just inside, Rotterdam is just outside, and so on. Let me go back
and forth a bit so you can see better what's going on.
So what you see, for instance, in the south there, that's Limburg, with Maastricht. So
you can see, if you put these two maps on top of each other, that it comes
closer. So what you see is that Limburg is relatively closer to Amsterdam than average.
Whereas other cities, like Groningen in the north, all the way at the top, go a bit
further away. Okay? So that's what you see with such a map. You basically see
regions and how well they are connected to some central location. And these contours
are circles now. All right. This is another one. This is where Henk Meijer lives, in
Middelburg. So you can see how far it is for him to travel to Utrecht or to Eindhoven or
to any other city.
And you can see for reference the map of the Netherlands in the lower right-hand
corner.
So here you can see that Maastricht, inside Limburg, the lower part, has moved
away. So it's relatively hard to reach, because you have to make this detour through the
Netherlands. Okay?
All right. We will make these deformations using triangulations. And this is not
a new idea. This was suggested many years ago by Allen Saalfeld at the
SoCG conference in 1987. He suggested using triangulations: if you know for
certain points how they move, then use the triangles to deform the rest as well.
A couple of years later, Edelsbrunner and Waupotitsch, also at SoCG, had a different
application of using triangulations to make deformations. They made contiguous area
cartograms, where basically the states of the United States are deformed to represent
population. You don't have to make these cartograms with triangulations; you can also
just use a spring embedder. People have done that too, just for reference. If you use a
spring embedder you could get something like the picture on the right. So that's not
made with triangulations. In this case, that probably looks better.
Okay. So how does a triangulation help to deform a map? Well, what we have, of
course, is only the travel times from one city to all the other cities. I mean, we don't
have the travel time to the coastline or anything, because there's no train there. Still we
want to deform the whole map. We don't want to move Maastricht closer without
taking the shape of the country with it. So you want to deform more than what you have
information about.
So what you basically want, you have these points and you know how they move.
Maybe this one didn't move for some reason. Maybe this one moves there and this one
moves there. Now you want to deform the whole map. So also everything in between.
And if that's a triangle, all the stuff here moves according to the deformation of those
points, by interpolation of the movement. You can do this very simply
using barycentric coordinates. It doesn't really matter exactly how that works, but any
point inside this triangle has barycentric coordinates expressed in the three corners
and, depending on where it lies, it moves more according to this point or more
according to that point. In fact, it becomes a weighted combination.
And if this triangle deforms into this triangle according to these green arrows then this
shape will become like this, this piece will become like this, and you basically defined
deformation everywhere from having it just at points. Okay.
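The interpolation step can be sketched like this. It is a minimal sketch of standard barycentric mapping, not the authors' implementation.

```python
def barycentric(p, a, b, c):
    # barycentric coordinates of p with respect to triangle a, b, c
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    l1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    l2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return l1, l2, 1.0 - l1 - l2

def deform(p, tri_old, tri_new):
    # move p by keeping its barycentric coordinates fixed while the
    # triangle's corners move from tri_old to tri_new
    l1, l2, l3 = barycentric(p, *tri_old)
    a, b, c = tri_new
    return (l1 * a[0] + l2 * b[0] + l3 * c[0],
            l1 * a[1] + l2 * b[1] + l3 * c[1])
```

Corners map to corners, and every interior point follows as the corresponding weighted combination.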
Let's go quickly through an example of what's going on. Our central location is
the red one. Of the blue points we know the geographic locations, but they want to go
somewhere else, because that's where the travel time puts them.
We move them radially with respect to the center. Why radially? Well, why something
else? So we just keep it radial.
So what we could do is, for instance, just take the Delaunay triangulation. It has these
nice shape properties -- no skinny triangles if they can be avoided. So maybe we
could just use the Delaunay triangulation to define the deformation.
Well, here are the new points. They moved. And what happens if we put that same
Delaunay triangulation on the new points is that you get self-overlapping triangles, like
here. So if there would be something else, like a river here, this river would probably
start to self-intersect. And you get things that you would not allow on a map.
So we cannot just use the Delaunay triangulation and hope that it works, because it
won't. This would be the Delaunay triangulation of the final destinations, but
that's of course a different triangulation. So let's go through a couple of questions and
answer them during the rest of the presentation.
Well, first of all, we want to deform the coastline as well, and we don't have information
there. So we don't only have to do interpolation, but we also have to do extrapolation,
outside all the points. Can we do it easily? Well, yeah: we just make a bounding box
that fits both the geographic locations and the time locations, make it a bit bigger, keep
it stationary, and then it just works.
All right. Is there actually a triangulation that would work for the points and their initial
and final positions -- some triangulation that does not give
self-intersections? Well, yes, there is. You can make a radial triangulation. You just
connect the central location with every other point, extend these edges to the bounding
box, and then complete this to a triangulation, which basically comes down to
connecting the points up in cyclic order around the central location. And now these
points can move as they want. Of course, you have to complete it with some more
diagonals. But these points can move as they want; it doesn't matter where they go.
You will never get a collapsing triangle. Okay? So that works.
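The cyclic-order step can be sketched as follows (my own illustration; completing the triangulation out to the bounding box is omitted):

```python
import math

def radial_fan(center, points):
    # sort the points cyclically around the center and return the fan
    # of triangles (center, p_i, p_(i+1)); radial movement of the
    # points preserves this cyclic order, so no triangle collapses
    ordered = sorted(points, key=lambda p: math.atan2(p[1] - center[1],
                                                      p[0] - center[0]))
    return [(center, ordered[i], ordered[(i + 1) % len(ordered)])
            for i in range(len(ordered))]
```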
Well, does this give a good deformation? No. That's again Middelburg, and it looks
horrible. So that was not such a good idea. Many artifacts. We gave it a name
anyway: static radial deformation.
Okay. Is there a different triangulation than this radial one that would work? Well, let's
try to make it more Delaunay. What we can do is take some edge, like this one, try to
flip it, and see whether the deformation would make it collapse or not. If it
doesn't collapse, we can use the different triangulation.
So we try to flip as many edges as possible, to get as close as possible to Delaunay,
but only when it's allowed -- when it does not give these collapses.
Usually this works. I mean, you're not guaranteed to be able to do
any flips, but in practice we can do many. We get a lot closer to Delaunay, and that's
good. Okay? For instance, maybe you get this.
Now, does this give a nice deformation? Well, it's a lot better, right? There are still
some artifacts, but it's much nicer. We call this static hybrid, because it's a hybrid
between the radial and the Delaunay triangulation.
Okay. Can we do something else? Do we need to take one triangulation, or can we use
different triangulations throughout the process? And, yes, this we can do. We can
maintain the Delaunay triangulation while we move the points: at some point a triangle is
no longer Delaunay as the points move further, we apply the flip, and we continue with the
new Delaunay triangulation. We can just do this.
Then other points will not move smoothly along a line; they might zig-zag a bit
in their path. But it works. It still gives a deformation that is a homeomorphism.
Okay. Here's the map we would get from Limburg in this case, and that looks better.
Usually it's very nice. There can be some artifacts; it's not clear whether they can
ever be avoided, but, yeah.
We call this dynamic Delaunay, because the triangulation changes. Let's look quickly
at how this really works. At the initial position -- the geographic locations -- we
parameterize each point with t equal to 0. The final location is t equal to 1. We run time from
zero to one, and we maintain the Delaunay property while the points move slowly.
And as soon as the circumcircle of a triangle contains another point of the set -- not just
a red one but also a blue one -- we flip, in this case this edge to that edge, and we
continue with that triangulation. That's simply what we do.
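The flip condition is the standard incircle predicate. This sketch assumes counterclockwise orientation and ignores the numerical-robustness issues a real kinetic implementation would need to handle.

```python
def in_circumcircle(a, b, c, d):
    # True if d lies strictly inside the circumcircle of the
    # counterclockwise triangle (a, b, c): the classic determinant
    # test used to decide Delaunay edge flips
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0
```

As the points move, the moment this predicate becomes true for a triangle and the opposite vertex of a neighboring triangle is exactly when the shared edge gets flipped.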
Okay. We can compare these methods on running time. The static radial is quite
simple: in n log n time we can compute the deformation if we have n cities. If the map
has n vertices, computing the new map takes log n time.
The static hybrid is a bit more expensive, because we might do a quadratic number of flips.
Computing the map is, again, cheap. Dynamic Delaunay is related to one of the big
open problems in computational geometry, namely: how many changes are there in the
Delaunay triangulation for linearly moving points? People don't know.
Somewhere between quadratic and something close to cubic are the upper and lower
bounds. But what it really is, we don't know.
With this radial movement we also don't know; it's not easier as far as we can tell.
What's maybe also quite interesting: we wanted to somehow quantify that dynamic
Delaunay is a better method than the other two. And how do you quantify that? It's not
that simple. We're talking about deformations. They're all correct; they all place the
cities at the correct location.
So what we tried to define is a distance deformation and an angle deformation. The
distance deformation basically says: if you take some line segment and you apply
your deformation, then probably you will get some kind of jagged curve. How much
longer does it get, typically, in your deformation?
And then you can see that static radial adds a lot of distance, in kilometers. Static
hybrid on average doubles it. Dynamic Delaunay makes it just a bit longer.
That's probably good, even though it's just a quantification that we made up ourselves.
We don't know.
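One plausible reading of this measure can be sketched as follows. This is my own reconstruction, not the authors' code: sample the segment, deform the samples, and compare the deformed polyline's length with the straight-line distance between the deformed endpoints.

```python
import math

def added_length(p, q, deform, samples=100):
    # sample the segment pq, push the samples through the deformation,
    # and return how much longer the deformed polyline is than the
    # straight-line distance between its deformed endpoints
    pts = [(p[0] + i / samples * (q[0] - p[0]),
            p[1] + i / samples * (q[1] - p[1])) for i in range(samples + 1)]
    moved = [deform(pt) for pt in pts]
    length = sum(math.dist(u, v) for u, v in zip(moved, moved[1:]))
    return length - math.dist(moved[0], moved[-1])
```

The identity deformation adds nothing; any deformation that bends or folds the segment adds length.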
For the angles we can do the same. We take a 60-degree angle made from three
points. We measure where the three points end up after the deformation. We get a
different angle than 60 degrees, plus or minus some value. And we can see that
this is also best for dynamic Delaunay -- we think it's best if it's closest to 60
degrees.
All right. One more picture. Here we see [inaudible] as the central location, right here.
And you can see now, in these lines, where the points move to. So to get here, it's
hard, because you have to go all the way around.
Now, particularly this town of Lelystad wants to go here, because the only
train connection to Lelystad runs here. So you have to go all the way around. That's
why it's so far away. Right?
So this gives one of these annoying maps, where Lelystad has basically pierced
into North Holland. You could call it an artifact, but it's not clear whether it
could ever be avoided, because it has to go there. Okay? And that's basically it.
We saw time-space maps -- the user-centered version, so not for all pairwise distances,
but just from one central location to the rest. Thank you.
[applause].
>> Lev Nachmansen: Thank you, Marc. Start with David.
>>: So in the good case, when the Delaunay triangulation works throughout the
transformation, you're still doing a piecewise linear interpolation, which is not very
smooth. Have you thought about using natural neighbor interpolation or other, smoother --
>> Marc van Kreveld: No. It is indeed an idea, if one could do that, yes. Of course, the
question arises both for natural neighbor interpolation and for Delaunay: for which
position of the points? For the initial, the final, or the halfway point -- that would also be
the question.
>>: But that was also Delaunay based, so --
>> Marc van Kreveld: That's right. That is definitely a good possibility, yes.
>>: So another alternative to avoid these artifacts along the boundaries would be
moving these squares which we had used on [inaudible].
>> Marc van Kreveld: Okay.
>> Lev Nachmansen: Martin.
>>: So when you measure these deformations, do you take pairs of input points or
other sample points or --
>> Marc van Kreveld: Yes, we took a hundred pairs of points at distances 10, 20, 30,
40, and 50 kilometers. We basically sampled 100 points on each line segment,
measured where they went, and then added up those hundred distances.
>> Lev Nachmansen: Stephen.
>>: Did you consider taking a compatible triangulation between the input and the output
with the [inaudible]?
>> Marc van Kreveld: No, we did not consider it. It would definitely be an option. You
take the compatible triangulation with Steiner points, and of course you have to define
how the Steiner points move. All right? So you would have to define the interpolation of
the Steiner points anyway, I guess. Yeah?
>> Lev Nachmansen: Anna?
>>: It seems curious that you didn't use the rail lines because that [inaudible].
>> Marc van Kreveld: To show them you mean?
>>: No, to -- I mean, you're sort of straightening them out [inaudible].
>> Marc van Kreveld: Not really. You're not really straightening them out.
>>: No?
>> Marc van Kreveld: Okay. You could say -- okay. Well, I didn't make up this type of
map. I mean, I saw it in a thesis, hand-drawn versions more than 20 years ago and I
thought at some point I'm going to work on this.
>> Lev Nachmansen: Pete?
>>: I didn't quite get -- sometimes an area with high travel time could be surrounded
by an area of -- does it handle that? I mean --
>> Marc van Kreveld: Yeah. Okay. So --
>>: [inaudible].
>> Marc van Kreveld: All the pictures that you saw are based on the intercity stations
only. There's, I don't know, 70 of them. We also ran it on all the stations in the
Netherlands; that's 200 and something. And then you have slow-train stations that
are closer by, but harder to reach than the intercity station. So you basically
have to swap them.
But angularly they are still not exactly on the same lines, so they will pass each other.
You will get a huge deformation. But in principle it works.
>> Lev Nachmansen: Okay. So thanks again.
[applause].
>> Lev Nachmansen: So last talk of this session is by Kevin Verbeek and it's on
homotopic C-oriented routing.
>> Kevin Verbeek: Thank you. So I want to talk about schematic maps. And I think
you're all familiar with the metro maps. I mean you've seen them before this week. So
this is a classic underground map.
And if you look closer at the middle here, you see the usual characteristics of a
schematic map. You have these few orientations: a lot of horizontal and vertical
lines here, diagonal lines. And you have few line segments, or what I like to call
links. So you want to minimize that in a map. But for metro maps, you get
different rules than for the maps that I want to consider.
So when you make a metro map, you're allowed to deform the underlying
geometry, or the geography, in a sense. And for a metro map that's fine. I mean, when
you take the subway you don't really see outside, so you don't really care what's
happening.
But if you want to make a schematic road map, you do have that problem, because then
the environment starts to play an important role in the design of the map. So in making
such maps I actually want to disallow deforming the underlying geography.
And then you get some different rules. And I want to illustrate that with an example. So
we have this map and this is actually a map that comes from a game that I played like a
year ago. It's a bit outdated, but I still like the map.
So you see the road network of the world here. And there is also these map features.
So we have these mountains and lakes. And they play an important role. And so when
I want to schematize this, you get something like this. So this is kind of like what I want
to see.
But now if you look at the top here, you see I use several links here. This is one, two,
three, four. But you could say, well, I can do this with two links. So this must be better,
right? Well, let's see what can go wrong.
So this game is about dragons. If you look at this mountain here, this is a dragon
hotspot. Say you noticed that. And of course you know dragons are dangerous, so you
want to keep an eye out for these dragons when you're walking around. Say you
want to walk from this place to this place here. If you look at this map, you think:
okay, this dragon's going to be to the right of me, right? So I have to keep an eye out to
the right of the road. But actually the road is going like this. So you're constantly
looking in the wrong direction, and bad things can happen. So it's very important that
you make the schematic map such that you preserve the right relative
positions to the map features.
And that leads to the following problem. I'm going to make a slight simplification for
the road networks: instead of having a road network, I'm going to start with disjoint
paths. So here are the disjoint paths that represent the roads. And then you have these
obstacles here; those are the map features.
And what you're also given is a set of orientations. So you have horizontal, vertical, and
diagonal orientations. And what I want is something like this. Yeah?
So there are several requirements. I want the nodes to stay at the same position -- the
cities have to stay in the same position. The paths have to be C-oriented, so each
segment uses one of these orientations. And they must be, what I call, homotopic. That
means that the way they go through the obstacles has to be exactly the same as
before. They must be non-crossing, because they were non-crossing before. Finally,
what I want to optimize is the number of links.
Okay. So this is the problem. And this problem turns out to be NP-hard. So what
we're going to look at are approximations for this problem. We did this a while back
for rectilinear paths, so I want to briefly discuss that. For rectilinear paths
we gave a 2-approximation, and it roughly works like this.
So what you do is you order the paths from left to right. And now you're going to insert
the paths in this order. So you start with the left red path and -- well, nothing is
happening, so far, so good. And now you insert this blue path. You see that they are
crossing here. But the crossing region here is a rectangle. So what I do here is I
just reroute it like this.
And now we're going to add the next path. And, again, it's crossing. But now you see
that this region here is not a rectangle. I don't like that, because that would require me
to add too many links. So what I do is actually change this region by pushing this path
to the left. Then I have a rectangle again, so I can reroute, and I keep on doing that
here. Now I push this down. And eventually we end up with a non-crossing
schematization. Okay?
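The crossing test that drives this insertion step can be sketched as follows. This is a hedged illustration, not the paper's actual implementation: paths are assumed to be lists of lattice points forming axis-aligned segments, and only proper crossings (interiors intersecting) are detected.

```python
def segments_cross(p, q, r, s):
    """Detect a proper crossing of two axis-aligned segments p-q and r-s.

    Parallel axis-aligned segments cannot properly cross, so a crossing
    requires one horizontal and one vertical segment whose interiors meet.
    """
    def horizontal(a, b):
        return a[1] == b[1]

    if horizontal(p, q) == horizontal(r, s):
        return False
    if horizontal(p, q):
        h1, h2, v1, v2 = p, q, r, s
    else:
        h1, h2, v1, v2 = r, s, p, q
    x, y = v1[0], h1[1]  # crossing point candidate
    return (min(h1[0], h2[0]) < x < max(h1[0], h2[0])
            and min(v1[1], v2[1]) < y < max(v1[1], v2[1]))

def paths_cross(a, b):
    """Check all segment pairs of two rectilinear polylines for a crossing."""
    return any(segments_cross(a[i], a[i + 1], b[j], b[j + 1])
               for i in range(len(a) - 1) for j in range(len(b) - 1))
```

In the insertion algorithm, such a test would identify where the newly inserted path has to be rerouted around the paths already placed.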
So now we want to do this for general C-oriented paths. And you might think, okay, why
don't we just use the same approach: order the paths from left to right and insert the
C-oriented paths? The thing is, that doesn't seem to work. So we need some sort of a
different approach. And what I'm going to use is something else, which is also pretty
simple: I'm going to reduce the problem to rectilinear paths. And then I just solve that,
because I have this [inaudible]. Okay?
Okay. So to do that we need a special concept, and that's called a smooth path.
So if you look at C-oriented paths in general, they can look like this. They look pretty
sharp, right? And I don't like these paths. So I want them to be smooth. A smooth
path would look something like this. And the rule for a smooth path is that
you cannot skip an orientation. So if you look back at the sharp path, you see that this
one is going down here, so going south. And then it's going northeast all of a sudden.
So I say that you cannot skip any of these directions.
On the other hand, I do not really require a minimum link length. So technically this would
also be a smooth path, as long as you consider that there are these tiny little links
here. I need this for technical reasons. Eventually you'll see that I only use them when
I need them. Okay?
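The smoothness condition can be sketched in code. This is a minimal illustration under an assumed encoding (not from the talk): the 2C directions available to a C-oriented path are numbered 0 to 2C-1 around the circle, and a path is smooth when consecutive links use adjacent directions, i.e. it never skips an orientation when turning.

```python
def is_smooth(directions, C):
    """Check the no-skipped-orientation rule for a C-oriented path.

    directions: direction index of each link, in 0..2*C-1, where index k
    stands for the angle k * (180 / C) degrees. Adjacent links must differ
    by exactly one step on this cycle of 2*C directions.
    """
    n = 2 * C
    return all((b - a) % n in (1, n - 1)
               for a, b in zip(directions, directions[1:]))
```

With C = 4 (horizontal, vertical, and both diagonals), a path going east, northeast, north is smooth, while a path jumping straight from south to northeast is not.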
Now, to see how we solve the problem for smooth paths, we have to look at some
properties of smooth paths, and before I can do that, I have to introduce some
terminology.
So here we see a smooth path. And so the first definition is a U-turn. So if you turn left
twice or turn right twice, this would be called a U-turn. So you see that we turn left
twice.
Now, if there's an obstacle in this turn, then it's called a tight U-turn. And the overlap
between this tight U-turn and the obstacle is the support of the U-turn. Okay? Now, if I
look between two tight U-turns, between their supports, then what you get in
between is a staircase chain. And a staircase chain uses only two adjacent
orientations. And these orientations also define the type of the staircase chain. And
finally I need to show you what a shortcut means. So this here would be a shortcut.
But it's only a valid shortcut if I can guarantee that this region here is empty. Because
if it's not empty, then I would change to a [inaudible] path. So when it's empty, I can just
apply the shortcut and then the path becomes shorter, but it's still homotopic.
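Detecting U-turns in this terminology can be sketched as follows. This is an illustrative assumption, using the same hypothetical direction-index encoding as before (indices 0..2C-1 around the circle): on a smooth path every turn is one step left or right, and a U-turn is two consecutive turns in the same direction.

```python
def turn(a, b, n):
    """Return +1 for a one-step left turn, -1 for a one-step right turn.

    Assumes a smooth path, so consecutive direction indices a, b differ
    by exactly one step on the cycle of n = 2*C directions.
    """
    return 1 if (b - a) % n == 1 else -1

def u_turns(directions, C):
    """Return the indices of middle links of U-turns.

    A U-turn is two consecutive turns in the same direction
    (turning left twice or right twice).
    """
    n = 2 * C
    turns = [turn(a, b, n) for a, b in zip(directions, directions[1:])]
    return [i for i in range(len(turns) - 1) if turns[i] == turns[i + 1]]
```

Whether a detected U-turn is tight would additionally require testing it against the obstacles, which is not sketched here.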
Okay. So let's look at some properties of these smooth paths now. Okay, first of all,
this one actually holds for any C-oriented path. It says a path is shortest if and only if it
has no shortcuts. Well, that seems pretty obvious. But what you can also see is that all
the U-turns you will get are tight, and the supports are unique.
So what this means is that you can have different shortest paths for C-oriented paths,
but the only place they can differ is in the staircase chains. So the supports are always
the same, but the staircase chains can differ. Yeah? Okay. That was property
one.
The second property, and this is fairly important, is that for smooth paths you can make
smallest paths. Smallest paths are both minimum-link and shortest. So
I will try to prove this briefly. What you do is you start with a minimum-link path and
then just keep on applying shortcuts. If I do that, then by the previous
property we know that the result is going to be shortest. And as long as I ensure that I
don't add links, it's going to be a smallest path. Okay?
So say we have this shortcut we want to apply here. Now, as you can see here, we have
the starting orientation here and the ending orientation here. And because it's a smooth
path, it must use a link for every one of these orientations in between.
So when I shortcut it, then what you see is that I use exactly one link for every
orientation in between. So I can only reduce the number of links. And if I just keep on
applying these shortcuts, then eventually I will get my smallest path. So they do
exist.
And then we get to the final property. And that says that only staircase chains of the
same type can cross. And this relies on the first property: staircase chains cannot
cross exactly once. It's a bit tricky to prove this formally, but if you look at this picture here,
you see this one crossing. And the supports are unique. So
basically you can only chase these staircase chains, and there's just no way to remove
this crossing here.
And since we assume that the input was non-crossing, we cannot get this. But we can
get two crossings. But since they are staircase chains, they only use two orientations.
So the only way for them to cross twice is to actually use the same pair of orientations.
Okay?
Okay. So we have these three properties. And now the algorithm actually becomes
really simple. So say we have this input, and now what we do is just compute the
smallest paths, like this. And then we can look at every type of staircase chain. And
because we have this previous property, we know that only staircase chains of the same
type can cross. So we can look at them separately and solve for each type.
And once we have done this for every type, we have untangled all the paths.
So let's look at one of these staircase chains here. And then the only thing you have to
see is that this is pretty much the same as the rectilinear case. So all you have to do is
apply a rotation and a [inaudible] transformation, and since this is a [inaudible] transformation,
you're going to preserve all the crossings. And now you get this rectilinear problem. And
we know how to solve it. So we solve it here, transform back, and we get our solution.
And that basically leads to a 2-approximation for smooth paths.
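One way to realize this transformation step, sketched under my own assumptions (the talk does not spell out the map): since a staircase chain uses only two adjacent orientations, a linear map sending those two direction vectors to the coordinate axes makes the chain rectilinear, and because a linear bijection preserves incidences, it preserves all crossings.

```python
def to_rectilinear(points, u, v):
    """Apply the linear map sending direction u to (1, 0) and v to (0, 1).

    points: list of (x, y) vertices of a staircase chain.
    u, v:   the two direction vectors of the chain's type (must be
            linearly independent). The map is the inverse of the matrix
            with columns u and v, so the chain becomes axis-aligned.
    """
    det = u[0] * v[1] - u[1] * v[0]
    assert det != 0, "orientations must be linearly independent"
    return [((x * v[1] - y * v[0]) / det, (y * u[0] - x * u[1]) / det)
            for x, y in points]
```

Transforming back is the same idea with the matrix itself instead of its inverse, so a rectilinear solution computed in the transformed picture maps to a valid C-oriented solution.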
It is also an order-C approximation for C-oriented paths. And this is very easy to see: if
you have a C-oriented path, then all you do is make it smooth, and you can see that you
don't add more than C times the number of links.
And then you just run the 2-approximation, and then you have an order-C
approximation. Another nice property is that -- so when you look at the
application of schematic maps, the smooth paths actually make a lot more sense than
these sharp paths that you get here. So maybe actually using smooth paths is the right
way to go.
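The smoothing step in this argument can be sketched as follows, again under the assumed direction-index encoding (indices 0..2C-1 around the circle, which is not from the talk): between any two consecutive links, insert the skipped intermediate directions as tiny links, going the short way around the cycle.

```python
def smooth(directions, C):
    """Insert skipped intermediate directions to make a path smooth.

    directions: direction indices of the links, in 0..2*C-1.
    Each inserted index stands for a tiny link in that direction. The
    short way around the cycle of 2*C directions needs at most C steps,
    so the result has at most C times the original number of links.
    """
    n = 2 * C
    out = [directions[0]]
    for d in directions[1:]:
        a = out[-1]
        step = 1 if (d - a) % n <= n // 2 else -1  # shorter rotation
        while a != d:
            a = (a + step) % n
            out.append(a)
    return out
```

Running the 2-approximation on the smoothed paths then gives the claimed O(C)-approximation for general C-oriented paths.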
Another thing is that the paths produced by the algorithm are also shortest. And you can
extend this algorithm to thick paths. Then it becomes a little bit more involved and
technical, so I don't really want to discuss that. But you can do that.
And that leads me to one final open problem, sort of related to this whole thing. So
say you have a set of C-oriented paths with a total of L links. And
they can be crossing. And now you want to somehow untangle these paths. How many
links do you actually need in the worst case?
So the algorithm I've just shown you tells you that order L times C is always sufficient.
And what I also show in the paper is that you need at least Omega of L log C links to
untangle the paths. But there's still this huge gap in between. So what would be the
actual answer here? And that's all I have. Thank you.
[applause].
>> Lev Nachmansen: Thank you, Kevin. Any questions? What is the constant behind
this O of C?
>> Kevin Verbeek: The actual number is 2C minus 2.
>> Lev Nachmansen: 2C minus 2. And what do you think about your open problem?
>> Kevin Verbeek: Yes. So I have the feeling that it's closer to this lower
bound. But I have no idea how to achieve this. There might be some additive
constant there, something based on N, perhaps. But I think it's closer to this lower
bound.
>> Lev Nachmansen: Okay. So more questions? No. So thanks again. Thank you,
again, to all of the speakers.
[applause]