>> Lev Nachmanson: So the first speaker of the next session will be Marcus Schaefer, and he's going to talk about a theory of planarity. And I'm really glad to have you guys here to talk about all these crazy graph drawing things at Microsoft. Thank you.
>> Marcus Schaefer: Thank you very much for having me. Since we were talking, or
since we were supposed to be doing something about games and puzzles, I figured
I’d start with a puzzle. You may know this one, does it look familiar?
So [inaudible] you have three neighbors, A, B, and C, and they’re trying to get out of
the common grounds they live in, but they don’t like each other. That’s why it’s
called the quarrelsome neighbor problem. So the paths have to be disjoint, they
cannot cross each other.
So the question is, is it possible? Can you get everybody out of the gate, A out of A, B out of B, and C out of C, without introducing any crossings or climbing over the wall, helicopters, or other bad things?
Quick vote who thinks it’s possible? Who thinks it’s not possible? Okay, and that's in
an audience of experts. So why don’t you know? If you were looking for this type of
problem where would you look?
I mean, it's obviously a planarity problem, but it's not the type of planarity that we know from Kuratowski, right? Not the type of planarity that's been studied for a while, which is a little bit ironic though.
This particular problem is from a collection of puzzles. By the way, there's actually, interestingly, a video version of it, made around 1960-1970, where the neighbors quarrel because one of them supports the Kaiser and the other two do not, made by the Edison Company.
You can find it on YouTube. I don't have time to show it, but so the problem is old, older than planarity as we know it. As a matter of fact, the problem is at least 600 years old; you can find the first version in a book by Luca Pacioli, who gave us double-entry bookkeeping and things like that.
It’s an old problem. What do we know about solving this type of problem? There’s
been some progress, but really we’ve only started looking at this in the past ten
years, even though as a puzzle it’s older. By the way, here’s the solution. So it can be
done.
For this particular type of problem it can always be done. Let me just show you one
more that adds a slight level of complexity. This is another problem of [inaudible].
As far as we know it was not published in the [inaudible] book collection, so you have to go back to the original newspaper.
The goal is to get everybody outside at the gate that's just across from them without introducing any crossings. And the trees are placed there so you can only make, in essence, a grid drawing. So you can only go straight or turn at the trees.
If you didn’t have that restriction it would always be possible because -- yeah, why?
Think about it.
[laughter]
But yeah, it would be. But if you have the grid restriction, suddenly the problem
becomes harder. It may not always be possible in this particular case. I don’t have a
picture of the solution, [inaudible] had one. It uses every single turn you see. So it’s
actually a pretty hard solution. I don’t know whether newspaper readers today
would be interested in solving that type of puzzle.
But you can take this, and I think that’s where there’s still a lot of material out there
that we haven't looked at carefully, and turn it into something commercial. So there's a game called Lab Mice; if you haven't seen it, it's a cool little gift to give away, which is essentially just that you have mice of different colors and you have to connect them to the cheese of their color in a grid drawing.
That problem is pretty certainly [inaudible]. I haven't worked out the details of the problem, but you can take these ideas and turn them into a commercial product that you can buy in stores.
Okay, so much about games. The serious version of this is called Partially Embedded Planarity, and really, as far as we know, this was the first paper that defined the problem properly (we have a couple of the authors here, so they may speak to that), and the paper is only about two or three years old. So what is the problem?
You have a graph that's partially embedded. What that means is not entirely clear, because the embedded part may not be a connected graph, and once you have a disconnected graph (look at the black components there), what does it mean to be an embedding?
Like, can the pieces move relative to each other, like have different facial structure? So you can find [inaudible] of what it means there to be an embedding. It's not entirely obvious, but it can be done. For this particular version we're not allowing the different pieces to move, so you can just think of the embedded part as a subset of the plane.
And then the question is, if I'm given a graph with an embedded subgraph, like this, can I extend that to an embedding of the full graph? That's exactly what Loyd was asking; here you can model it as such. And they showed that this can be solved in linear time.
So what's my point here? Well, apart from planarity, which we've studied pretty well, there are all kinds of other planarity notions out there that have maybe gotten less attention, but they're all somehow related. Like this partially embedded planarity, which really should have had precedence because it's significantly older than the idea of planarity.
So here I've put together all of the ones that I'm aware of. You may know of some others, and I'd be happy to hear about them. These things are in some sense [inaudible] or very close to planarity.
So up there at the top is standard planarity, the usual planarity; level planarity, where you're assigning the x-coordinates to the vertices; radial level planarity, where you're placing the vertices on concentric circles and ordering them, just like level planarity; and the famous clustered planarity that I've heard about a couple of times.
I'll talk about embeddability in a second. Partial embeddability, which we just saw; the notion of partial rotation, meaning you specify at each vertex the order of the edges, or just the order of a subset of the edges at the vertex; maybe you allow those rotations to flip. Two-page embeddings, of which there are a couple of different versions (and there was a talk on that yesterday, which I unfortunately missed), are a special case of book embeddings.
With books we're leaving planarity, right? As soon as you have three pages in a book that's no longer really planar, so two-page embeddings [inaudible] planar ones.
Upward planarity, where directed edges all have to point the same way; and then simultaneous embeddability for multiple graphs and [inaudible]. Let me just define two of those a little bit more carefully.
C-planarity I think I can skip because you've seen the definition a couple of times; I hope the picture reminds you: you cluster the vertices, and we have certain restrictions on how the edges can be drawn in those clusters.
The tricky case here is if the vertices in a particular cluster are not connected. If you always know that they're connected, then actually there is a linear time [inaudible] deciding it. The simultaneous embeddability problem is about multiple graphs on a shared vertex set that share edges; so here we have six vertices and three different graphs: the solid edges, the dashed edges, and the dotted edges.
If you put all of them together you can show that there is no simultaneous
embedding of these three graphs. This is called fixed edges, meaning that edges that
belong to multiple graphs have to be drawn the same way.
So for example you see the double edge here, which is dashed and solid. You can’t
draw these two different ways. They have to be drawn the same way, so two graphs
sharing an edge must be embedded the same way for that particular edge.
So for this particular graph you cannot draw it. This would be a SEFE; actually it's a SEFE-2 problem, since I don't actually have dotted edges here. If you had three graphs it would be SEFE-3.
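To pin down the definition just given (a sketch; the notation is mine, not from the talk):

```latex
% Simultaneous embedding with fixed edges (SEFE-2):
% given G_1 = (V, E_1) and G_2 = (V, E_2), decide whether there exist
% planar drawings D_1 of G_1 and D_2 of G_2 such that
\[
  D_1\big|_{G_1 \cap G_2} \;=\; D_2\big|_{G_1 \cap G_2},
\]
% i.e., every vertex gets the same point in both drawings and every
% shared edge e \in E_1 \cap E_2 is drawn as the same curve.
% With k graphs on a common vertex set this becomes SEFE-k.
```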
Can I assume you've all seen simultaneous planarity, or that the definition as I just gave it made sense? Okay. Then let me go back to the picture I just showed you. What I tried to do here is pull all of these notions of planarity together and see how they relate in terms of computational complexity.
So what you see here is all that I'm aware of. The upper part here, up to that line, is polynomial time. All of these things can be decided in polynomial time, many of them in linear time.
Everything down here below upward and book is NP-complete, and there's this really annoying little part in the middle: clustered planarity, [inaudible] simultaneous embeddability with two graphs, and partitioned T-coherent two-page drawings, for which we don't know. Do they go down or up, polynomial time or NP-complete?
The other thing you see is the edges between them. These are reductions, and some of these are new, so this tells us something about the relative complexity of these problems. For example, level planarity you can look at as a special case of radial level planarity, so if you can solve radial level you can solve level. Radial level is a special case of clustered planarity, so if you can solve clustered (well, we don't know how), you can solve radial level.
And then maybe the most interesting one: clustered is a special case of simultaneous embeddability of two graphs. So if you could do simultaneous embeddability with two graphs you could solve clustered planarity, and that would explain why we haven't been able to do simultaneous embeddability with two graphs, in spite of trying quite hard, and Steven [inaudible] here has tried very hard finding obstacles for this.
Maybe that's one reason we still don't have a complete set of obstacles for simultaneous embeddability with two graphs. By the way, as soon as you go to three graphs the problem becomes NP-complete, and it's really the same as the weak realizability problem, which in a way is the most general planarity problem of all because it really encodes every other problem there is.
If you have a planarity problem it's easy to encode it as a weak realizability problem, and we know that this one is NP-complete. In particular it's in NP, so we actually have an NP algorithm for all of these, except it's not a very natural NP algorithm, so I've not actually seen anybody implement it.
So the first part of the results in this paper is the reduction from clustered planarity to simultaneous embedding of two graphs, showing that clustered planarity is a special case of that. So one of them really is at least as hard as the other; I'm not going to go into details.
This is the core gadget, essentially. You have to control, if you think of a cluster, you
have to control the edges leaving the cluster. And by definition, each edge that is
allowed to cross the boundary is allowed to cross the boundary at most once.
So what you have to control is the order in which these edges cross the boundary,
and that’s what this device does here. You have these solid edges coming in and
then the simultaneous embedding makes sure that the corresponding pieces going
out have the same ordering.
I'm not going to prove it correct; you can find that in the paper. But the rough idea is that, with the solid edge here in the middle, once you restrict your whole drawing to the solid edges, you get these two fans over there that are in the same order.
You can pull them together, like this here, and then separate them out, and then the
solid edges will give you the embedding of the cluster graph with the black solid
edges of the device making up the boundary of the cluster.
So that's the power that simultaneous planarity gives you. Okay. The talk was actually called Toward a Theory of Planarity, which of course is supposed to reference a 1970 paper by Tutte, "Toward a theory of crossing numbers," because what he tried to do there is establish an algebraic theory for the crossing number, which didn't go very far.
Crossing number is hard, and [inaudible] the algebraic crossing number is apparently as hard as just the regular crossing number, so it didn't really help much. But the idea I'm trying to pursue here is: what if we take his idea and just try to apply it to planarity?
Well, he did that in 1970 in that paper, so that's well understood. As a matter of fact, Wu, around the same time in the 1960s in China (there's just a translated paper), and Hanani in the 1930s had done the same stuff already, and that gives us what's now called the Hanani-Tutte theorem.
Let me just show it to you. Some of you have probably seen it before; if you've seen me give a talk recently, I've talked about this a lot, and in that case you may remember it.
The point is if you can draw a graph such that every two independent edges cross
each other evenly, then the graph is planar.
So if you look at this piece of graph drawn here for example, the black edge here is
crossed by the red edge here an even number of times, or the thick red edge here is
crossed by the other red edge here an even number of times.
That means we can redraw this without any crossings. Well, we’re actually looking
at a path so that’s not particularly interesting, but this works for arbitrary graphs.
The point here though is that your first attempt at a proof of this result is: well, whenever we have a bigon, something like this here, two edges bordering empty space, we can just remove it. We can remove crossings, right? That's easy enough.
Oops. You're right. But that gets really difficult if there are vertices inside, and you cannot use that method. I miscalculated with time, so let me fast-forward you through this. What this means is you can phrase planarity as an algebraic system, namely a linear system of equations over GF(2), which you can solve using Gaussian elimination in polynomial time.
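For the record, here is the system I believe is meant, in the standard form from the Hanani-Tutte literature (the notation is mine, not from the talk):

```latex
% Fix any reference drawing D of G. Introduce a GF(2) variable x_{e,v}
% for each edge e and each vertex v not on e, meaning "pull edge e over
% vertex v". For every pair of independent edges e = uv and f = wz, require
\[
  \operatorname{cr}_D(e,f) + x_{e,w} + x_{e,z} + x_{f,u} + x_{f,v}
  \;\equiv\; 0 \pmod{2},
\]
% where cr_D(e,f) is the parity of the number of crossings between e and f
% in D. By Hanani-Tutte, G is planar if and only if this linear system over
% GF(2) is solvable, which Gaussian elimination decides in polynomial time.
```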
So the first main contribution here is this theorem that says the same is true for partially embedded planarity: you have a Hanani-Tutte theorem for that, which you can phrase as follows. It's planar if and only if you can find a drawing that extends the partially embedded graph in which every two independent edges cross evenly.
The nice thing is you get another polynomial time algorithm, slower than the linear time algorithm but of the same form as the one for planarity. How do you prove this? I'll skip that because I'm running out of time, but it uses another recent result by some people in this room.
They characterize partially embedded planarity using a set of obstructions (actually an infinite set) and a special notion of minor that works for this. I don't think that's a good way to prove the Hanani-Tutte theorem, but right now it's the only way I can prove it. Eventually, just like for the projective plane, I hope there will be a direct proof not using obstruction sets, but in this case it's the easiest way to do it.
So my conjecture would be that there is a most general form of the Hanani-Tutte theorem that works for all of these planarity variants: a graph is X-planar, where X-planar is supposed to be any particular notion of planarity you can think of, if and only if it has a drawing satisfying X in which the condition that two independent edges may not intersect is replaced with the weaker condition that they intersect evenly. And that would mean you can write a polynomial time algorithm to test this.
So I just about have time to state the results. I can prove such Hanani-Tutte theorems for partial rotations; for level planarity it's implicit in an earlier paper. Most interesting are the results for simultaneous planarity.
For special cases, for example, if the intersection of the two graphs consists of disjoint 2-connected components, or if one of them is a subdivision of a 3-connected graph, or if the intersection is subcubic, all of these interestingly yield the same algorithm for testing simultaneous planarity.
[inaudible] some of these generalize linear time algorithms for these cases, by, again, several people in this room. Why is this interesting? Well, I think there's a good side and [inaudible] a bad side.
On the good side, simultaneous planarity for two graphs has the chance to be the universal planarity problem that encodes all other problems, and you could settle it with this simple redrawing conjecture here, which says that if we can draw a graph G with subgraph H so that every H-edge crosses every other edge independent of it an even number of times, then we can remove the crossings of H-edges with each other.
If we could solve this simple redrawing problem you would have a solution for the simultaneous planarity problem, which would imply a polynomial time solution for the clustered planarity problem and pretty much any other notion of planarity you can think of, the main point being that there's one single algorithm and maybe a couple of reductions to that algorithm.
So assuming the conjecture is correct I could give you an algorithm for that problem
right now. I just don’t know whether the conjecture is correct.
Okay, that was it.
[applause]
>> Lev Nachmanson: Thank you. We have time for one question.
>>: [inaudible] is it specified for each pair of edges whether they may cross or not?
>> Lev Nachmanson: Do we have time for one more question?
[laughter]
>> Nathalie Henry Riche: No? Let's thank our speaker again then. So we're still looking for the last speaker of the session, Chris Muelder. If you have his phone number in your phone, just text him. And our next fantastic, amazing talk is by Daniel Archambault, on mental map preservation in dynamic graphs. Thank you.
>> Daniel Archambault: Thank you Nathalie for that fantastic introduction.
[laughter]
So yes, I'm Daniel Archambault, and this is joint work with Helen Purchase. And as the title says, we found a task where the mental map does help in the comprehension of dynamic graphs.
So here’s a very quick outline, fairly standard introduction, previous work, the
description of our experiment, the results, and some discussions of these results and
conclusions.
So first of all, a few definitions. What is a dynamic graph? Well, that's a graph that evolves over time, and each snapshot of the graph is a time slice, or the graph at a given time. Quite frequently animation, or this sort of movie-type presentation, is used to display graph evolution. Another way to depict the evolution of the graph is sort of like a filmstrip.
This is called a small multiples approach. Each time slice is in its own window, and the user of the system will scan the windows to determine how the graph evolves.
It's not all that prevalent in dynamic graph drawing, and it might be interesting to look at some techniques based on this. So what is the mental map? Well, it's sort of this notion that drawing stability is important. It was defined in the works of Misue, Eades, Lai, and Sugiyama.
And there is also a formal definition of Coleman and Parker provided in a journal
article in 1996, where essentially the placement of existing nodes and edges should
change as little as possible when a change is made to the graph.
So there are many dynamic graph drawing algorithms that use this definition of a mental map. Conversely, to define what I mean by non-mental-map: essentially each time slice is laid out independently, so the nodes are allowed to fly all over the plane in the animation.
So there have been a fair number of experiments already done, and none of these experiments have shown a benefit for the mental map. Most of these experiments show no effect, so it doesn't help or hurt, and there are one or two that show that it hinders a little bit, but this was primarily due to things like node overlap and clutter.
Now, it’s important to note that most of these experiments test a notion of
readability or memorability. So these are tasks of extracting or reading paths and
that sort of thing from the graph, or the ability to recall certain elements of
evolution. In our case what we’re doing is we’re testing orientation.
This is related both to readability and memorability, but there's more of a focus on the relative position of information and how it's used for revisitation of certain parts of the graph.
So why are we looking at this? Well, it turns out that if you go back to the field of psychology there are three very interesting experiments. It turns out that for a small number of targets people are very, very good at following those targets, even when they're moving randomly on the screen.
So the first one, don’t ask me to pronounce that name, found that the number of
targets is around five. So if you have five randomly moving nodes or dots moving
against a field of distracters, it turns out people can get the answer close to 85
percent of the time.
Subsequently Yantis confirmed these results and extended them, showing that if the targets have coordinated motion you can even scale beyond five. And it was also reconfirmed by Liu et al., who essentially replicated many of these results in an air traffic control scenario.
A second drawback of these experiments is that many of them use preattentive color highlighting of the nodes in order to disambiguate the nodes that are asked about in the question. In this case the mental map is probably not as important, because the red node stays red and you don't need to track it; you can immediately look away at some other node and immediately come back to the red node.
So because of these two limitations, I think we can cover most of the experiments
where no effect was found. So here are our research questions. The primary one is: does the mental map help with orientation in the data for these revisitation tasks? And the secondary research questions are: does the number of targets influence performance, and does animation versus small multiples influence performance?
So as I mentioned, there have been many experiments; here are a couple. One of them was by Purchase, where essentially she was looking at degree reading tasks, where you're trying to figure out which nodes in the graph have large or small degree. In this experiment she found that a compromise was significantly worse than either keeping nodes essentially pinned or allowing nodes to fly everywhere on the screen, which is a little bit counterintuitive.
In some of my own work we tested the mental map when comparing animation and small multiples. We tested a lot of tasks and we found no significant difference of notable magnitude between mental map preservation and non-mental-map preservation.
There's a recent experiment by Ghani where they were testing the order of insertion and deletion of nodes and edges in a dynamic graph series. Animation was used, and they also used an aggregation method: essentially they drew out the dynamic graph once, pinned vertices to their final positions, and showed the evolution without animated transitions.
In this strategy it turned out that pinning out-performed the mental map preservation, but these tests really didn't test reading paths or revisitation, not really that much in terms of structure. So you can sort of view this as an experiment where they were testing more time-type factors.
So here's our experiment design. We have two mental map preservation conditions, crossed with the two presentation methods, three target levels, and a two-question design. The target levels correspond to the work in psychology and were chosen to make sure that we're using sufficient numbers of targets. So essentially our tasks are as follows; the first one is a revisitation task.
So the participant is required to relocate nodes that were indicated at the beginning of the animation. The second is to read long paths that traverse the graph. If they're short, we're conjecturing that the mental map doesn't help too much, but as they get longer they become more difficult to follow.
And we have these three sets of target levels for both of these tasks, inspired by the work in psychology. So there are a number of algorithms that we could have used in order to preserve the mental map, but inspired by some work at GD 2011 by Ulrik Brandes, there are some metrics showing that linking strategies conform best to the Coleman and Parker definition of a mental map.
So we chose the Erten et al. algorithm, which is one of these linking strategies. So
what are the tasks? You had a number of colored nodes at the beginning; I realize
this is kind of quick.
>>: What’s a linking strategy?
>> Daniel Archambault: A linking strategy is essentially this: you have your time slices, and the nodes have given labels, okay? So the same node can be located in each time slice. And you connect nodes of the same label between time slices with inter-time-slice edges.
And then you use this hybrid graph to lay the nodes out in such a way that the same node hovers around the same position of the plane. So by adjusting the strength of these inter-time-slice edges you can keep nodes closer to their location. And by making them loose they fly all over the place.
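To make the linking idea concrete, here is a minimal Python sketch (this is not the Erten et al. algorithm itself; networkx's spring layout stands in for the force-directed step, and link_weight is an illustrative parameter):

```python
import networkx as nx

def linked_layout(slices, link_weight=2.0):
    """Lay out a sequence of graph time slices with a linking strategy:
    copies of the same node in consecutive slices are tied together by
    weighted inter-time-slice edges, so each node hovers near one spot."""
    hybrid = nx.Graph()
    for t, g in enumerate(slices):
        # One copy of every node per time slice.
        hybrid.add_nodes_from((v, t) for v in g.nodes)
        hybrid.add_edges_from((((u, t), (v, t)) for u, v in g.edges), weight=1.0)
    for t in range(len(slices) - 1):
        shared = set(slices[t]) & set(slices[t + 1])
        # Heavier weight -> stronger mental-map preservation.
        hybrid.add_edges_from((((v, t), (v, t + 1)) for v in shared),
                              weight=link_weight)
    pos = nx.spring_layout(hybrid, weight="weight", seed=0)
    # Split the hybrid layout back into per-slice positions.
    return [{v: pos[(v, t)] for v in g.nodes} for t, g in enumerate(slices)]
```

Raising link_weight trades layout freedom for stability, which is exactly the knob being described here.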
Okay? So you have a bunch of colored nodes. This is the mental map preservation condition. You follow how the graph evolves, and then the user goes and clicks on the colored node that they're asked about. So where's the blue vertex in this case?
And this was task one, the revisitation task. Now, in the non-mental-map preservation condition we have a path, and you have to follow the path as the graph swirls around. At the end you have to indicate the path in the correct order in order to answer the question.
So our primary result is that there's a significant difference between non-mental-map and mental map preservation for both of these tasks, both in terms of time and error rate.
So that means mental map preservation was both faster and produced fewer errors. We did not find a significant difference in this task between animation and small multiples, but the interesting thing is that as the mental map worsened, animation on average seemed to perform a little bit better.
But we can't say that much because we don't have a significant difference. When we look at it divided by target level, this phenomenon repeats itself for all of the target levels tested. It's always important that your users like what you present to them, and so we asked a few questions.
We didn't really explain what the mental map was, but we asked a few questions about whether you like your nodes moving all over the place or staying in relatively the same area of the plane.
And it turns out that most of the features associated with high levels of mental map
preservation were preferred to those that were associated with less mental map
preservation. So in terms of discussions we have an experiment that shows drawing
stability of the mental map helps.
This confirms a little bit the intuitive notions of Misue, Eades, Lai, and Sugiyama. And it turns out that mental map preservation produces both significantly fewer errors and faster response times.
We replicated the results in psychology. There is very high accuracy for a number of independently moving, non-colliding targets, confirming these results, and also coordination of movement increases accuracy further. If you look at the results in our paper, the long paths actually performed a lot better than the individual node task.
We didn't notice any significant difference between animation and small multiples. We had some tendencies towards animation, especially when the mental map was not preserved. So there is potentially some evidence that animated transitions can help, but further experiments are needed in order to confirm this.
So in conclusion, preserving the mental map in the Coleman and Parker sense helps in this setting: keeping track of specific areas of the graph as the graph evolves over time, and following long paths through the graph as it evolves over time. However, we need to be very, very careful when we use this result.
So these benefits are only really realized when preattentive highlighting is not used, and also when the number of nodes and edges in the task is large. So you've got to make sure that you max out this number of moving targets.
And if you have a task that does these two things we think that the Coleman Parker
definition will probably help. So I think that’s my time, and are there any questions?
[applause]
>> Lev Nachmanson: Go ahead.
>>: Was the speed fixed or could they replay what they saw or maybe go back
[inaudible]?
>> Daniel Archambault: So we allowed full control of the animation to the user.
They could evolve it at their own rate if they felt it was too fast. We had a default
speed and that default speed was determined through piloting.
But it’s interesting with this experiment because in previous experiments a lot of
people didn’t engage with the animation condition at all. With this one the majority
of participants actually used the slider a fair bit.
So there you go. That’s what I observed.
>>: [inaudible]. I have a comment too, which is just from my intuition, which is not
[inaudible], the pinning strategy or the linking strategy is a little bit too strong for
preserving [inaudible], because people can follow translations quite easily, but
something more topological or more about orders of things rather than about
absolute positions is a weaker form of a mental map and probably just as powerful
as [inaudible].
I'm not surprised that the linking strategy or the pinning strategy doesn't show --
>> Daniel Archambault: Yes. So you're talking more about [inaudible]; there is an APVis paper about relative position using -- yeah, using simulated annealing. It would be really interesting; I don't know of too many algorithms that use that definition of the mental map, and it would be really interesting for the community to investigate that further.
Of course, it would be really interesting to run more experiments comparing this more spatial-position-type definition versus this more relative definition; especially for the location task, I have a feeling that you're probably right there.
Yes?
>>: So if I understand correctly, in your experiment the graph didn’t really change,
it just sort of moved around, right?
>> Daniel Archambault: No, there were node insertions and deletions. I think for
insertions and deletions the maximum number was four for both nodes and edges. I
think for nodes it was four and for edges it was five.
>>: Did you notice any difference in terms of, like, [inaudible] when you didn't really have too much insertion or deletion versus when you had more? Was there any difference in terms of performance?
>> Daniel Archambault: There were two sizes of graphs. We were sort of lucky in
terms of insertions and deletions, it was approximately the same between each time
slice. I’m partially stretching the truth there but it was pretty close.
I think, yeah, I think we should probably do a little bit of further analysis on our data
to be able to confirm what you’re saying, but I think we can do that, I think we
measured some of that.
>> Lev Nachmanson: Okay. Thank our speaker!
[applause]
>> Nathalie Henry Riche: So our next talk is going to be terrific, from Steven Chaplick, and he's going to talk about planarity again. Oops. I love planarity. I don't completely understand it but --
>> Steven Chaplick: Okay, thank you for another very vigorous introduction. It's nice to have an enthusiastic chair. So this work that I'm talking about is some that I did when I was visiting Charles University last year, and my coauthor Torsten Ueckerdt was also there at the time.
He's now at the Karlsruhe Institute of Technology, and there seems to be a large contingent of his colleagues who have shown up to this conference, so that's nice to see. So what I'm going to be really talking about is a nice new geometric intersection representation of planar graphs.
intersection representation of planar graphs.
So in particular I'm going to be giving a bit of background on intersection families of graphs, in case you need it, and then I'm going to get into this relatively new class, which is really closely related to intersection graphs of curves in the plane.
So it's a specialization of that. Intersection families of graphs: you know the basic idea of intersection families; we have some blobs, the blobs of course correspond to our vertices, and when they intersect we get edges.
So basic stuff here; every graph can be represented easily with a generic kind of intersection family. Usually the intersection family is phrased as a collection of sets, but more often than not we want some nice structure, like geometry or other structures, to make the classes more interesting.
Okay, so the more interesting classes that I'm going to be talking about in this presentation are string graphs, VPG graphs, and a special case of VPG graphs called B_k-VPG. So what is a VPG graph?
Well, in a string graph you just have arbitrary curves in the plane. Rather than drawing them as arbitrary curves, we're going to draw them as axis-aligned rectilinear curves. So in this sense they're the intersection graphs of paths in a grid.
So you can imagine a rectangular grid lying underneath this structure, and we're just looking at intersection graphs of paths in that grid. Now, some edges can be represented nicely and some edges can be represented in rather curious ways.
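As a small aside, here's a tiny Python sketch of the model (the helper names are mine, not from the talk): a path in the grid is given by its corner points, two vertices are adjacent exactly when their paths share a grid point, and the bend count is the quantity that the B_k restriction discussed next is about:

```python
def expand(path):
    """Expand a rectilinear grid path, given as its corner points, into the
    full set of lattice points it passes through (segments must be axis-aligned)."""
    pts = set()
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        assert x1 == x2 or y1 == y2, "segments must be horizontal or vertical"
        dx = (x2 > x1) - (x2 < x1)
        dy = (y2 > y1) - (y2 < y1)
        x, y = x1, y1
        while (x, y) != (x2, y2):
            pts.add((x, y))
            x, y = x + dx, y + dy
        pts.add((x2, y2))
    return pts

def bends(path):
    """Number of bends: corners where the path switches direction."""
    return sum(1 for a, b, c in zip(path, path[1:], path[2:])
               if (a[0] == b[0]) != (b[0] == c[0]))

def intersect(p, q):
    """Two grid paths represent adjacent vertices iff they share a grid point."""
    return bool(expand(p) & expand(q))

# A one-bend (B1) path and a zero-bend path that meet at (2, 3).
L_path = [(0, 0), (0, 3), (4, 3)]
seg = [(2, 1), (2, 5)]
print(bends(L_path), intersect(L_path, seg))   # -> 1 True
```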
So we want to be a bit more careful about what we’re talking about so we’re going to
restrict ourselves even further to say the paths only have a certain number of bends.
So you can see this blue path here only has one bend, whereas the red path has four.
And over there it’s just a mess. So we’re only going to be talking about small
numbers of bends, but I want to talk about a closely related family of graphs that is
also relevant to this talk. So if we take another perspective, rather than just
arbitrary curves we can have straight-line segments.
Now, the idea of straight-line segments has been well studied and gives a very popular graph class, especially when we're talking about planarity. There was a long-standing conjecture that every planar graph is representable as a segment graph. This was proven quite recently by Jérémie Chalopin and Daniel Gonçalves, and it was a phenomenal result.
Now, there were many, many results leading up to it. Just as another closely related class, we have the circle graphs, which are the intersection graphs of chords of a circle. But again, what I really want to talk about are these paths in grids.
So specifically, in the original paper by this rather long list of authors, they proved the folklore result that string graphs are the same as these rectilinear curves, and they also showed that circle graphs are contained in B1. B1 is just the situation where only one bend is allowed, but you can have all four shapes, all right?
They noted that planar is contained in B3, and I'll talk about that a bit later; it uses a known construction. And they actually conjectured that three was as good as you were going to do to get all planar graphs. Well, in this talk I'm going to tell you that that's not the case.
In fact, we can do it with two bends and we can actually do triangle free planar using
a special case of contact B1. Okay, so how do they do this with three bends?
Well, there’s this really nice construction by de Fraysseix, Ossona de Mendez, and
Rosenstiehl, which says that every planar graph is a contact system of T’s. Well, if I
take this planar graph I can build my contact system just by taking an outside edge
and starting with these two T’s, putting the next vertex in the middle, and you’ll
notice that the points of all the T’s are always sticking up.
So as I grow my planar graph I just stack my T's on top. And if we want it as a B3 representation, I just draw the T's with three bends and we get the representation. But on some level it's not as nice as the T's, because now I have to have these side contacts instead of just point contacts.
So that’s a small detail. Some people like to talk about side contacts, so it’s usually
worth mentioning. Okay, so how do we improve on this? How do we get beyond
this nice straightforward translation between T’s and these B3s?
Well, we’re going to first look at four connected planar graphs because four
connected planar graphs have really nice structure, and in fact, we get this nice
rectangular tiling that we can use from four connected planar graphs.
So from the four connected planar graphs we’re actually going to build an
intersection graph of zed shapes. So these would just be two horizontals connected
by a vertical, and from that we can get all four connected planar graphs. And then
we’ll just perturb our zeds a little, go inside the filled triangles, and fill in the rest of
our representation.
Okay, so to do that we just use the separation tree, which follows the filled triangles. We start with some triangle, we take the maximal piece that we get without any filled triangles, we go inside the filled triangles, and we recurse further down.
Okay, so again what is this main tool that we’re using? The main tool is that four
connected planar triangulations have representations by tiled rectangles. So we can
take our graph and we can make a rectangular tiling of it.
One small detail that I want to mention: by a triangulation I don't mean just adding edges, because it's so much easier for me to add vertices to the faces to triangulate them. So I add a vertex universal to the face, and then when I want the original graph back I'll just delete part of the representation that I built.
Okay, so that's what I mean by triangulation: rather than adding edges, I'm adding vertices. Right, so we have our rectangular tiling, but we don't want just any rectangular tiling.
rectangular tiling.
Some rectangular tilings are not going to be as nice when we’re trying to create the
zed shapes out of it. In particular, if I have this pattern in my rectangular tiling as
you can see here, and remember as I was saying, I’m going to want to replace each of
my tiles with a zed shape.
So if I was adding the zed shape here somewhere, and one here, well, how would I
make the intersection? Well, the nice thing is that I can avoid the bad configuration - Sorry, this is the one that I’m trying to avoid.
I can avoid the bad configurations by redrawing the tiles slightly, just sliding y over and w down a little bit, and I get a different orientation.
There are a few more technical details, but I don’t want to go into them. The nice
thing is that we can fix our tiling using the technique from Fusy 2009. Okay? So
how are we going to make the zeds, really?
So again, after we've redrawn our tiling in this nice way, we'll have the centers of the tiles above our current tile all to the left. And I can make this the splitting point for the center of the zed. And below, I'll have the centers all to the right.
So when I make the zed shapes, this center will always be to the left of the guy below it, and always to the right of the guy above it. And there will be nobody further to the right. And this allows me to catch all of these zeds, after I make them, with even just a side contact.
And that’s the short version of the construction, so ultimately we get something that
looks like this. But we want a bit more space because we want to be able to go
inside the filled triangles.
And this is where we use the fact that it’s an intersection representation rather than
a contact representation. So all I’m going to do is stretch up my zeds a little bit and
push them down a little bit to make some space.
Okay, so now I have space. What do I really mean by space? I mean I can look at this
triangle where this is the triangle corresponding to the three rectangles, and I can
identify a region on the zed shapes where I have the three sides of the triangle
showing up. And this happens for the different cases that we get.
There's just one really annoying case. And I haven't said why it's not just zed-VPG the whole way through for all planar graphs, and this is why. Suppose I happen to have this triangle on the bottom being filled, so I have this guy, this guy, and this guy as my filled triangle. When I build the representation for this, if I were to just use a zed shape or a straight line, I wouldn't have this private region that I want to be able to recurse inside.
So I have to use a C shape in order to make sure that this line is not touched by anything else. And it's just really annoying, because this is the only place in the construction where we have to use a shape other than the zeds.
But anyway, that's how we do all planar graphs with B2. So what about a nice contact representation? What can we say? Well, there's lots and lots of work that's been done on contact representations; the classic one is by Koebe. Oh, and this image I actually took from David Eppstein's webpage, but the reference seems to be cut off on the bottom.
So there's lots of work that's been done, even for the triangle-free case. Triangle-free was one of the cases studied in the earlier work on segment graphs; well, bipartite was studied first, and they showed that with two directions of line segments you can do all planar bipartite graphs. Then various groups worked on these problems, and in particular this is the result that I want to talk about right now.
So you actually get that all triangle-free planar graphs can be represented by line segments with three directions. So in the same sense, we considered studying triangle-free planar graphs with respect to these shapes, and more specifically, for us, L shapes. Okay.
Now, I just want to mention a very, very recent result. Earlier this year I was talking to my coauthor Torsten, and he told me that at the [inaudible] workshop he worked with Kobourov and Verbeek to look at the relationship between contact segment graphs and contact B1-VPG, and they've actually shown that these are, in fact, the same class. So this is a very new result. Okay.
But what are we saying? We're actually going to say that triangle-free planar graphs are contact graphs of L's, gammas, and horizontal and vertical segments. So we don't even need the whole class; we just need these four shapes. Okay, and how are we going to prove it?
Well, suppose that you have a separating C4. We’re just going to assume that
everything always works with separating C4s and that will be our inductive
hypothesis.
Okay? So now you don’t have any separating C4s anymore. So now we go and we
look around and we find a facial C4. Well, what I’m going to do is I’m going to take
V1 and V3, I’m going to contract them together, and I’m going to get some model
back out of Ls and gammas and segments.
And then I'm going to have this guy, v-tilde, which is the contracted vertex, and he has some L. Well, part of my induction will be that the circular order of the edges is maintained.
It's a contact representation, so we can always do this. And because of that I can just split off the two Ls, as you can see on the far side. Okay. So another easy case, or an easy case of the harder case: now we don't have any C4s at all. We have no facial C4s, we have no separating C4s; now what happens?
So we have some interior edge floating around, and again I contract it. I get a new
model for it. Again, the cyclic order of the edges is the same. I uncontract it and I
can make two pieces out of it. Well, this is not telling you the whole story. As you
can see this is just case 3A. In particular, it matters where the edges appear for each
vertex. And this is how we get some more cases.
In particular, we get one annoying case. You'll see that in cases 3C and 3D I still end up with Ls; I can always do this expansion of my contracted edge and I still get Ls. But if I have only this little piece, and I have these horizontal segments sticking into it, we couldn't find a good way to undo the contraction and still only use Ls.
So this is where we have to use gammas. There's this little case in both of our constructions where we have to add these extra shapes, and it's just really frustrating.
But this completes the proof. We always have an interior edge, or we have a facial C4, or we have a separating C4, so that's all there really is to it, and other than this annoying case we get the whole contact representation with Ls. Only there do we have to use gammas.
Okay, so to sum up: we looked at four-connected planar graphs and we got a side contact representation using zeds. Then we can take that and recurse inside the filled triangles to get all planar graphs with B2, where we have to add those C shapes. And at the same time we also have this result that every triangle-free planar graph has a nice contact representation.
Now, here's the most interesting part, at least I think it's the most interesting part. We've been trying to find a planar graph that we cannot represent with just L shapes, in the intersection variety. But we can't; we haven't been able to.
And a colleague of mine even did a short computer search on all planar graphs with fewer than ten vertices, and he built intersection models for all of them with just L shapes.
So while we do think it would be somewhat surprising, we’re starting to believe that
all planar graphs can be represented as intersection graphs of L’s. And we’re very
interested to see if anybody has any nice ideas on how to pursue this further.
At the same time we think a slightly easier conjecture would be just to get all the
triangle-free planar graphs without using gammas, so just that one little case needs
to be fixed. And I guess on the other side of things contact representations are
always really nice to have. Can we improve on using three bends to represent all
planar graphs?
So far we can get four-connected with the zed shapes, but we can't seem to keep it as a contact representation if we want to go to all planar graphs. On the other hand, we could try to strengthen the T's result: can you use something simpler than T's, some special case of T's, something like that? That's my presentation, and thank you for your attention.
>> Nathalie Henry Riche: Thank you.
[applause]
>>: Well, as Dr. Shneiderman mentioned yesterday, there are a lot of different algorithms and layouts and everything for static graphs, but dynamic graphs are still a big challenge. You know, we can handle graphs of tens of thousands or hundreds of thousands of nodes and edges if they're static, but when they're moving, at that scale there's not very much available.
So a quick definition of a dynamic graph, it’s a sequence of snapshots, right? So we
have a series of graphs with a set of vertices and a set of edges, usually there’s a lot
of overlap between those, but we want to understand this network, how it changes
over time, how the overall structure evolves, like how do the clusters in this graph
evolve over time, and maybe even track individual nodes, right?
How does an individual node move around this network? And there are some
existing methods but they often don’t scale very well to the large networks that I’m
considering in this work. I’ve yet to see a work that was handling tens of thousands
or hundreds of thousands of nodes, or hundreds of thousands of time steps.
This is because a lot of existing methods are usually force-directed, so there's some amount of computation cost there. They're usually incremental, right? So given a time step, they start working on the very next time step immediately after, but they don't consider the graph as a whole, the whole temporal range at once.
And so this can get them into local minima in the long run if the network evolves too far. They're usually based on some sort of rubber banding or gluing, right? So they take nodes and fix their positions or restrict their movement, and that's sort of what can cause those local minima in the long run.
And there’s generally a tradeoff between quality and stability, right? So if the
network changes a lot, you can either forgo the rubber banding and let the quality of
the graph layout be temporally local, or you can forgo the quality of the layout and
keep the nodes rooted at their position, right? So if a node never moves at all,
obviously it’s going to be stable but the layout will suffer.
So we wanted to be able to try to get both quality and stability for these very large
networks because at this scale even a little motion can be very distracting, right?
I'm beyond even the mental map issues. So we want a quick layout so we can avoid layout cost, we want to be able to handle large complex networks, and we want to avoid issues like information overload.
If there is too much information being thrown at the user all at once what can we
reduce, what can we simplify so that you can actually follow a large network? And
we wanted to avoid motion overload, which is what I want to say is far beyond any
mental map study that I’ve seen.
And what I mean by that is, if it’ll load, if we have thousands of vertices and -- it’s
supposed to be playing a video -- well, it loaded the first frame but it does not seem
to be playing the video.
So if we have thousands of vertices all moving very little bits, very little motion, but
they’re all moving in random directions, you end up with this chaos that’s very hard
to follow. The cluster structure is very stable but the overall motion is very chaotic,
very hard to follow, very distracting, and you can’t actually pick out individual
patterns in there.
On the other hand, if we have very smooth motion, all the nodes are still moving but
they’re moving together as groups, so they’re very stable relative to their neighbors
in this graph. And you can easily track a large-scale motion as you see that clusters
move together, you can see growth in clusters and you can even see some edges
flickering in and out as they’re not very stable. They’re not very durable over time.
So this is not even just a mental map issue, this is a simple perception issue, right? The human eye can't track that many moving objects all at once. It's better to be able to perceive large-scale motion, right?
The human eye can track a large set of clusters or even individual clusters moving
together, as long as they’re moving together and not moving randomly within that
cluster.
And so even though in both cases they were using the same clustering, ordering, and rendering techniques, in both cases the nodes moved short distances, and in both cases all the nodes were moving, there was a huge difference in what the human eye could perceive in those two examples I showed.
In the first one the nodes were moving chaotically and the eye can't track them; [inaudible], the eye can perceive the large-scale motion. So what we wanted was to extend these ideas, because those particular examples kind of sacrificed quality for stability.
So they didn’t necessarily calculate a very good layout for every time step, they
reused some information from previous time steps and kind of sacrificed the quality
of the layout to keep everything very stable and very quick.
So what we wanted to do was develop a layout that would have high quality for every time step in the entire data set, but we wanted to ensure node stability. So nodes should behave consistently: if a node does not change its edges, it's connected to the same neighbors, and the neighbors aren't moving, it should not move either, right?
So we wanted to eliminate all unnecessary node motion. When nodes do move we
want them to move together if they’re moving together. So we don’t want nodes to
be moving very randomly, very chaotically.
And we wanted to very clearly show large-scale structures like the clusters and like,
where nodes are actually tracking through these clusters. So how the nodes move
and how the clusters form, change, and do they split or merge, and when do the
clusters die off?
So what we did was a two-stage approach, basically. We have a set of temporally aware algorithms for ordering and clustering nodes within a large dynamic graph, and then we have a set of visualizations for how to actually view these clusters and use them to lay out the network.
So for the clustering step, like I said, we want every time step to be locally ideal, so we cluster every time step individually, because every time step could be completely different, right? The clusters can be different between different time steps, and we want each time step to be clustered correctly.
We use a standard clustering algorithm based on modularity, similar to the classic Newman-Moore approach, and we use this because the clustering actually behaves very similarly: there's a result by Noack showing that modularity clustering has very similar properties to force-directed layouts. So we want to preserve that structure for the layout purposes later.
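A minimal sketch of this per-slice step, assuming networkx's greedy modularity routine (the Clauset-Newman-Moore heuristic) as a stand-in for whatever modularity optimizer is actually used in the paper:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cluster_slices(slices):
    """Cluster every time slice independently by greedy modularity
    maximization, so each slice gets its own locally good clustering."""
    return [[frozenset(c) for c in greedy_modularity_communities(g)]
            for g in slices]
```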
And I don’t want to bore you with math here, it’s in the paper if you want to look it
up. But since we cluster every time step differently, we need to then map the
different clusters between different time steps, right?
So cluster three in one time step might be cluster five in the next time step, but we don't know that. So we need to go through all the clusters between two time steps, compare them against each other, and figure out which clusters actually line up, right?
Because a cluster could split, it could merge, there could be overlap between
clusters like in this example here where one node transfers back and forth between
two clusters.
And so this is very similar to a problem in feature tracking in scientific visualization, where they have features that they're tracking through a volume and they need to associate them together. So they use some sort of similarity metric between the features, compare them against each other, and pick the features that most closely match up.
So we do a similar thing using the Jaccard index, which is a set-membership similarity metric. And using this we then end up with these temporal clusters: we have these clusters throughout time, the membership changes, but we have a single cluster identity over time, and we have a set of such clusters through time.
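A sketch of this matching step in Python (the greedy best-match policy and the 0.3 threshold are my simplifications for illustration; the actual tracking may treat splits and merges more carefully):

```python
def jaccard(a, b):
    """Set-overlap similarity between two clusters (sets of node ids)."""
    return len(a & b) / len(a | b)

def match_clusters(prev, curr, threshold=0.3):
    """Link each cluster of the current slice to its best-matching cluster
    of the previous slice, feature-tracking style."""
    links = {}
    if not prev:
        return links
    for j, c in enumerate(curr):
        best, i = max((jaccard(p, c), i) for i, p in enumerate(prev))
        if best >= threshold:   # below threshold: treat as a newborn cluster
            links[j] = i
    return links
```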
And we then want to order these clusters, because there are going to be nodes moving between them, and we want to minimize the amount of motion these nodes have to make; so we want to minimize the distance the nodes have to travel.
And that's a classic minimum linear arrangement problem: we want to minimize the edge crossings in this kind of representation and minimize the length of the edges. But it's a 1D ordering, so one dimension is going to be the order of the clusters and the other dimension is going to be time.
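Since minimum linear arrangement is NP-hard, some heuristic has to do the ordering in practice. Here is a barycenter-sweep sketch as a stand-in (the talk doesn't say which heuristic is actually used):

```python
def order_clusters(tracks, n_sweeps=10):
    """Heuristic 1D ordering for the minimum linear arrangement objective:
    repeatedly move each item toward the average position of its neighbors.
    `tracks` maps each temporal cluster id to the set of cluster ids it
    exchanges nodes with (a weighted adjacency would refine this)."""
    order = list(tracks)                      # initial arbitrary order
    for _ in range(n_sweeps):
        pos = {c: i for i, c in enumerate(order)}
        bary = {c: (sum(pos[n] for n in nbrs) / len(nbrs) if nbrs else pos[c])
                for c, nbrs in tracks.items()}
        order.sort(key=lambda c: bary[c])
    return order
```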
Similarly, we also want to order the nodes within these clusters, because if a node is going to be moving up and down between the cluster and a neighbor that's above it, we want it to be towards the top of that cluster. And also, within a cluster we want the layout to be aware of the connectivity within that cluster, so that we can guarantee a good node layout later on.
So once again, we want to keep the layout quality good and we want to keep the nodes as still as we can, so we want to minimize the node motion. Also, within a temporal cluster a node will always have the same position within that cluster; so whenever it's in that cluster it will not move.
Yeah. And what this boils down to is basically another minimum linear arrangement, so we can apply the same algorithm to order the nodes that we used to order the clusters. Once we're done with that process, we have an ordered set of clusters that are persistent over time, and within those clusters we have an ordered set of node locations that are also persistent over time.
So now we can track how the nodes move between clusters and keep them
persistent. So we can look at which clusters nodes belong to, we can look at how the
nodes traverse between the clusters, we can jump to the network structure at any
time and we want to see how the network overall changes over time.
So the first thing we have is a timeline view, which gives us an overview of the entire cluster structure. So now we can start getting into the pictures. This is very simple, just like before where I showed the example of the clusters over time: we have the clusters on the y-axis and time on the x-axis. And then, using a simple line diagram, we can track how the nodes move between clusters, because each node has a given position within each cluster.
And the clusters have a given order and the nodes have a given order, so we just
plot how the nodes move between clusters. And this actually gives us a very nice,
succinct summary of the evolution of the network. So every line in this image is a
node in this network, and we can easily see how the nodes change their clustering
structure over time, right?
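As a rough sketch of how such a timeline view can be drawn, assuming each node already has a global vertical slot per time step (its cluster's offset plus its fixed within-cluster position), something like the following matplotlib fragment would produce it; all names here are illustrative:

```python
import matplotlib.pyplot as plt

def plot_timeline(node_rows):
    """node_rows maps node id -> list of vertical slots, one per time
    step, with None where the node is absent from the data."""
    for node, ys in node_rows.items():
        xs = [t for t, y in enumerate(ys) if y is not None]
        vs = [y for y in ys if y is not None]
        # one thin, translucent polyline per node
        plt.plot(xs, vs, linewidth=0.5, alpha=0.3)
    plt.xlabel("time step")
    plt.ylabel("cluster / within-cluster order")
    plt.show()
```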
So there are some clusters that are fairly consistent, although there are a couple
of blips in the beginning, for example here. Here's a node that starts off in a
completely different cluster, or rather a node that actually kind of splits this
cluster up, moves over here, and lets that cluster resume its structure.
And so it’s just a very succinct summary. So once we have that though, we want to
be able to actually look at the entire network at any given time. And so we can use
this cluster and ordering to define a layout using some previous work of ours on
space-filling curve-based layout.
So we can take this one-dimensional positioning and map it along a convoluted
space-filling curve, which maps the one-dimensional ordering to a 2-D space and
guarantees some nice properties, such as a bounded aspect ratio, so you never get
long, skinny clusters. And clusters are guaranteed to be collocated, and all this
good stuff. And we can do this very, very quickly, because it's very simple to map
a given 1-D ordering to this 2-D space.
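As a hedged sketch, assuming a Hilbert curve is used (the talk says only "space-filling curve", so the specific curve is an assumption), the classic index-to-coordinate conversion looks like this; consecutive 1-D indices land in compact 2-D blocks, which is exactly what keeps clusters compact:

```python
def hilbert_d2xy(order, d):
    """Map index d along a Hilbert curve filling a (2**order x 2**order)
    grid to integer (x, y) coordinates."""
    n = 1 << order
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:  # reflect this quadrant
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x  # then rotate it
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# e.g. place nodes in their 1-D order along the curve:
# coords = [hilbert_d2xy(9, i) for i, node in enumerate(ordered_nodes)]
```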
Once we have that, we can then apply some standard graph-drawing techniques, like
opacity modulation or tone mapping, and we can apply edge bundling. Because we
already have the clusters, it's easy to apply hierarchical edge bundling to this
network. So we can route the edges according to this hierarchy that we've already
computed, and that gives us a good view of the high-level structure of each time step.
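For flavor, here is a minimal sketch of routing a single edge through its endpoints' cluster centers, in the spirit of hierarchical edge bundling; the cubic Bezier and the beta bundling-strength knob are illustrative assumptions, not the paper's exact routing:

```python
def bundle_edge(p0, p3, c0, c3, beta=0.8, samples=32):
    """Route an edge from p0 to p3 by pulling it toward the endpoint
    clusters' centers c0 and c3; returns a polyline (list of points).
    beta in [0, 1] is the bundling strength."""
    def lerp(a, b, t):
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)
    # inner Bezier control points, pulled toward the cluster centers
    p1 = lerp(p0, c0, beta)
    p2 = lerp(p3, c3, beta)
    pts = []
    for i in range(samples + 1):
        t = i / samples
        # de Casteljau evaluation of the cubic Bezier (p0, p1, p2, p3)
        a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
        d, e = lerp(a, b, t), lerp(b, c, t)
        pts.append(lerp(d, e, t))
    return pts
```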
And then also, since we’ve precomputed all of the clustering and ordering for the
whole network, the whole dynamic network, we can then interact with it and
explore it. So we can easily jump between different time steps without having to
iterate over the entire temporal graph to get there, because the layout is not incremental.
And then we can also use animated transitions to help the user follow the changes
between networks. So once again, quick video.
And that one does not want to play, great. That’s not good. I might have something
on here. It should open with QuickTime. Awesome. Well, I guess you’re going to
have to talk to me later on my laptop to see the actual video of this, which really
kind of sucks. I wanted to show this video.
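Since the video won't play, here is a rough sketch of what such an animated transition does, assuming we simply interpolate each node between its two precomputed positions; the smoothstep easing is an illustrative choice:

```python
def transition_frames(layout_a, layout_b, n_frames=30):
    """layout_a, layout_b map node -> (x, y) at two consecutive time
    steps; yields one interpolated layout per animation frame for the
    nodes present in both."""
    shared = layout_a.keys() & layout_b.keys()
    for f in range(n_frames + 1):
        t = f / n_frames
        t = t * t * (3.0 - 2.0 * t)  # smoothstep easing for gentler motion
        yield {v: (layout_a[v][0] + (layout_b[v][0] - layout_a[v][0]) * t,
                   layout_a[v][1] + (layout_b[v][1] - layout_a[v][1]) * t)
               for v in shared}
```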
So the next thing I was going to show was another video, which was going to show a
large-scale example. So this is scaling it up to a network with 50,000 nodes and a
couple hundred thousand edges over time, over 400 different time steps.
Usually it’s about 12-30 thousand nodes per time step and 20-70k edges per time
step. At this scale you almost don’t have enough pixels on the screen to represent
individual nodes, so it’s very hard to track individual nodes usually, but large scale
patterns of behaviors are very interesting to see how clusters form, how they
evolve, and how they die off in this network.
And once again, I was going to show a video over here but it’s not going to work.
But in the timeline you can even see that there are some very stable clusters. So
what this is is a network of the routers of the Internet, the autonomous systems of
the Internet, and how they've evolved over about a decade. A very large, very
complex, real-world network.
And you can see that there are some clusters of stability. So this cluster, for
example, is actually the U.S. west coast. And you can see some other clusters that
form, evolve over time, and slowly transition out of this. And there are some other
stable clusters that form over time, right? So there are some that actually evolve;
one of these is Russia, which developed halfway through this data set and built up
a large Internet presence.
And my talk is going to run a little short because I don’t have the video. So what
we’ve developed here is a globally optimized layout for these dynamic networks. So
we use all the time steps to compute these orderings and these clusterings, and we
don’t use an incremental method, which allows us to achieve both high quality on
every time step, and stability over time, right?
So the clustering for each time step is still computed independently; even though
we associate clusters across time, we've still guaranteed certain clustering
properties for every time step. And nodes that do not change are actually kept
completely stationary, so that we can avoid excess or unnecessary motion.
And also a very nice property of this is that most of the computation is
preprocessed, so we can actually allow for a lot of interactive or very rapid
exploration of these very, very large networks, even on laptops. The actual
exploration and rendering does not take that much processing power.
And it’s very useful for showing the large-scale structures but maybe at this scale it’s
less useful for showing the individual nodes right now. But that’s one of the things
that we want to maybe work on in the future is some more advanced interaction
techniques.
So one thing that we haven't really focused on yet is optimizing the computation,
because we've done it as a preprocessing step, but there are several ways that we
could actually improve the efficiency of the computation. In particular, the
association step can be optimized, and the ordering algorithm is really the slowest
part, because that's the NP-complete part of this problem right now.
Once we actually have those clusters, we also want to consider better space
utilization, right? So back in that Internet example there was a lot of empty space
up here and empty space down here; this cluster, for example, doesn't necessarily
have to be placed all the way at the top of the network, it could have started down
here instead. And we don't actually consider how to break clusters up: we keep that
temporally consistent cluster across all of time even if it actually disappears
halfway through the data set. So that's a potential way to improve the results in
the future.
We also want to look at alternate graph layouts because, although you didn't get to
see the video, some of the clusters are actually very, very tight, so we're not
fully utilizing the screen space as well as we could. The other thing is that we
were only looking at clusters pair-wise, so we're only looking at two time steps at
a time for the association process. And so you end up with clusters that form for
one time step and then disappear. So maybe we can do a higher-order clustering and
try to eliminate some of that excess noise.
And also, since we’re clustering every time step independently, there’s a certain
amount of cost that we’re not reusing the information. So if a graph changes by one
node every time step, we’re recomputing the entire clustering every time step. And
so there should be some way to do an iterative clustering, modify the clustering over
time, instead of having to recalculate from scratch every time step.
So that’s some of the directions that we want to look at in the future. So quick
acknowledgements from NSF and CCF, through NSF through those grants, and that’s
all I have for the talk because the videos didn’t work. So any questions now and or
come see me later to actually see the videos working.
>> Nathalie Henry Riche: Thank you, Chris.
[applause]
Any questions?
>>: There seems to be a lot of clusters. I mean, somebody else said [inaudible].
>> Chris Muelder: Yeah, in the mental maps, like I said, you could only really track
five things moving. And yes, we do have a lot more clusters than that at the time at
this moment. But I find that I can actually track it and it’s much better, it’s part of
the whole scalability issue. Dealing with networks at this size we’re lucky to only
have the amount of clusters that we have as opposed to many, many more.
Yeah, the best way to answer that question is to show you the video. So I guess I’ll
have to show you the video after offline.
>> Nathalie Henry Riche: Okay! We don’t have much time before we have lunch, so
let’s thank our speaker.
[applause]
Before you all go, I just have a few announcements. The first speaker of the next
session this afternoon: please go to the registration desk and sign your form so we
can record your talk and put it [inaudible] online. And the second one concerns the
nice dinner you have tonight.
If you’re curious about the bus schedule, there is a bunch of schedules available at
the registration as well. So you can know when you’re going to go and so on. Okay?
Thank you, have fun, and good lunch!