>> Dan Fay: It's a pleasure to actually welcome Michael Docherty here to Microsoft Research to give a talk about the Cube, which is a large display at QUT, and it would be great to hear where there are opportunities not only for Microsoft and Microsoft Research, but also to collaborate with this great technical asset that's available. Thank you.
>> Michael Docherty: Thanks for the brief intro. My name is Michael Docherty and I'm from
QUT, Queensland University of Technology in Brisbane Australia. We have this amazing bit of
kit for a large-scale visualization called the Cube. What I'm going to do today is just do a bit of
show and tell on a few things that we've got going there. I'll talk about how we did it briefly,
talk about some of the content projects that we have there already, talk about where we want
to go and end with a few major research questions for this area and hopefully they'll engender
some interest. This was a project that started some years ago, [indiscernible] over at Kean, and his intention for the Cube was to showcase science and technology in a whole new building, around the 300 million dollar mark. It's a great building to work in for researchers, but it has a very
important task which is to inspire new students to be engaged with science, and to some extent
research. The problem worldwide is getting students interested in science. The Cube, it was
called the Cube because the very earliest ideas were for it to literally be a Cube. It would be
running over two stories and you had four panels either side. Each story had eight panels and
you could then go inside of it which is a common idea for an immersive space. There's a whole
lot of reasons why we didn't go ahead with that. It eventually became two sides of that Cube
so this was the final design. That's a two-story space, lots of projectors, lots of touchscreens.
On the other side of it is a one-story space with again, projectors and touchscreens and then
above that, again two more spaces, some with screens and some with just projectors. I'll show
you this was the original design that went forward with the people that had developed it
physically. This also I found recently is an old sketch when we were having an elaborate
discussion, some would say argument, about what we would do in terms of the technology and
I can assure you there was a great deal of discussion about what should be there. Broadly,
we've got some fairly interesting kit to do with what controls each one of the panels and I'll go
through this in more detail later in the talk, but I just sort of would like to show you that there's
lots of bits and pieces and lots of fairly sophisticated background processing behind it all. This
was a more developed image of the final design and it looks realistic, but is in fact a CAD model
of the design, so that the building that was to be there was very elaborately designed to make
sure that it was going to be what we wanted. This is in the process of building it. You can see
bits and pieces, lots of kit, lots of frames. This is the spot where all of the touch panels were
placed and you can see the brackets there. They are designed so that you can put them in and
take them out again as you needed for maintenance and so on. You can also see there's a fair
bit behind and you can walk in behind the system. This is how it is today. This is, in fact, an
image of the Cube. You can see the project there which I will talk some more about which is
the virtual reef. You can see the stairs. You can see someone standing at level 5. It's a two-story space with a lot of projection. Equally, around the other side, the one-story space and a
project there called the physics playroom which I'll go into more detail. The other side, again,
another project. This is called the history wall or the data wall and this is a geo-located data
place and in this case the first project was information about the flood we had in our city two
years ago which took out the CBD. Upstairs, the four panels. There's usually much more going
on there, but this is where students and other people can try out touch events and so on.
There's a couple of games running there as you can see. And the final space which has a
number of projectors which can be blended or separately run and there are some seats on the
left and people can plug in their computers and use that space as they wish. Another important
point is this is a public space so here is the entrance to the Science and Engineering Center and
you can see the physics playroom by the bottom level, first level of the Cube, and it is very much a
public space and it's very much a space that people see as they enter, and of course, we have a
lot of visitors. We have about 12,000 a month.
>>: Is that front to back, so on the other side, is that the large display?
>> Michael Docherty: On the other side is the large display, yes. So this is just showing that as
you come in the door, you see it. People walk up to it and start touching and playing with it and
of course going around is the reef. We named these spaces, zone one, two, three, four, five
and six. Just out of interest, some details there. It's two stories high as you can see, 20
multitouch panels, ten on each side of the wedge space, large projectors, high lumens. They're
3-D capable although we don't have a project that uses them at the moment. For the virtual reef, which is on this side, we're using a 3-D game engine called Torque, but we are moving over to Unity pretty soon. It runs over multiple screens and we'll explain that in a moment. It interprets touches into that, and there is also info panel content which I'll show you. For the audio, there
is a big woofer system behind the wedge. There are ambient speakers hung about inside and
there are also individual speakers under each panel so that you can localize sound when you
touch the walls. Coming around, you get, this is in zones three and four; this is still on the
bottom level. We've got 12 of these panels across the bottom. You can see the resolution
there. We've got three projectors which blend across the top. We've got the loudspeakers and
we've got RFID readers which allow you to come in and swipe your staff card and then you
have access to putting your information up onto this display space, and I'll show you some of
that. USB connectors under each panel allow us to do anything else you want. Coming up to the top, we call it zone five and zone six. Zone five has got the four panels, speakers, blended audio and RFID, and on the other side it's the projector space. We
designed specifically a range of different types of projection and display space to allow us to
explore things, and so this upper level tends to be where we try out things or where students
put up their projects. This is an old sketch that, as I said, goes way back, and I just thought I'd go through some of the tech that's around it. Essentially, we've got one PC per two panels, with two CPUs and two graphics cards, a lot of capability, and we can run
each panel as a separate high definition space or a blended double space. You can see we've
got a fair bit of storage there both in RAM and SSD as well as hard disk space so that local
applications can have a blend of what's there and what's not there. It depends on what application we use; for example, the reef has a dedicated server which keeps control of the
state of the reef and it keeps track of where everything is and then sends that information back
to the local processors for actual visualization and rendering. Some of the other applications
we have there do things differently. They either have a local server or they are using the local
PC as a server. As the diagram suggested, we've got across those spaces 54 high-definition
displays. We have the GTX 680 cards, which are fairly advanced; they've probably got another year or so in them, and SLI across them in order to speed up processing. We went out to tender with SGI, Dell and HP, and SGI were very keen and did a lot to make sure that they came up to our specifications. It's essentially a Windows box but running all of that kit there as
you can see and using some fairly high-end graphics cards. One of the reasons for doing this is,
again, for future tweaking so we can take out the 690 and put in the 790 when it's available and
get all of the resources we need there. We try to keep it future proof and over spec as long as
we can continue to fund that, so there's no limitation as to what you can do. We have a
number of servers running Windows 7, but that seems to work. We've got a Linux server as backup there which is really just command and control. The Win 7 servers take care of the states of the
system and also look after the projectors. 64 CPUs all up. Small supercomputer and it is
possible to have all of them doing one thing at one time if you really wanted to do that through
the Linux system. There are three independent networks running across it, so we can have
external things brought in such as Google Maps or WorldWide Telescope. We also have
internal stuff which is the QUT network which is where we use the ability to bring up your own
information from your own file system and we have a dedicated server just to handle the touch
events running back and forth. We were a little concerned when we were designing this, and by the time we got it together we were able to test it. We loaded the system up to its maximum and the data was showing .95 network use and we started to get a little bit worried, so we overstressed it and it went up to 1.2, and then we realized it was 1.2 percent, not 95 percent, of the system. We have not yet gotten anywhere near 3 percent load on the network, so there's plenty of room for more data, and that's what we're trying to do: use this very large data. Quickly there, depending on the projects, I'll go through those in a moment. A bunch of
kit there, of course, with Node.js coming to the fore. JavaScript is the thing we mostly use but there
are some other tools there. Extempore is the language that one of our people, Andrew
Sorensen wrote for the physics playroom and I'll talk about that some more, mainly to allow the
physics to go on seamlessly but also for the networking and I'll talk about that when we show it.
Another important point to make is it's not actually touch panels. It's infrared or computer
vision. Those panels that you see are not responding to pressure or touch; they're actually
responding to seeing you and we can calibrate that differently depending on the application
and you could actually use it to pick up a body at about 30 centimeters or just
over a foot, so you could use it to know that someone was standing in front of the screen and
then do other things without people actually touching the screen. We haven't done that yet,
but it is possible. Behind each of those panels are 32 infrared cameras. They track fingers and
hands through image processing. We haven't yet got to any limit with that. The important
thing is that you can have multiple hands, and that was what it was about: kids, students,
whatever, you can have two or three people or more on the same panel at the same time. To
test the system we had it hammered at one hundred touch events per second for 14 hours and
the systems were still running quite well.
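To give a feel for what a soak test like that involves, here is a minimal sketch in JavaScript of a synthetic touch generator of that kind; the event shape, panel count and the dispatch function are illustrative assumptions, not the actual test harness.

```javascript
// Synthetic touch events fired at 100 per second for 14 hours (a sketch).
const PANEL_W = 1920, PANEL_H = 1080;     // one full-HD panel
const RATE_HZ = 100;                      // touch events per second
const DURATION_MS = 14 * 60 * 60 * 1000;  // 14 hours

function randomTouch() {
  // A touch is essentially just an x,y on one of the 20 panels.
  return {
    panel: Math.floor(Math.random() * 20),
    x: Math.random() * PANEL_W,
    y: Math.random() * PANEL_H,
    t: Date.now(),
  };
}

function sendToTouchServer(evt) {
  // Stand-in dispatch; a real test would emit an actual network packet here.
  console.log(JSON.stringify(evt));
}

const started = Date.now();
const timer = setInterval(() => {
  if (Date.now() - started > DURATION_MS) return clearInterval(timer);
  sendToTouchServer(randomTouch());
}, 1000 / RATE_HZ);
```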
>>: [indiscernible]
>> Michael Docherty: No. So these panels come from a Finnish company called MultiTaction.
They have a branch here in the U.S. We were one of their first customers because this goes
back a few years and the version that we got with the 32 cameras is the fourth version of the
technology and with an ultrathin bezel which is just under two millimeters, so that you can stack them
together. The first version of this when we first started playing with this had two cameras, then
four then eight and then they jumped to 32. The reason you want so many cameras is for that
blend area as your fingers move across a spot between cameras, so that it's seamless and
accurate. It also allows for this public environment, because the panels themselves, the external
screens are just plastic. It doesn't matter if you scratch them or they are damaged. They are
very hard and they don't scratch at all easily, but it means they can be just replaced and put
back and you're not paying a lot of money for that, so it's not touch which in a public space is
important. They are very accurate and you can configure the sensitivity of where it picks up an image, so we can configure it so that it has to be just on that millimeter above the surface of the screen
to register a touch event.
>>: [indiscernible] computed on the server touch [indiscernible]?
>> Michael Docherty: Yes. One of the networks takes all of these touch events. A panel registers the touch event, which is essentially just an x,y; you get the raw data and it goes to the touch server using the TUIO protocol, and then, depending on your application, it will take that information and do with it what you want for your application. That way it's processed separately, so the application is not trying to deal with all of that as well. If you've got a lot of redundant things, or people doing things that don't mean anything to your application, they are going to be essentially ignored, but the touch server is monitoring everything.
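As a rough sketch of that routing idea in JavaScript, purely for illustration: the event shape and names below are assumptions, not the Cube's actual code, and the real system carries the raw touches over its dedicated network using the TUIO protocol.

```javascript
// A sketch of the routing idea only; event shape and names are assumptions.
const { EventEmitter } = require("events");

const touchServer = new EventEmitter();   // sees and monitors every event

// An application subscribes for the region of the wall it occupies and
// decides what a raw x,y means for it; everything else it simply ignores.
function subscribe(appName, region, handler) {
  touchServer.on("touch", (evt) => {
    const inside =
      evt.x >= region.x0 && evt.x < region.x1 &&
      evt.y >= region.y0 && evt.y < region.y1;
    if (inside) handler(evt);
  });
}

// e.g. an app that owns the first ten panels of the wall:
subscribe("reef", { x0: 0, x1: 10 * 1920, y0: 0, y1: 1080 }, (evt) =>
  console.log(`reef touch at ${evt.x},${evt.y}`));

// A parsed TUIO cursor event is essentially just an id plus an x,y:
touchServer.emit("touch", { id: 7, x: 4200, y: 512 });
```

So just to talk about the applications, the first one I'll talk about is the physics playroom. There is this link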
and these slides are available. So if you look at that vimeo, you will see this in action. I don't
have time to show you now I don't think. That's Andrew standing there. You can see the scale
of it. What's interesting is you got this 3-D world that's a nice scientific laboratory. This
application shows the first 13 chapters of physics, university physics textbook. It's got
everything from Newtonian physics through to fluid dynamics through optics through audio
pulse generators and so on. It's all there. There are infrared panels that you can bring up
across it to explain all of that. What you're seeing there is just gravity turned to one of the planets, Pluto I think, so all of the blocks start to float. If you choose another planet, which is simply done on a menu there, boom. They all drop to the ground. If you make it Saturn or
something they are very heavy to move. On normal gravity you can touch one of those blocks.
You can lift them up and you can build things, make blocks so the kids can do that, or
interestingly, you can flick a block from one side to the other. Each two panels there are run by one PC, and across the top are three projectors all run by a separate PC. That means when you pick up
a block on the left and just lift it and flick it across, it will go all the way across to the other side
so you could actually have a tennis game. What is interesting is it's running across a number of
machines seamlessly, and one of the things that Andrew did was to write the software so that that could happen, so the physics would be correct but also the networking. He does that through time synchronization, not through normal network synchronization pulses. Part of the reason he does it is because his earlier work was on music software, live performance and live coding, so this thing about timing was something that he's very much involved in. This is what his PhD was about. He teaches compiler design and admonishes students for not understanding what he's talking about.
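A minimal sketch of that timing idea, written in JavaScript rather than Extempore, and very much an assumption about the approach rather than Andrew's actual code: if every machine steps the same deterministic simulation against a shared clock, they agree on the world without exchanging per-frame sync messages. Touch inputs would still need to be broadcast with timestamps so every machine folds them in at the same step.

```javascript
// Shared session start, handed to every machine at launch (assumption).
const EPOCH = Number(process.env.CUBE_EPOCH ?? Date.now());
const DT = 1000 / 60;            // fixed physics timestep (ms)

let state = { x: 0, vx: 0.12 };  // a block flicked across the wall
let step = 0;

function advance(s) {
  // Deterministic integration: same inputs give the same result on every PC.
  return { x: s.x + s.vx * DT, vx: s.vx };
}

function render(now) {
  // Catch the local simulation up to the step implied by the shared clock.
  const target = Math.floor((now - EPOCH) / DT);
  while (step < target) { state = advance(state); step++; }
  // Each PC then draws only the slice of the world its own panels show.
}

setInterval(() => render(Date.now()), DT);
```

Fluid dynamics is there. There are a couple of portraits of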
famous people and you can fiddle around with that. There's also a fireplace and you can play
with that and so on. The other project is called CubIT and this is about where you can come up
and interact with your own information. You can swipe your staff card and it comes up with a
little panel which is your file system if you like. You can have images, text, PowerPoint and
videos in there. From the little panel that's sitting there in front of you, you drag it off your panel onto the screen space and then it is there for you to resize, to manipulate, to shift
across. You can throw it across, move it across. You can take it over and put it on someone
else's directory space and then it's transferred to them. You can also lift it up and put it to the
top band of the touch panels there and then it automatically gets taken to the projector space
and is projected. If it's a video it goes full-size and the audio comes up. If it's a PowerPoint, it
goes into PowerPoint presentation mode and then you simply flick your hand against the touch
panels and it will flick through each of the slides of the presentation. As you can see here, this is a poster display from a conference we had there, so we used it a couple of times for
conferences in the space and when people come in and when they register they get a
temporary swipe card and their conference information is up there and the conference
presentations are there. So instead of giving them a book or whatever, this is what we give
them, so we've had some interest in that in a general sense. This is also a poster display of that conference. This application can go on any of the sides of the Cube just as any of the other
projects can. In this case a number of people, again, I think the limitation is about 20 people at
once can have their information up there. You just drag it onto the display space. You can
resize. You can turn it, whatever. And again, if you flick it up it just goes up to the full projection.
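A hedged sketch of that flick-up behaviour follows; the thresholds and names here are invented for illustration, not CubIT's actual code.

```javascript
const TOP_BAND_PX = 150;   // top strip of the touch panels
const FLICK_VY = -0.5;     // px per ms; negative means moving upward

function onRelease(item, touch) {
  // Released in the top band, or flicked upward fast enough: promote it.
  if (touch.y < TOP_BAND_PX || touch.vy < FLICK_VY) promote(item);
}

function promote(item) {
  // Videos go full size with audio; PowerPoints enter presentation mode.
  if (item.type === "video") console.log(`${item.name}: full-size playback`);
  else if (item.type === "ppt") console.log(`${item.name}: presentation mode`);
  else console.log(`${item.name}: shown in the projection space`);
}

// e.g. a deck dragged up and let go near the top of a panel:
onRelease({ name: "talk.ppt", type: "ppt" }, { y: 90, vy: -0.8 });
```

One of the other projects is called the history wall or the data wall and this is a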
project of geo-located images. In this case we decided to play with the story of the flood in
Brisbane two and a half years ago. A large part of Brisbane was flooded. This is
something that's called the community wall because you can go to the website, upload your
images, with a bit of moderation, and then they are geo-tagged and they come up, so you can come in here, go to the area you're interested in, touch on the space and up will come your image
that you put up. This is an opportunity for us to capture all of that information that thousands
of people took images of during the flood and it is not lost. It becomes a public environment.
What you are seeing here are a few explorations of that interface in terms of time and how we
wanted to present the information. It also accepts video and various explorations. This is the
account of it, but you can't quite see that text. Because of the way this space is, this is where we get into the questions of interaction. With the reef, which I'll show you in a moment, it's a seamless space, and in the physics playroom it's a seamless space between the touch panels and the projection. In this case it's not, and with CubIT it's not. If no one is there then this recedes back to an image of the river running across the bottom. It's a long river.
When you come up to each one of these panels and touch it, it separates from the others and becomes yours, and you can change the scale and manipulate it and do whatever you like. As you walk away, after a minute of no activity it will just sink back in and blend with the rest of the receding image.
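The one-minute rule is essentially an idle timer per panel; a small sketch, with illustrative names:

```javascript
const IDLE_MS = 60 * 1000;
const active = new Map();   // panelId -> idle timer

function onPanelTouch(panelId) {
  if (!active.has(panelId)) console.log(`panel ${panelId} separates; it's yours`);
  clearTimeout(active.get(panelId));       // any touch resets the clock
  active.set(panelId, setTimeout(() => {
    active.delete(panelId);
    console.log(`panel ${panelId} sinks back into the river image`);
  }, IDLE_MS));
}

onPanelTouch(3);   // separate panel 3; it rejoins after 60 s of no activity
```

On all of these sorts of display spaces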
you have this issue of what to do when there's one or five or ten people and when someone is
close or not close to the screens: how do you deal with those different times and different environments? Those are some of the questions we're trying to address with the WorldWide
Telescope project. The virtual reef is the large project inside the wedge and the one I was mostly involved with. As you can see, it's got some sense of scale, particularly when you are at the upper level looking down, because all of the species in here are built to actual scale; when the whale comes in, it is a 12 meter whale, and you do get a sense of scale. It is the correct size. Mostly people interact at this level. Between the
projection and the touch panels it's a seamless blend, so as a fish wanders across from the projection space into the touch space, there's no sense that they are not continuous. There are 54 fish species here. There are 17 different coral species. The fish species are all AI-driven with behaviors, so the little fish run away from the big fish and so on. Despite us programming
it, we weren't allowed to have the sharks eating things. It was decided that was inappropriate
for some reason. The manta rays come in and do the dance that they do, a very graceful spiral dance to collect fish, and so we have that. A whale comes in; there are some calves, a mother and calf sort of activity. We've got a boat that comes in on the
top and a diver comes off and so on. Most of these are fairly prescribed behaviors because
there's nothing to make them react; the whales don't move out of the way for anybody and so on, but it is all still AI. Fairly good resolution as you get up closer. If you come in close and you touch one of
the fish for more than half a second, you grab it. You collect it into your little space here. That
image that you see in the middle you can rotate and flip around and so you can have a look at
these things. If you touch the i at the bottom, you get an information panel like this, which is a Chrome web browser, and at the top right you can see that you can snap that onto your iPhone or your smart phone and it will take you to that website with that information and more
information. This is how we connect with schools so they can connect into that and into that
website and add information and do things to add to the curriculum. These info panels have
got images of the fish. Some of them have videos and so on, so there's quite a lot that we can
do there. This is the question of the near and far. At the far, you're standing back and looking at this amazing reef. I mean, we do have whale song and the whales come in and so on. Localized, when you are touching these things you get the audio feedback that you've done something and so on. But this question of when people are standing close, they've got their
own little environment and they can do their own information seeking and standing back see
what it is. Our question is, what other 3-D worlds, what else would you do? With an
environment like this you've got a problem when it's this sort of scale. What would you put up
there? In the original project discussion there were five projects talked about. One was the
virtual reef. Another was a desert landscape. It's big but has life at dusk and night so that was
a consideration. We were going to do space, stars. We were going to do mountains and the Snowy Mountains Scheme, which is a large project in Australia that returns the coastal rivers back into the inland. And there was a prehistoric forest, the classic dinosaur exercise that was
going to be developed. We ended up going with the virtual reef for various reasons, not
unusually because it's also iconic of Australia and Queensland. We still have this question what
else are we going to put out there. Luckily, we are working with you guys on the WorldWide
Telescope. It seems to be an ideal project because you've already done all of the content for
us. When we were developing the reef there was a lot of time and money spent making all those AI creatures. They all had to be modeled. They all had to be rigged. They all had to be textured. They all had to be given behaviors. It took a lot of time, as did the environment; all of the coral took a lot of actual physical building time. We've got this wonderful content. The
issue is the interaction. So this is how it is with your web interface. The obvious question becomes: how do we get that up into that space, where you've not only got
interactions down at the bottom, but you've got this problem that you might have one person,
five people, ten, 50 people. You've got school groups and so on, and how do you interact with
this environment at the near or far scale. It's not appropriate for one person to be coming up
and touching the panel and controlling all of this all of the time. How do you restrict that in
some realistic way? So we have some ideas. The first one is called vanilla sky and what that
will be is simply taking what you've already got there, top and bottom and putting it all down
on the bottom. By putting it across those panels and giving those same sort of icons there
for choosing a tour or choosing a location, and running it all along the bottom. The difference is
that when you touch one of those, it's selected. It will draw out and it's going to play, but
someone else could touch another panel immediately afterwards and you can't have it
switching between. So we're implementing what I've called a jukebox model, where the choices sync up and line up: when you touch your panel it goes gray, meaning it's listed to go, and it comes back with a number saying when it's coming up. Then we'll just run it through that sort of process.
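A small sketch of that jukebox model in JavaScript; the names and the timed stand-in for the camera move are assumptions, not the actual implementation.

```javascript
const queue = [];
let playing = false;

function choose(panelId, destination) {
  greyOut(panelId);                       // listed to go, can't be re-chosen
  queue.push({ panelId, destination });
  console.log(`panel ${panelId}: queued, coming up #${queue.length}`);
  if (!playing) playNext();
}

function playNext() {
  const next = queue.shift();
  playing = Boolean(next);
  if (!next) return;
  console.log(`flying to ${next.destination}`);
  setTimeout(playNext, 3000);             // stand-in for the tour playing out
}

function greyOut(panelId) { console.log(`panel ${panelId} goes gray`); }

choose(4, "Mercury");
choose(9, "Saturn");   // plays after Mercury, in order
```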
So that means the near and far interaction is: you come up close, you stand back,
you see what you want, you go there, you choose that. You're going to go to Mercury or
whatever, and then you're going to step back and watch it. Yes.
>>: That raises the question of how your design process handles that. I'd be interested to see if you actually do what you don't want to do, but let people actually start a new thing by touch. It would be interesting to see whether you can give people feedback that this is not acceptable if something is already happening, or whether you actually force them not to do that and have the queue model. How much, in the way that you design those things, do you account for: okay, we need to learn how people are going to interact with this and we need to change how they behave through all this?
>> Michael Docherty: Interesting question. We could explore that. At the moment the model
that is going to be implemented, and is almost done, is you simply come across and you choose one. It's then not available to be chosen again for a moment and it's synced up in a queued model, and that's what we're going with at the moment. It would be interesting to turn that off and see how people behave. I do think, because we get school groups through, that no amount of -- the moment you say don't everybody press everything at once, that's exactly what they will do. We don't have a system that would respond, actually, if you -- I think. I'm not sure that if you click, click, click, click, click, what does it do? Does it just take the last click? I'm not sure.
>>: Yeah. It will jump from one to the other. Yeah.
>> Michael Docherty: Yeah. So the queuing couldn't be long, because if you've got a lot of things queued up it zooms off to each location in turn, so we have to explore that. How long do we let it do that? Do we let it sit at the location at the end of a
choice for some moments? I think we have to experiment with that. We do have some ideas
about what to do when there's nobody particularly interacting with it. We have some slow tours running and things like that, so there's a whole lot to think about in terms of what you do
when people are just standing back? I mean, do you have a planetary model? The next
version; there are three stages of this. The first stage, what I call vanilla sky, is this sort of idea and the jukebox queue of choices. The second one is called the solar sky, which is similar to this,
but it's because of our need to talk to students and it will have the solar system kind of locked
in as your choices, and across the bottom panels will be each of the planets, doubled up across the ten there, although the two inner ones in the focus of the wedge would not be active because people can't really stand there. And the students are going to inquire about the planets and get information about them, including the sun, because it's part of their year 12, or K-12, curriculum, and so we'll turn that on when the school groups are coming, if that's
what they want. And the third area is the tours, and that will in a sense be the default for what's running anyway, but the choice would be for people to select one of the many tours that are already in there and, as we've talked about today, to actually evolve a way where they could
actually make their own tours. This would be an interesting exercise because it has to be done
at the touch, simple gesture level so we've got to explore that and we've already come a long
way today looking at how we can possibly do that. After that we're looking at gestures, so
we're looking forward to getting the new Kinects where we can get the information we want
because we want to explore not just one person standing back, but a bunch of people and have
the idea that if half a dozen or more people will start doing a particular gesture the system is
going to recognize that and then say now you can control it. So if you all sway your arms the
right way, you know, the crowd surfing type thing, or the football crowd wave, then it will
move. So I'm trying to make a bit of a game out of it, but it requires cooperation and I think that will work well with the students. I hope it will.
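As a sketch of that cooperative-gesture idea: the skeleton data and the thresholds below are assumptions, with the real input coming from the new Kinects.

```javascript
const NEEDED = 6;   // half a dozen or more people acting together

function crowdGesture(trackedBodies) {
  // Count tracked people whose hand is sweeping the same way.
  const sweeping = trackedBodies.filter((b) => b.handVelocityX > 0.5);
  return sweeping.length >= NEEDED
    ? { control: true, direction: "right" }   // the crowd wave moves the sky
    : { control: false };
}

// e.g. seven tracked bodies all swaying their arms the same way:
const bodies = Array.from({ length: 7 }, () => ({ handVelocityX: 0.8 }));
console.log(crowdGesture(bodies));   // { control: true, direction: 'right' }
```

So that's where we are at the moment with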
this project. But really what it's trying to do is demonstrate the potential for this facility to look
at this question of large-scale displays and also this question of large-scale data. The research
we are doing, so we've got this wonderful kit, all of about $5 million worth. It's there to
illustrate and to show off and to inspire people with science. We have some other projects like
groundwater display, a project that is available there and so on. We've got the genome
utilization projects and we've got a chemistry project, so there's lots of interesting visualizations
of scientific knowledge and we also have a lot of high resolution images that people take of
microbes and so on, the eyes of little beasties and so on and put them up there. So we have all
this sort of stuff, but how do you get people to engage with it and be excited by it? With the
reef, we will evolve to a point where people can do a little bit of experimentation themselves,
play with a transect line, play with a microscope or a magnifying glass, try to do a little bit of
what a real scientist would do and have the model where they can click and select the
information, open up their own bag and put samples in it, and when they go home, on the website they get it and they can play with it on their computer in class. So there are some ideas there about where you actually learn through discovery rather than
through a more didactic process which we've got at the moment. I have to say that we went
for the more didactic process because that's where the teachers wanted to go because they
were safer with it. I can't get that off the screen. I can't get a cursor; that's the trouble.
Nobody sees a cursor.
>>: [indiscernible]
>> Michael Docherty: Sorry?
>>: Press return.
>> Michael Docherty: Press return? Nope. I might have to escape out of that and there we go.
There we go. It's all right. So the other question is having done all of these things and gotten
that impressive kit, impressive applications and the WorldWide Telescope is going to be a great
one as well, what else do we do with it? The question that we're looking at is what visualization
process would be the most useful for displaying large amounts of data in a way that allows
meaningful discoveries. With the genome project there are some tools such as BLAST and TrimDif [phonetic] which allow you to visualize genomic data, but the people that use this sort of stuff are used to fairly bland ways of looking at data, and utilizing data is the real core of the problem because we've got so much of it. We're drowning in it.
We've got to be able to visualize it in ways that make sense to humans. Get the computer to do
what it does well and let us do what we can do which is to see patterns and connections and
networks. So, this is what we're trying to do with the project. These are the research questions
that we are considering. We've got a number of students, PhD students mostly looking at some
of these questions and we hope to develop these connections with you guys as we can. That's
really my talk. Are there any more questions? [applause]
>>: I'm sorry for showing up a little bit late. Maybe you already covered this but I have a
couple of questions. Are you using the commercial [indiscernible] engine to operate some of
these things or are you rolling your own?
>> Michael Docherty: For the virtual reef we used the Torque 3-D engine to start with. We are evolving all of those assets over to Unity because it's more flexible for what we want to do. It
also makes it more accessible for the students because all of the students can do stuff with
Unity, but not with Torque. The physics playroom is the other 3-D world. Andrew wrote the 3-D visualization himself. I imagine it's a bit of code that is available and pretty simple
these days, so that one wasn't commercial. I mean the Extempore will be made available. I
think it is through the Australian National University where he did his PhD, but it's probably not
easily accessible because I'm sure he hasn't documented it.
>>: The other question that I had is there, some of the visualizations are inherently 3-D. In
other words they have a perspective, so that implies that you probably have a sweet spot. How
big is your sweet spot and do people complain about it?
>> Michael Docherty: I skipped over that one a little bit. You are absolutely right. For the reef
we have this problem where you can't just have one viewpoint because it's not going to look
right when you move around and we did spend a lot of time playing with exactly that problem.
The way the reef works is that you've got one state of that 3-D world, which is on the server; it keeps track of where everything is and what they are doing, the touch information goes through to it, and then it sends it all back to the individual PC that's running either a pair of screens or one of the projectors. The information is then locally rendered. As the fish swims from one panel to the other it's literally
going from one computer to the other or into the projection space and back out again. In order
to do that, one of the reasons we used Torque in the beginning was because we needed to be
able to manipulate the code as open source. So each one of those two panels, each one of
those projectors is a portal into that particular part of that 3-D world and it's all blended.
We've got very subtle variations in the viewport of each one of those portals so that when
you are there you don't feel that the perspective is wrong. You actually feel when you walk
around that space that you are just looking into this world, this reef through a big glass panel.
But yes, it was an interesting question as to how to do that.
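As an illustration of that portal idea, here is a sketch of how per-node cameras might be spread; it is not the reef's actual code, the server owns where everything is, and all the numbers and names are illustrative.

```javascript
const NODES = 14;        // one renderer per pair of panels or projector
const SPREAD_M = 1.0;    // total spread of the eye points, about a meter

function cameraFor(nodeIndex) {
  // Spread the viewpoints evenly rather than rendering everything from
  // one fixed spot in the middle of the room.
  const t = nodeIndex / (NODES - 1);
  return { eyeX: (t - 0.5) * SPREAD_M, eyeY: 1.6, eyeZ: -3.0 };
}

function renderNode(nodeIndex, worldState) {
  const cam = cameraFor(nodeIndex);
  for (const fish of worldState.fish) {
    // Project fish.position through cam and draw locally on this node.
    console.log(`node ${nodeIndex} draws ${fish.species} from x=${cam.eyeX}`);
  }
}

renderNode(3, { fish: [{ species: "whale shark", position: [12, -2, 5] }] });
```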
>>: [indiscernible] you can only assume that there is a certain perspective there.
>> Michael Docherty: Well we send the information to the projector from the state machine
which is keeping track of the whole 3-D world and we change the viewport that goes to each of
them just subtly, so that the net result to you as you are standing there is that it looks okay. If we had just one viewpoint and sent it out, and we generated the 3-D world from that and then gave
each piece of that to all of the projectors, you actually get a sense sometimes that things are
not quite right, so we actually adjust those viewpoints, spread them out a little bit.
>>: Did you have to go [indiscernible] yourself to build [indiscernible]
>> Michael Docherty: But there is multiple viewports, camera positions if you like.
>>: Do they align with the projector or sort of in between?
>> Michael Docherty: The highest projector’s viewport is a little bit below it so then when you
are looking up at it it looks right, particularly because you've got movement. If it was static it
would be easy, but when the whales come in because they are very big, some fish or the whale
shark comes in they are very large and so…
>>: Just [indiscernible] if you are going to have the thing going from one screen to the other
screen and if they are aligning, if they are aligned, then you are assuming a certain perspective.
These are perspective renderings I presume.
>> Michael Docherty: Yep.
>>: You're rendering perspective. That enforces that you have one viewpoint, so you have to
have, you have to pick somewhere in the room, in your environment, you have to pick a
location where this scene is rendered from. I understand that the whole distribution of it, but
where, I mean logically you are kind of taking it in the middle, I guess, which would also mean
that anywhere other than that location is going to be slightly off and more slightly off the
further you are from that particular point.
>> Michael Docherty: Like I said, the core server that's holding the state doesn't
do any rendering. What it does know is where all of the fish are and it sends that information
to each PC which then does its rendering of that particular 3-D world or that particular window
into it.
>>: Sure.
>> Michael Docherty: So we…
>>: But from what perspective? I guess what I'm asking is for perspective. Maybe we can dig
into this later.
>> Michael Docherty: Yeah, but it is slightly different each time, just slightly. It's not all rendered from one point; because it's all different machines, we don't have to actually render it from one viewpoint. Each machine is doing its own, so there are 14 different versions.
>>: But the whale is coming from one end needs to exactly connect to the whale running
through the other one. Then, they need to have a shared viewpoint between the two or very
close viewpoint.
>> Michael Docherty: They have a shared state but as they move between the viewport is only
subtly different and it looks perfectly natural.
>>: Because of the bezel?
>>: Only if they are really, really close by.
>> Michael Docherty: Yeah, they are.
>>: You can't have a viewpoint on one side of the room…
>> Michael Docherty: No, no nothing like that.
>>: If closer they are going to look…
>> Michael Docherty: No, no, nothing that major. It's very small; they span about a meter in real terms if you are standing back.
>>: Right. And I'm asking where is that one meter? Where is that point?
>> Michael Docherty: It's about midway back, but we basically spread things out a little bit so
that when you are standing…
>>: I think the question is, when you are standing, are you talking about the actual panels at
this point or actually the large displays, so the projected ones?
>> Michael Docherty: They all have, each PC sends off its view of the world. The image is
blended, but its view of the world is not taken from exactly the same camera point for each PC.
It's spread out a little bit, about a meter in real terms.
>>: If you look at the whale there will be a sweet spot somewhere where it looks great, but
when you get closer to the screen you physically have to move back to have the right
perspective because of that.
>>: If you go back to the image of the reef, where's a big one.
>> Michael Docherty: Trust me; it's magic. There is a side view and there's looking there. That's looking down. I have never noticed any odd sense that it is out of perspective. That one is there. We did experiment with it a lot, and one of the programmers spent an awful lot of time and an awful lot of argument with me about just that, basically.
>>: You might actually see a phenomenon here which is really interesting, which is that the scene that you have here has an immense amount of blur from anywhere that's not really in the screen space, so as soon as you are depicting something that is more than a meter or a couple of meters in virtual space behind the screen, it's already so blurred, because it's water. You have the water.
>> Michael Docherty: Yes. That's right.
>>: So therefore, your approach, applying here to the reef is not going to work for the stars, for
example.
>> Michael Docherty: No.
>>: And there you will see this problem much, much more than you have here at the reef
which is basically blurring everything that is not really right in front of you.
>> Michael Docherty: Like I said, with the WorldWide Telescope, we experimented with this
and we put it all up in that space, and as far as we can tell, without any tricks other than the
projector blending, it looks fine. What we're really doing there is putting the normal one
window across the four projectors above. In the testing we've done on it, it seems to look fine.
Maybe it's just the scale. Even when we've zoomed into the planets and we've done, we've got
Earth in the middle there and sort of looked at that and zoomed in, it seems to be fine.
>>: With the stars it's going to be okay because it's really like 2-D bits. It's the 3-D part that we're building.
>> Michael Docherty: It will be interesting to see.
>>: What we should be noticing is that when you're not standing in the sweet spot and you have a big planet that should actually look circular, it will not be a circle. It will basically be an egg-shaped thing, an ellipsoid, something. It has to be, perspective-wise, if you render something from a particular perspective…
>> Michael Docherty: Like I said, in the case of the WorldWide Telescope we are not rendering
it; you are. It's coming directly from…
>>: Fair enough. But the experience, I guess is what I'm trying to say…
>> Michael Docherty: That is an interesting problem. We, at this stage have tested it in that
space and put it up and had the planet in the middle, you know, where the edge is because we
are concerned about that look, and so far it seems okay. We've still got to do a bit more,
maybe it's the scale, but it looks okay. I think the important thing is that nobody seems to
mind, in particular, with the stars it doesn't matter, but the planets we can play with a bit more
and see how it looks, yeah.
>>: What's interesting with the reef [indiscernible] multiscale, what plays really well is how you can have the big space on top and the small space on the bottom, and you can get all of the details in the small space and all of the interaction in the small space. I'd actually be curious how you can interact with the whale to get information on the whale; that's another question. I think it plays really well, and I wonder with the WorldWide Telescope whether your examples actually play that well, with being able to have an information space that has a lot of density, a lot of detail on the bottom, and you can play on the idea of the scale on top. It seems like that.
>> Michael Docherty: That's something interesting to explore. We've got, as I said, the near and far experience, so with the WorldWide Telescope we want to keep it as reasonably far as possible because that's what you want to look at. But when you come near, we haven't yet decided; with the solar sky we are going to give people local information and they'll do stuff there, and then it'll be just like a really big screen in front of you and up there will just be a tour running or something.
>>: [indiscernible] outline of the planet, say of Earth, looking out, and people can track with Earth relative to the rest of the worlds, and you could play on that idea, you know: you are looking at something close and you have a vision of something essentially far. In some ways that's what you are doing here.
>> Michael Docherty: I think, what we've done, the testing we've done, and we've also done the NASA moon landing, they've got all that video and we had that running one afternoon and that also looked fine. It was great at that scale. Essentially, you've got a cinema problem, except then you've got this blended thing in the middle. We were concerned even with the reef about when the whale swims across that blend, so what we've done is make the behavior so that it doesn't [laughter], but the whale shark, despite that, does sometimes go across the middle, and nobody seems to notice, because it's not actually 90 degrees, I should say; it's 110, slightly up and down. It seems to be all right. We did experiment with what's called anamorphic
projection, where you can flatten things, so you project into that corner, and we did develop this with some honours students. You project onto those three planes there and it reads as if it's a flat plane. We did do that, but then we found that we didn't really need to. In fact, it had an odd effect on the reef because it made it just look brown, oddly enough, when you flattened it, because as you
say, it's all water and you are fading off into the distance anyway with water. Then when you
flatten it, it just looked odd.
>>: I also wonder sometimes if we get more critical about those issues than the
general public does, right? I find that sometimes even when we do some of the WWT stuff in
the projection where we notice all of the little cracks and everything. And they don't see it.
>>: The reason I asked the question is because I've found for a lot of these experiences there's always a sweet spot, but it seems that it's fairly large and the public is very forgiving. I think we benefit from years of being trained, for instance in the cinema, seeing movies off axis. Our brains are perfectly fine with it and you don't really notice it after a while, so I think
the same phenomenon probably goes in these experiences. They are fundamentally wrong, but
they are believable enough.
>> Michael Docherty: I think you are right. Despite all of the fiddling we did and all the subtle
variations it probably made no difference. We didn't actually do any real experimentation
where we tested it other than we did it by eye in our development lab which wasn't that scale
when we did it. We had four panels and then we had eight and we played with it and we made
some decisions, but you are probably right. It probably makes no difference at all. It's very
forgiving in that environment, as you say. And also the nature of this: the reef doesn't have strong perspective lines like a building or a city streetscape or something, so it's probably much more forgiving. And also, you're right; even if it looked odd
people would still forgive you. It's like when you take a photograph and buildings lean in, that effect of perspective; we suspend disbelief. We accept that the buildings aren't really leaning
in. We just take it for granted because we have spent so much of our life just looking at those
sorts of images these days.
>>: It might have an impact on some of the stated goals that you have set if you are actually
looking at the data set where lines matter, connections matter. When looking at patterns and
things like that all of this perspective stuff starts to come back in because then you are basically
fooling people and they have to be standing in the sweet spot. We created a bunch of these
things like the telescope in the dome and once we started showing link graphs and started
doing information [indiscernible] and where the nodes and the stretch lines are there and the
connections between the stuff that's here and there, I think people in your setup will -- unless you keep it all planar, I think it's going to be very difficult; not as easy as some things.
>> Michael Docherty: I'm sure you're right. The model we've just dealt with is big open space.
>>: You picked a very good scene for this point. This is a great scene for this.
>> Michael Docherty: That's right, and I think the telescope will be fine too, and we are doing stuff which is turning around the telescope and coming back to Earth and then, when we get closer, blending in this laser-distance, lidar data that we got from the military for looking at remote areas and so on. Again, the scale of that perspective doesn't seem to matter too much. I think once you've got straight lines you do get a problem.
>>: Attempting to see if you try to use the Kinect to try to see where the groups are and if
there is one major group and you are just slightly [indiscernible] point of view [indiscernible]
the projector can adjust to where people are.
>> Michael Docherty: This is something we might explore. We have got laser detectors sitting in the middle of the wedge, in the two wedges, and then back out onto the stair. We
don't particularly use them, but we can detect that there is someone there and how many. We
haven't done anything with that yet but it is there. I haven't come up with a reason to use it,
but maybe with the Kinect movement of things and knowing that there are five or six or ten
people then we can perhaps start to do things. I'm keen to make that space in there sort of in
front of it active space, but I don't know how to do it.
>>: Do you have a strongly onboard system, the sweet spot, right? People will tend to actually
go there if there's a [indiscernible] the action as well by [indiscernible]
>> Michael Docherty: If we get the laser strong enough we can have it so when they move out
they will get shut down. It's the gamer in me, you know.
>>: I missed in the beginning you said [indiscernible] do you mind sharing some of the details
on how many people work on this and how much did this whole thing cost?
>> Michael Docherty: Like I said, the physical kit cost about $5 million. With people it came
closer to seven. There was a lot of discussion two years out. We opened at the beginning of
this year. It was extended in October of last year and then we did all of this testing and further
development and my team worked in the new building before it was sort of officially open.
They made room for us, getting all the networking going and all of the rest of it. There's a
network room above which has got all the machines in it, you know, the double cold thing, lots
of cables running everywhere. You can go in behind. This is a walk space about shoulder width
that you can go in behind because you have to get to all sorts of things. For the reef I had six
people, so two C coders. There were two animators, modelers and a couple of people who
were specialists on the reef and fish ecology and so on and there was someone whose job was
to make sure that we coordinated with the research people and the museum people got there
occasionally and they vetted what we did. You know, those fish don't bend in the middle. They
bend at the tail, all of this sort of stuff so we would get all of the behaviors correct. We had
external collaborators and we had those teams. About a dozen of the species were outsourced
initially to China, came back and then we spent some time fixing them up and adding more stuff
and so on. We were hoping to get a lot of this, either buy them in or have them premade but
that didn't work out so well because some of the stuff we got back was not good enough for
what we needed and so on. Plus we had to put all the AI in the behaviors, and so doing the
model was only half of the job. With the physics playroom, there were essentially two people
on that. There was Andrew, who spends his entire life standing up at his computer coding, and one of his doctoral students from [indiscernible] National University helping him. There was
energy project called E Coss [phonetic] which I didn't show which looks at the energy of the
building and has live feeds and whatever. There were about four people on that. The history
wall project had initially myself and two interns from Singapore and then we had two other
coders on it and eventually just one. And I had a video specialist guy.
>>: [indiscernible] the reef you said that six people. Did that include, did you start from having
kind of a distributed display figured out or did that include all of the infrastructure of the
[indiscernible] blending and all of that?
>> Michael Docherty: When we started it was literally a screen. Let's make this 3-D world a
virtual reef and then as we got the kit, as we got the early versions of the panels, we had them
set up in the development space and we sort of put it across a space like that, at that height, to see what it looked like, but we didn't have the projectors, and then we got some projectors, and there's
a whole lot of that. And there certainly was a point where we were waiting for the whole thing
to be built and there was this moment of is it going to look all right because we didn't know
what it would look like at that scale and whether that corner would be a problem, whether
those resolutions would be a problem and so on. So yeah, that was a bit of hit and miss along
the way. It was fairly iterative. A lot of the work was just getting, putting more species into the
environment, getting the behaviors working. You wouldn't have noticed it, but oddly at the end
of the day most of the fish are distributed everywhere because everybody is touching the things
and doing things and that causes them to flee or to do something else and by the end of the
day they are just out of their zones. Those little clownfish are supposed to be near the anemone; that's where they start. So it's an interesting effect, if you allow people to interact with this huge aquarium, that the fish almost evenly distribute, which causes a
problem because if the sharks come through, then they all run away again. It's odd and we
have zones where fish would normally live and stay, but then they end up being pushed out of
that and so there's some odd things that we could tweak in terms of what really happens, but
yeah. It was a fairly iterative process, but a lot of it was just about getting the content up,
getting the coral in and rendered was a big issue because it just takes so much grunt to get all
those tiny little -- I mean we had to experiment with how much detail we needed and all of the usual things for a graphical display.
>>: How long did it take?
>> Michael Docherty: About 18 months for the reef. Some of the other projects took less time.
The data wall took maybe about six months. And a lot of that was the interaction issues and
then until we had the kit configured we didn't know how much we could do, so there was still
some evolution there even as we opened, and so on. But the reef was the main opening event
and so it was a lot of effort to make sure that worked, and a lot of tension and [laughter] concern.
>>: Also there’s projections on the backside, so they also have spaces on the back.
>> Michael Docherty: Let me see.
>>: So they have the big one, the wedge and then.
>> Michael Docherty: There's the wedge, but then there's the other side walls. That's on one
of the walls, the physics space. Were you here for that? Actually, that's a computer image, so you walk in the door and that's what you see at the beginning of it, and around
the other side is the reef. It's a very public space and it runs, it seems pretty seamless. It's
been running all of this time without any fault. Occasionally a panel, we've had to recalibrate
the panels a couple of times and there's been some firmware updates to the panels and such,
but they do that [indiscernible] so it's running from 9 AM until 6 PM usually and we have to
keep it running from 10 until 4 because that's when the school groups come in, visitors
whatever. The Vice Chancellor loves taking international people through here.
>>: I also wondered about the one where you've got the multiple displays for anyone to
connect to. Was it that one?
>> Michael Docherty: That one. There's projectors there that you can connect to and this one
as well, but there's a bit of set up through the Cube team, because you need to get it through to
the server to come back onto the screens, so this is where some of my students tried some of
their games if they have a game that makes sense in this environment. This one doesn't require
permission, although we try to vet it as much as possible.
>>: Sorry I missed that one.
>> Michael Docherty: That's on the upper level, so it is the wedge, two stories, and then both
sides. On the bottom it's standard, and above we've got that, and then on the other wall we've
just got blended projectors. Just to the left of that image are these bench seats, Cube seats,
and people sit there all day. This whole space, I should say, is really inhabited by students. It's
just full. My son goes to the University and complains that if he doesn't get there by 9 in the
morning he can't find anywhere to sit and work. It's a really good public space and students
love it. You can see them sitting there in the spaces; this was a recent photo. There aren't too
many in it, but there are always students around.
>>: What will become of this when you run out of funding?
>> Michael Docherty: I don't think we'll ever run out of funding for just keeping it ticking over,
because it's the Vice Chancellor's project and it's got so much interest, so it will always be a bit
of an icon. The issue is the resources to put new content on, which is always undervalued and
under-resourced. Everyone complains that the kit cost $5 million but all of the content you got
for only 1.5 million dollars, and I was just seconded from my faculty, so there were a lot of
embedded costs there that no one added up. My 16 people were all separately contracted for
that period and paid.
>>: There is nothing more heartbreaking than seeing someone [indiscernible] awesome ten
years ago [indiscernible]
>> Michael Docherty: Like the information environments program, yeah.
>>: Especially interactive things. You go to a museum and, you know, you can tell it used to be
awesome but it's been sort of left there to decay.
>> Michael Docherty: Interestingly, the Queensland Museum has this reef project running just
as a projection, about a fifth of that area on a wall, with an iPad as a bit of a control, so, you
know, it could be in other places if you had some interaction. It's just got one server running it,
one image on the wall, one projection. But I doubt it will ever be like -- I mean, who knows ten
years away, but right now it will continue to be maintained at the very least, just replacing the
projector globes and so on. There will be an evolution into full LED projectors and things like
that when they can get them bright enough. For the moment there's a team of three whose
job it is, technically, just to keep it running, and they are contracted.
>>: And then the other one was your interaction where you can log in with your swipe card.
You didn't show actually what you would see, but is it just limited to those types of documents?
>> Michael Docherty: At the moment, so it's called CubIT, and you can look it up. You might
not get access if you are not a QUT staff member, because of the things about [indiscernible]
and stuff, but if you are staff or a student then you have your own space you can upload to. It's
like Dropbox essentially, and in fact there is a connection to Dropbox [indiscernible] server,
University [indiscernible]. The software will handle a text file, PDF, PPT, .MOV or .MP3, so yeah.
But there's an intention to do other things. There's also an ongoing project to get that same
software into all of our lecture theaters, so I could come up here with this touch panel and
swipe, which I think is -- why not? In fact, that's why I got involved with the project: I'd been
pushing that for some time, because the current exercise seems utterly primitive. Literally, you
should be able to bring it up; I'd like that. These are all touch panels. We have the same in all
of our lecture theaters. You touch them to control the thing, but that seems a bit primitive, so
why not actually have that CubIT exercise, where I can pull out the PPT and swipe it across, and
I don't have to carry this between lectures.
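To make that format list concrete, here is a tiny Python sketch of the kind of upload whitelist such software might apply; the extension set follows the formats named above, and everything else is hypothetical, since CubIT's actual checks aren't described in the talk:

# Formats mentioned above: text, PDF, PPT, MOV, MP3.
ALLOWED_EXTENSIONS = {".txt", ".pdf", ".ppt", ".mov", ".mp3"}

def can_upload(filename: str) -> bool:
    """Accept a file only if its extension is on the whitelist."""
    dot = filename.rfind(".")
    return dot != -1 and filename[dot:].lower() in ALLOWED_EXTENSIONS

# e.g. can_upload("lecture3.ppt") -> True, can_upload("demo.exe") -> False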
>>: In the future as well, I mean, people have [indiscernible], so going back to the reef
experiment, like engaging students more in active problem-solving and inference, maintaining
[indiscernible]
>> Michael Docherty: I'm involved in another project called the chemistry world, and that is,
again, university chemistry. There is a design where at each of the individual panels I can be
doing some chemistry -- at one stage it's an industrial process -- and then I can bring it over to
the next person, who is going to do their bit of it, and so on. We've got some ideas about how
you can actually engender more understanding by doing chemistry tutorials in that sort of
realistic, as-if-you've-got-a-chemistry-bench way. You pull things over, you touch things, you
mix things and whatever, and you see the molecules above, and so on. That's ongoing. It's
being prototyped and vetted and we are just waiting to develop that one, so that will be there
soon as well. The interesting thing about all of that is the three technicians who run it and
make sure everything is working -- if a panel switches off they switch it on again, that sort of
thing, and make sure the calibration is done. But all of those apps we've got on a pad, and
sometimes they are just standing there, with the Vice Chancellor present, and they touch
something and it shuts down everything and brings up the next thing. The reef takes about
two minutes to load up and for all of those things to happen, so we had to write an app that
brings these curtains across and pulls them back again, so when the Vice Chancellor is there he
doesn't see all that computer speak zipping up the screen. Literally, we can switch any app
onto any display from this iPad app, and the technicians are there doing it and testing it; they
go downstairs and they do it. That was one of the things we had to do, the whole
command-and-control, and we've got one server whose job is just doing that.
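A minimal Python sketch of that command-and-control flow, with all names invented (the Cube's real control software isn't documented in this talk): the controller hides the wall behind the curtain app while one program shuts down and the next one loads.

import subprocess
import time

def show_curtain():
    """Placeholder: slide the curtain graphic across the displays."""

def hide_curtain():
    """Placeholder: pull the curtain back once the new app is rendering."""

class CubeController:
    """One server whose only job is switching content on the wall."""

    def __init__(self):
        self.current = None  # process handle for the app now on screen

    def switch_app(self, command, load_seconds=120):
        # Hide the wall so visitors never see the "computer speak".
        show_curtain()
        # Shut down whatever is running now.
        if self.current is not None:
            self.current.terminate()
            self.current.wait()
        # Launch the next app and give it time to load
        # (the reef takes about two minutes).
        self.current = subprocess.Popen(command)
        time.sleep(load_seconds)
        hide_curtain()

# e.g. CubeController().switch_app(["./virtual_reef"])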
>>: [indiscernible] to be on the telescope is there any Microsoft dialogue happening
[indiscernible] Australia [indiscernible] large-screen people?
>> Michael Docherty: A little bit. Mostly we've come directly to you guys here, but there's been
a little bit through John Warren. He effected the initial introductions, and he works with one of
my colleagues, Paul Rowe, who runs the Microsoft eResearch Center for us, so yeah.
>>: But not like Jeff [indiscernible]
>> Michael Docherty: No. Basically -- I don't know where this came from, but one day my Dean
says, Michael, I need you to do this, and then Paul said, I'll introduce you to John, who is going
to get you connected to these people, and then you've got to go and see them, and so that's
what I'm doing [laughter]. But it's an exciting project, because I really am interested in
exploring these interaction ideas, and this content is going to let me do some of that. We've
got the team: the research visualization team has four people, three of them top coders, so
we've got a team of people and we've got some more money if we need to bring in a C person
or something like that.
>>: [indiscernible] it will really be interesting to see if you can use that technology
[indiscernible] that far away [indiscernible]
>> Michael Docherty: I'm interested in exploring all of the levels of interaction. All of the space
down here -- it would be great if I could use that in some way. We did do some of that; there is
an app I got one of my students to do which allows you to touch and find things, and it
interacts there. We just never released it. It would be great to explore some of these ideas,
these interaction things.
>>: If you get the chance -- I don't know if you are familiar with the [indiscernible] project at
the University of California Santa Barbara? It's a humongous four-story spherical display -- it
looks like the thing from the X-Men -- with I think 18 4K projectors, including all sorts of weird,
and by weird I mean awesome, spatial acoustic audio. The sphere itself is a translucent
material with little holes, so the audio carries very well. They have lots of spatial
[indiscernible]. They're doing [indiscernible] large-scale, immersive kinds of experience
visualizations with that.
>> Michael Docherty: That is interesting. What do we do with these spaces?
>>: They have the same question there, the same problem. And so you guys might have a lot
of stuff to talk about.
>> Michael Docherty: And how legitimate is it? I mean, you can do all of these things, but then
you don't really test whether it's a good or a bad thing; it's just how it is. There's a push and
pull between the PR side and the interaction and design side. This is about a third of what I
had on the table to start with; we didn't go a lot of the ways I wanted to go with this, for a
whole lot of reasons. So there's lots more we could do with this even as it is, and I don't think
we really understand large-scale touch interaction, even with the tables. Mind you, you can get
a couple of people on them, but it's got to evolve. Now we don't think twice about using a pad
or a Surface, but three years ago it just wasn't there. I think we use new technology in old
ways until we get used to it, so I'm still waiting to see. To some extent, what we're doing with
this technology is almost trivial, but it wasn't set up to be a research thing. We do have a
research center a couple of floors above this where we've got four panels like that, and then
another four the other way. We've got a 3-D planar system. We've got a g-speak gesture
system. We've got a bunch of things in there where we do our research, and we've got Masters
and PhD students doing projects, and that's where they do it; if it gets to a point where it needs
this scale, then we can put it up here. I did for a while have one of those screens in front of me
with my keyboard here, and that was my monitor, just to see what that meant for the way you
behaved. I only had it for a few months, but it just changes how you think about the data that
you are using, because it's like this digital desktop; you just stop using the keyboard, and it is a
bit overwhelming.
>>: [indiscernible] it's not well supporting [indiscernible]
>> Michael Docherty: Not really.
>>: [indiscernible]
>> Michael Docherty: Cheers. So we are probably done, are we?
>>: We're done.
>> Michael Docherty: Good.