>> ANDY WILSON: So it's my pleasure to introduce Bruce Thomas. So I've
known Bruce for a little while -- several years, I guess. He's actually traveling with Ross Smith, the gentleman over here, who I guess will be sharing part of the talk today. Bruce has been active in many different fields of interaction, including wearable computers, augmented reality, virtual reality, CSCW, and tabletop display interfaces, so if you've traveled to any of these conferences, you've probably bumped into him at some point. So a wide variety of interests. He's the current deputy director of the Advanced Computing
Research Centre, director of the Mawson Institute Spatial Augmented Reality
Visualization Lab and director of the wearable computer laboratory at the
University of South Australia. And Ross is the codirector of that wearable
computer laboratory. Bruce is a senior member of the ACM and a visiting scholar with the Human Interface Technology Lab at the University of Washington. Yeah, so he's been in Australia since 1990, but before then he worked at NIST, so he has a bit of an American thing going on. He works on a lot of different things, which we hope to learn about today. So thanks.
>> BRUCE THOMAS: Thank you. So thank you very much for the opportunity. So
let me kind of explain where Adelaide is. I took a job in Adelaide and I had
to go look on the map where it was. So it's down there in south Australia.
If you know anything about wine, Adelaide is the center of the wine regions in Australia and is very proud of its wine. So when you come -- if you come to
Australia, give me a call and I'll do a talk. So our research areas, we work
on augmented reality. The roots of our work are in wearable computing. And
basically I saw the connection to those early on because I just thought it
would be -- I just said, God, it would be great if you could walk around and
see things outside in augmented reality. And I didn't want to work inside so
I thought I'd go run around and do some stuff outside because unbeknownst to
me, Steve Feiner was doing the same thing. So at least that's where our early work is.
Another thing that we work in is novel interactions and we'll talk a lot
about that today. Over all, I guess the science that we do would be HCI
science. You would say that. And we're moving into visualization. So we
work with a range of people. We have computer scientists. We work with
psychologists. We work with designers. And you'll see this in the talk that
we have a real focus on designers. Adjuncts. We're also very interested in
getting some of our ideas out, and we're working with companies to try to -- we've commercialized a number of ideas -- and then Ph.D. students and a whole swag of undergraduates. So the four major areas that we work in are: mobile outdoor augmented reality, which is probably the oldest area we've been working in; distributed and collocated collaboration tools -- that work sort of ebbs and flows, and I actually have two Ph.D. students working in that, associated with spatial augmented reality.
What I'll talk about most of the day is this large-scale indoor augmented
reality. And then we're moving into this area of visualizing large graphs.
When we say large graphs, we mean like a hundred thousand to a million nodes.
The university is just now part of an $88 million big science project, and so I'm
going to be leading the visualization effort in that. I'm not going to get
$88 million, but it is actually -- it's going to be fun because it's going to
be a five-year project and potentially going on to a 15-year project. And so
it's kind of nice to know where your future is for a little while.
Okay. So one thing I want you to think about is you want to design
something. So say you wanted to design an appliance. You wanted to design
an operating theater. Or command and control center for submarines. In
Adelaide, they actually fit out submarines. So what are some similarities between these? Well, in my naïve mind, I think of something called the design space, where the artifacts and ideas of this live. And so on one side, what you have is these virtual designs. These can
be virtual reality, CAD, whatever. It's all malleable, flexible, easy to
work with.
And then eventually what you're going to do is you want to make a prototype
of it. Because you can touch it, you can feel it. And what's important
is you get client understanding of it. Because you can look at something on
the screen, designers can translate something on a screen into reality, but
the person off the street can't. So imagine you want -- somebody's going to
sign the check for $10 million or something. They want to understand the
design. So what we want to do is somehow marry both these ideas
and we think in the design space, our research kind of falls in this. What
we want to do is leverage the capabilities of the virtual but have the passive haptics and the nice understanding of the physicality in a prototype. So
this is sort of driving a lot of our thoughts and it drives a lot of our HCI
design because we want to take advantage of both these. Okay.
So we have this laboratory. We call it the UniSA Holodeck. We're able to
use Holodeck because we're sticking UniSA on the front of it. And so it's for projecting onto physically large objects. We did actually -- interestingly, one of the first things we did was actually a small tabletop
thing so we had to hang a lot of pipes down to get the projectors close
enough. So it's physically a large room. It's 14 meters by eight and a half
meters by four meters tall. Lots of projectors, computers, and lots of
things.
But what's really interesting is it actually has a door. So we can load
things in and out like a movie -- not a movie theater but like a theater.
Like a rock show. So you could build large white props, bring them in. We
project against them, take them out. So that was the concept.
If you look at the ceiling, it's rigged like a -- like actually a theater.
Okay. So for the design -- I'm giving this as kind of a motivation for where we're going -- we want to do this for design, and we want to be able to change the design in realtime. One of our goals is to develop tools that are rich enough that you can actually do CAD-like operations on physical artifacts. And if you don't have a mouse and keyboard, you still want to do this precise manipulation, and more importantly, you want to specify things. So that's kind of where we're driving at. We haven't gotten there yet.
So we also want to integrate it into design process. Because one of the
things I've observed over the years is computer science is great at coming up
with a tool but a lot of times it's standalone. So it's like, yes, you can
use this, but you have to go to someplace else and translate all your data
and do all this different stuff. And the effort to go and do this is not
worth the benefit.
A simple example is they built a large petrochemical visualization center in South Australia -- we dig a lot of things out of the ground. But with the time it takes to walk three or four blocks over from Santos, load the data in, and make a decision, they just say, well, we don't have time for that. We'll
just do it the old way. So you have to do it within the way they work.
So fundamentally all the data has to be transparent and it has to be part of
the work practices. Ultimately, I think if we get this right, our dream is
that this potentially could have as big an impact on the design process as 3D
printing. When I talk to designers, they say, yes, this is what we want.
It's part of the whole process.
So the other thing you can do is you can test for ergonomic properties,
people that are in front of it, get them to use it, and so we have one early
product that we made with this company Jumbo Vision. And it's actually -- it
uses like a couple percent of the functionality of our system, but what it
does is you have a large area and it projects large table outlines on the
floor.
And so what happens is, so they build command centers, they build call
centers, which generally have large desks, like four meters long, and these big curved desks. And so what they do is they pass A3 paper back and forth, and over three to six months, they settle on a design. And this is -- Jumbo Vision is losing money because this is just a long process.
So what they do is they bring people in and, using basically an OptiTrack on tripods with the Christmas tree of markers on top of it, they can pick up these full-sized virtual tables, which are just projected on the floor, and move them around. And what takes three to six months, they can do in two days. So it saves time -- and the customer gets exactly what they want. So basically, what
happens is you get customer understanding. So this is what we're driving at.
So I think this kind of validates the idea in a simple use case but that's
the idea.
We're also -- we want to design tools to support the designers. And we've
been looking at things, projecting on to guide manual tasks and -- let's see.
Let's just go on.
So we've actually been installing these in factories and things and trying to
work with this. So let's get a little -- let me get my mouse back.
Damn you, Steve Jobs. This is a little aside: we go to these conferences all the time and somebody would plug in their laptop and it wouldn't work, and everybody would -- we'd curse Bill Gates, like saying, he didn't invent VGA. I can't get it.
There it is.
>>: This video demonstrates physical virtual tools. Our new user interface
methodology for spatial augmented reality. We allow our designers to
digitally airbrush artifacts using a virtualized stencil. Three tools have
been developed for this application. The airbrush functions in a similar way
to its real-life counterpart. Digital paint leaves the airbrush in a cone-shaped volume. This is implemented by drawing the scene from the point of view of the airbrush to a frame buffer object, encoding the UV coordinates of the artifact that were hit by the paint. We then iterate across the frame buffer object and update the textures. This allows us to achieve similar effects to a real airbrush. Spraying from a distance causes a larger area on the artifact to be painted, but with each point receiving less paint than if it was painted from close up. Applying the paint at a grazing angle to the object causes a larger area of paint to be applied. Introducing the stencil tool allows the designer to airbrush onto the artifact while masking areas from
receiving paint. The stencil operates in two modes, either the stencil shape
is the mask or the tool is the mask with the stencil shape acting as a hole
in the tool, allowing paint to pass through.
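To make the cone-spray idea above concrete, here is a minimal CPU-side sketch in Python: paint deposited per surface point falls off with distance, and an optional stencil mask in texture space blocks texels. The real system does this on the GPU by rendering from the airbrush's viewpoint into a frame buffer object; the arrays, parameters, and falloff used here are illustrative assumptions, not the actual implementation.

```python
import numpy as np

def spray(texture, points, uvs, brush_pos, brush_dir,
          cone_half_angle=0.3, paint_per_shot=1.0, stencil=None):
    """Deposit 'paint' from a virtual airbrush onto an artifact's texture.

    texture   : (H, W, 3) float array, the artifact's accumulated paint layer
    points    : (N, 3) surface sample positions of the artifact
    uvs       : (N, 2) texture coordinates in [0, 1] for each surface sample
    brush_pos : (3,) nozzle position;  brush_dir : (3,) unit spray direction
    stencil   : optional (H, W) mask in texture space, 1 = paint allowed
    """
    h, w, _ = texture.shape
    to_pts = points - brush_pos                      # nozzle -> surface vectors
    dist = np.linalg.norm(to_pts, axis=1)
    dirs = to_pts / dist[:, None]
    inside = dirs @ brush_dir > np.cos(cone_half_angle)   # points inside the spray cone
    # Farther points get less paint: the same shot spreads over a larger area.
    amount = paint_per_shot / np.maximum(dist[inside], 1e-3) ** 2
    px = (uvs[inside, 0] * (w - 1)).astype(int)      # splat via UV coordinates
    py = (uvs[inside, 1] * (h - 1)).astype(int)
    if stencil is not None:
        keep = stencil[py, px] > 0                   # masked texels receive no paint
        px, py, amount = px[keep], py[keep], amount[keep]
    np.add.at(texture, (py, px), amount[:, None] * np.array([1.0, 0.0, 0.0]))
    np.clip(texture, 0.0, 1.0, out=texture)
```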
In addition to the shapes provided, we allow the user to create their own
stencils by drawing on to the stencil tool with the airbrush. This gives the
system more flexibility as the designer can create their own stencil to
accomplish certain tasks. Stencil creation would be difficult to achieve
without the use of physical virtual tools. The stylus tool is used for annotation directly onto the artifact. For example, a designer might make notes for modifications during a design meeting with a client. Annotating directly on the object can improve the dialogue between designer and client, as the client can verify the changes that are presented visually.
We placed user interface controls on to the stencil tool in addition to the
stencil shape itself. Pressing the buttons on the left of the tool allows
the user to change the mode of operation. They can select a new stencil
shape, change the paint color --
>> BRUCE THOMAS: Oops --
>>: -- invert the stencil --
>> BRUCE THOMAS: Don't know what happened there. But if you noticed on the
back of the tool, the actual shape of the -- and so the whole idea is what
we're going to do is bring into the -- so all the tools are sort of -- you're
not using a display. There's no place -- in this environment, there's no
place to put a menu. There's no place to put it. We want to actually have
it -- all that virtuality on the tool itself.
>>:
[Indiscernible].
>> BRUCE THOMAS:
Oh, we stick an OptiTrack on the --
So I have this -- oops, this mentality of like saying well, I'm not really
worried about the sensing, so we cheat on the sensing as much as possible and
let somebody else deal with it.
>>:
[Indiscernible] or the pallet.
>> BRUCE THOMAS: Yeah. Just a little Blu Tack. So this whole idea -- so this was a big part of Michael's thesis: how do you kind of integrate all these tools in the space that you -- he came up with a nice theoretical
framework for it.
Okay. So here's -- I'll just show a little bit of this video. So the idea
with this is we're interested in using this for interior architecture and so
getting people to kind of quickly layout cabinets and move things around and
so he's doing it with a keyboard. And it's -- it's difficult because there's
these great operations. [Music].
>> BRUCE THOMAS: So then we just [indiscernible] -- he doesn't show it with this, but we came up with this little tool so you could actually work with the kickboard, because you don't actually want to crawl around on the ground to do that, so we made a special tool you could manipulate the kickboard with. [Music].
>> BRUCE THOMAS: And then the -- the geometric part is pretty standard two-handed interaction, but then we were wondering how we can change things like the properties, like the color and textures and stuff. Now, one of the things a designer said is basically what they're going to do is they're going to give you a set of -- so you don't have an infinite choice on this. They say, here, play around with these, and so by using a -- just a simple touch thing, you
can cycle through the different options that you have. [Music].
>> BRUCE THOMAS: So what we want -- and you can touch the handles and the
whole idea is we want to minimize the user interface and allow them to
actually have a lot of flexibility.
The second thing is how do you save this and replay it? So there's this idea of a swatch, so you get a little thing, and then you can save it to -- actually, when you touch it, you can flip it between the two swatches, and then once you've made one, it will show you how you do the simple little button interaction for saving. You flip it over, and you slide it over. So it was just a bit of an experimentation on -- [music]?
>> BRUCE THOMAS: Oh. So it was just how do you -- it would be nice if I
knew how to run my own computer.
So it's like, how can you support this design process with a minimal set of tools? Okay.
So the last thing I want to show on this is we looked at some photo realism.
This is a short video here. So what we want to do is look at supporting -- oh,
this is not the right video.
>>:
It's cool anyway.
>> BRUCE THOMAS: So this was an experiment on depth perception. He ran an
experiment to see which of the depth cues actually really work in SAR.
Sorry. Add a thickness to the wall. If you stand over here it looks really
bad. So it is view dependent within a limited --
>>:
These things always look better in [indiscernible].
>>:
Yeah.
>>: Something uncannily good about it. Probably doesn't -- probably because
it's monocular view. You strap the tracker right to the video camera.
>>:
[Indiscernible].
>>:
What happens if you see it in stereo?
>> BRUCE THOMAS: We want actually -- there's some stereo projectors. We
want to see what you do on a non-planar surface or a non-perpendicular
surface. We want to do that experiment. We just haven't done it.
Okay.
Let me go find what I wanted to show you.
Sorry.
>>: [Indiscernible]. Why can't you use additional display surfaces? You could have done that on the furniture itself.
>> BRUCE THOMAS: Well, I think what we're trying to do is we're going to say if you go in, it might not be a convenient surface, and you don't want to display on the thing that you're using because you don't want to disrupt what that looks like.
So this is the -- so we had somebody -- so you have these classic ray-tracing objects, Stanford bunnies, spheres, and stuff. The actual stills
look much better than the video. But when you look at it, it's like saying
how do you have to alter the graphics part to make this work in SAR. And
it's no one thing. There's a whole bunch of little tricks that you have to
do to make it work. And it's nontrivial. And if you get it wrong, it just
looks awful.
And so he got it, with a bit of haranguing, he got it to 15 frames per
second.
>>:
[Indiscernible] from all directions?
>> BRUCE THOMAS: No, no. This is just using the NVIDIA ray tracing, the realtime ray tracing engine. But getting it in the correct view, because -- I could talk more about this, but the whole idea of doing view-dependent SAR is kind of a tricky problem. Okay.
Okay. So the kind of last user interface thing I want to talk to you about
before I hand it over to Ross here is we've been -- James is just finishing
up. We've been working on this kind of idea of ad hoc slash ephemeral user
interfaces. And the idea is what we want to look at is like saying your
standard interface, basically the application dictates how you use it and
there's some customization to it and what we'd like to do is you come in here
and you're going to say I want to tell the interface what I want to do. So when
I do my slides, this means forward, this means back, this means start at the
beginning, or whatever it is. And I need to communicate this to the
computer. Okay.
And so we looked at it -- there are three things that you want to communicate with this. One is you want to have new controls, i.e., I want to drive PowerPoint. Two is I want to manipulate data within it, so I don't know exactly what -- so the first one is the easiest because I just connect things to existing controls. The second one is I don't know how many objects I have to manipulate. And the last one, which for some reason was the first one we worked on, turned out to be the most difficult: new logic. So you want to specify some behavior in the application.
So he came up with a framework to do this.
So let me have -- so this is --
>>: This video demonstrates our system support [indiscernible] on ephemeral user interfaces. To illustrate how the system seamlessly integrates with similar systems, this example uses a program that controls [indiscernible], as is visible in the upper right of the screen. To create a control, [indiscernible] button and then select which group of functions to explore. In this case, they select PowerPoint and then they select the next slide function. The system then presents relevant [indiscernible] options based on that function selection. The user selects button and then drags [indiscernible] to define that button, and then presses the action button to confirm their sizing. The user can now use this control to navigate to the next slide, as is visible in the upper right-hand screen. The process can then be repeated for the previous slide function. Again, the user selects the group, the function, and the input type before they create another button to go to the previous slide.
Once controls are created, the user can move or delete them by holding the
action button, at which point the controls can then just be dragged around.
To delete a control, that control is simply dragged off the table into the
red region.
To illustrate functions that require parameters, the user can repeat the
process, but then, choose the go to slide function and then select the slide
to navigate the slide show.
To ensure a consistent value, the user can also use a physical object to ensure a value is set. In this case, the user uses a simple [indiscernible] to assign a value. In addition to linear sliders, the system also supports code sliders, as well as partial or complete radial dials.
To create arbitrary input devices, the process is exactly the same. The user selects the function to control and then selects an input requiring an object. In this case, we use the orientation of an object. The user places the desired object on the table and then presses the action button. They are then prompted by the system to rotate the object to the initial orientation before again pressing the button, and then rotating the object to the ending orientation and pressing the button again. The system then interpolates the value of the orientation of that object given its current rotation. This same approach can also be used for 3D-printed controls. For example, once printed, this lever can immediately be used as a functional one without requiring any additional electronics.
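As a rough illustration of the orientation-based control just described, the sketch below maps a tracked object's yaw between a recorded start and end pose into a value in [0, 1] and fires a bound function. The class name, the yaw-only tracking, and the slide-navigation callback are assumptions made for illustration, not the system's actual API.

```python
class OrientationControl:
    """Maps a tracked object's yaw between two recorded orientations to a value in [0, 1]."""

    def __init__(self, function):
        self.function = function      # callable that receives the interpolated value
        self.start = None
        self.end = None

    def record_start(self, yaw_deg):  # user presses the action button at the initial pose
        self.start = yaw_deg

    def record_end(self, yaw_deg):    # ...and again at the ending pose
        self.end = yaw_deg

    def update(self, yaw_deg):
        """Interpolate the current yaw into the recorded range and fire the bound function."""
        span = (self.end - self.start) or 1e-6
        t = (yaw_deg - self.start) / span
        t = max(0.0, min(1.0, t))     # clamp outside the recorded range
        self.function(t)

# Usage: bind the control to a hypothetical "go to slide" function over 20 slides.
goto = OrientationControl(lambda t: print("go to slide", 1 + round(t * 19)))
goto.record_start(10.0)
goto.record_end(100.0)
goto.update(55.0)   # roughly the middle of the range -> slide 11
```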
For more information regarding our system, please consult the paper.
[Music].
>> BRUCE THOMAS: So what was interesting about the project -- well, a lot of things were interesting, but one of them was we had this whole bootstrapping problem. The whole idea is you're going to have a complete conversation with the projector, and it turns out starting the conversation is quite hard. Having a button just to say start -- we sort of cheated on that. So that button on the side was a little bit of a cheat.
But the whole idea is that we -- what we'd like to do is say on the fly, just
set up this little control, do it, and then walk away from it. So, okay, so
the -- one of the things that he did with -- the other thing he did is he
made a little joystick out of a cup with some string and a couple -- and a
marker. We just used the OptiTrack. Like I said, we weren't trying to do
sensing.
The other thing we found out, which he had a lot of trouble with, was he was trying to use the original Kinect SDK and he was only looking at the top half of the body, sitting by the thing. We didn't realize that that actually
broke the SDK until it actually -- but it eventually worked quite well.
So this is a little bit of a video showing his attempt to --
>>: [Indiscernible] the implementation of a theoretical framework to support
ad hoc definition --
>> BRUCE THOMAS: No. What happened? Oh, well. So I don't know what's
going on with the sound on that, but so what this was, was his attempt to do
kind of -- it turned out to be a little bit of programming by example. And the whole idea is what you want to do is -- in this, somebody goes up and says I want to define a tutorial for chemical reactions. So it's a whole question of how
can you define this without really having a user interface in front of it?
There's this little tiny bit of a user interface, but how do you do that?
And it's -- it turned out to be a lot harder problem than we thought it was
going to be, but, okay.
>>:
[Indiscernible] physical button is a long press or --
>> BRUCE THOMAS: In the end what ended up happening was if you made it
totally generic, how do you tell the computer to grab its attention? So in
the end we just said, well, we could do lots of different things, but if
we're not going to make it generic, we'll just do this button. It will just
make it easier. That's how we sort of bootstrapped into the middle of it.
Turned out we really tied ourselves in knots about we can do this and this
and this. We're just backing off farther and farther away from the problem
to the point we're saying, whoa, we're not solving what we want to solve,
we're trying to engineer a solution to some other problem.
I just really liked -- for me I just like the idea of just walking up and
having a conversation with the computer as opposed to the computer telling me
what to do.
So with this, I'll hand over to Ross and he'll talk about his work.
>> ROSS SMITH: So good afternoon, everybody. My name's Dr. Ross Smith. I'm the codirector of the [indiscernible] Bruce. And I just wanted to share four quick project examples with you.
So this work was basically my graduate Ph.D. work, and I was inspired by this idea of clay sculpting, where people use their hands to mold and sculpt shapes. So you can see some examples in the pictures in the top right there, where the [indiscernible] user is using all their fingers to create a physical shape. I wanted to take this concept and bring it into a digital sculpting system. So I was really compelled by this idea and looked around at the sensors and technologies at the time. I know there's been quite a lot of development in this area since, but I really wanted to have something that had tactile feedback, so you got that feel of having something in your hands that you're molding and shaping.
And so the way I went about this problem was to actually develop my own
sensor. And so [indiscernible] around using a piece of foam and
instrumenting the foam so that we could capture the physical geometry and use
it as an input device to a sculpting system.
And so this was the first prototype of making that. And I'll talk over it
myself. So this was the idea of taking a standard piece of foam and fitting
it with conductive foam, so I took a series of [indiscernible] of conductive
foam to make a 2D array. And then layered it with a piece of conductive
fabric that's elastic. As you can see in that extremely short video, you're getting two things. As the user presses here, you can see the
reconstruction of the geometry here. So you kind of get this -- it's
inverted in this case, but you get this equivalent of multi touch and you're
getting a depth. What's interesting about using this method of implementing
it is the foam can take on a number of different forms. In this case, it's
planar, but we could wrap the foam around an object. You could have it on a
chair. We can put it in lots of different form factors. So some of the
things I think are interesting about this is how it's applied. So I'm interested
in medical mannequins so this idea here is looking at it for training
purposes so you can capture -- you have quite sophisticated medical
mannequins at the moment that are used for training medical students and the
idea here would be to take the foam and instrument areas like the abdomen and
we can capture palpation type techniques and you capture the small little
gestures and we can see how far they're pressing in. We can watch the path
that these take and compare an expert versus a novice and these type of
things.
So this idea of using it for robotics is quite exciting as well so this is
adding a sense of touch to robotics. So fitting a standard robotic arm used in manufacturing plants with this sensor -- I think there's a number of
interesting things there. You can detect potentially information about
objects you collide with without using vision, so sort of sense of touch, so
you can say okay this is a pointy edge or a curved edge or a curved surface.
Maybe it's an organic object and use that as a method of input and making
decisions around that.
Novel games controllers -- so I've got an example. I'm going to show you this games controller I've been working on in a minute. And podiatry applications are another area.
So this is another version of the sensor I put together more recently. It's
essentially exactly the same thing, just a bit more time in the construction.
So I've put it onto an Arduino because of the popularity of these devices. So it's basically a shield that you can plug into the Arduino. It's a four-by-four array. You can see the sort of internals of the sensor there, just the black parts of the conductive foam. This is a silver [indiscernible] fabric. It's all bonded together so that [indiscernible] Arduino. Nice little case to make it look a little bit nicer.
So this is just a -- for demonstration reasons.
>>:
[Indiscernible] what's the circuit that you actually have in the --
>> ROSS SMITH:
Technically, it's measuring resistance.
>>: [Indiscernible] resistance changes --
>> ROSS SMITH: Changes, yup. That's one method you could use. You could
use capacitance, all sorts of things. So this silver fabric, which is an
elastic stretchy fabric, is conductive.
>>:
[Indiscernible].
>> ROSS SMITH: That's actually the connection in this one there. You can
see the -- on the circuit board at the top there, there's terminals so the
conductive foam is sitting directly on that terminal. Then there's this
conductive fabric on top of that, so you've got this sort of sandwich, like a variable potentiometer, just in a deformable form.
>>:
Is it linear in its shape?
>> ROSS SMITH: The response? No, it's not. It's not linear. And what
happens is you're squashing all the voids of air out of the foam. So if you
can visualize the shape of those sort of cell-like voids compressing, it's making the shortest path of resistance. And it has a particular -- actually you get two things. You get sort of what I consider a length sensor, which is what this one is. And then when you get to the bottom, you get
this non-perceivable movement and you've got a pressure sensor. So you can
sort of -- you can get two types of interaction out of the one sensor and
then you have an array of them. So yeah.
As I said, the response is not linear.
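As an illustration of how raw readings from a 4x4 conductive-foam array like this might be handled, the sketch below converts ADC values to compression depths through a non-linear calibration table and parses one frame of readings from the Arduino shield. The value ranges, calibration curve, and serial frame format are invented for the example; they are not the actual sensor's characteristics.

```python
import numpy as np

# Illustrative per-cell calibration: raw ADC reading (0-1023) -> compression depth in mm.
# The response is non-linear, so we interpolate through a measured lookup table.
RAW_SAMPLES   = np.array([1023, 900, 700, 450, 250, 120,  60])      # uncompressed .. fully squashed
DEPTH_SAMPLES = np.array([ 0.0, 2.0, 5.0, 10.0, 15.0, 19.0, 20.0])  # mm

def raw_to_depth(raw):
    """Convert one raw resistance reading to a depth via the non-linear calibration table."""
    # np.interp needs increasing x values, so flip both tables.
    return float(np.interp(raw, RAW_SAMPLES[::-1], DEPTH_SAMPLES[::-1]))

def read_frame(serial_line):
    """Parse one frame of 16 comma-separated readings into a 4x4 array of depths (mm)."""
    raw = np.array([int(v) for v in serial_line.split(",")], dtype=float)
    depths = np.array([raw_to_depth(r) for r in raw]).reshape(4, 4)
    return depths

# Example frame with a press in the middle of the pad.
print(read_frame("1023,1023,700,1023, 1023,450,250,900, 1023,700,900,1023, 1023,1023,1023,1023"))
```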
One of the questions I've often been asked is how does it work when you compress the foam extremely hard? What happens is beyond 90 percent compression, the foam takes a long time to spring back. So what I've done there is have a mechanical means of sort of preventing it from going beyond that point.
So one of the examples I wanted to show you -- oh.
>>:
Are you dependent on the rigid back plane?
>> ROSS SMITH: That's a good question. So I guess the first thing you do is
you put a flexible circuit board there. Another approach is to have a
printed fabric version on both sides. So you can have this sort of thing.
So you're not dependent at all. In this implementation --
>>: You're dependent on the form of the foam, actually -- so if you put
something flexible on the backside that just pushes in, then you're not
changing the nature of the foam, therefore, you're not going to be
registering any disturbances.
>> ROSS SMITH: I guess the bigger question then is how are you going to use
it. Depends on the task. So if you have got a two handed task and you're
grasping both sides, part of the interaction might be bending or twisting or torquing or these sorts of things. So it's hard to answer that question without having the specific example of how --
>>:
Like your motivation was clay modeling, right?
>> ROSS SMITH:
Yeah.
>>: And if you wanted to do clay modeling, that's actually a separate
question. Like, how well can you sense off-axis pressure? It seems to be -- even though it is XY, it's kind of up and down. Can you squeeze the foam
and get a different response?
>> ROSS SMITH: So just go back to the first part. The first thing that I
did was create a sphere for this sort of sculpting type application. So I
had a solid inner core, and instead of a printed circuit board, I just had
discrete terminals placed around.
>>:
So you measured gradual pressure.
>> ROSS SMITH: Yeah. And then you're just arranging it. It's still the
same sensor. It's just arranged in a different form. So then you can do, you know -- the motivation was hand sculpting, this sort of thing. And looking at different forms sort of depends on how you would maybe implement this. So this is a sphere variant over here. So this one -- so this was the same thing as the clay sculpting one, except we were inspired -- I actually got my students to say, hey, how can we use this for a game? And we sort of brainstormed up this idea and they wanted to modify a
controller and this was their -- they had a whole lot of concepts. This is
the one they went for. The idea was to attach this sculpting molding sphere
on one side and then have a traditional control on the other side. And these
are the type of things that we're planning on using so you can do some
navigation by having some discrete areas on the deformable surface and then
we had some gestures so you could sort of squash like this and use your hands
like this. And you have sort of clicking type operations.
We had these sorts of goals. One of the things that we wanted to do was try and come up with an interaction that -- it was very difficult to do with a mouse or a keyboard, so it was intentional -- and so we had -- I don't know if anyone has heard of this TV game show called Hole in the Wall. It's basically a UK show. The idea is there's this physical wall coming towards you and there's these silhouettes, and you've got the contestants there and they have to stand in this shape to match the silhouette as it comes over them. So we sort of implemented a variant of that using this controller. The idea was you squash the shape to match what's coming towards you, and then the more that hits the wall takes away from your score. If you get it exactly right, you get a really good score.
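The scoring in such a game could be as simple as comparing the 4x4 depth map from the foam controller against the target silhouette. The sketch below is one guess at that kind of comparison; the tolerance and scoring rule are made up for illustration and are not the game's actual logic.

```python
import numpy as np

def wall_score(sensor_depths, target_depths, tolerance_mm=3.0):
    """Score how well the squashed foam controller matches the incoming wall silhouette.

    sensor_depths, target_depths : 4x4 arrays of compression depths in mm.
    Cells within tolerance count toward the score; cells not squashed deeply
    enough where the wall needs a hole count as 'hitting the wall' and subtract.
    """
    diff = sensor_depths - target_depths
    matched = np.abs(diff) <= tolerance_mm
    hit_wall = diff < -tolerance_mm
    return int(matched.sum()) - int(hit_wall.sum())

# The wall needs a deep square pressed into the middle of the pad.
target = np.zeros((4, 4)); target[1:3, 1:3] = 15.0
player = np.zeros((4, 4)); player[1:3, 1:3] = 14.0
print(wall_score(player, target))   # 16: every cell is within tolerance
```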
>>:
[Indiscernible].
>> ROSS SMITH: There's a number of factors that change the resolution. So
there's density of how many cylinders you put on the device and then there's
the foam density because it comes in different forms, the conductive foams so
with really small voids or really large voids and rubbers and all these sorts
of things. So it's quite variable.
I guess the point is not to make a pressure sensor so anything below one
millimeter you may as well just use a pressure sensor. It was designed to be
an interaction over a region so a deformable squashable surface. So in terms
of that sort of resolution, that's the area I was trying to work in. And so
this is the -- you can pitch off, this is just one side of it, so you can
pitch -- this is what's being controlled by the controller. This is what the
user has to match here. And as the wall comes towards them, they have to
squish the right part and the wall comes over and this was actually quite a
good match there so he got some points for that. And we had some stamina and
other associated game motivations there.
Also one of the areas I think is quite interesting for this is for early
childhood education where kids take -- they learn about squares and different
shapes. There's this classic toy where you take the blocks and you put them
into the object, you actually match the actual shapes to it. Yeah, yeah, so
this is a digital version of that and I think we can have a combination of
digital virtual tools there.
>>:
Very expensive version of a very cheap.
>> ROSS SMITH: Yeah, yeah. So there's nothing too exotic in the
implementation. The conductive materials aren't crazy expensive, but yeah,
there's a market controller so it's like any device, games control and these
sorts of things.
Okay. So moving on to the -- I've got two other similar areas. I'm quite
interested in the idea of using deformable surfaces for design as well. So
this is a combination of two projects, this slide. The first one we were
looking at -- working with industrial designers, and the idea was what they wanted was to come up with a material they could use for sculpting a form,
like putting it into a shape and then projecting on to it. So we sort of had
a -- building on the work that you have seen that Bruce has been talking
about there, we've got a physical CAD model and we've got this physical shape
so what we did was develop a material that had -- it was made of silicone and sort of a wire form that was internally there, so you could make a
physical shape out of it. And it had markers embedded into the surface so we
used IR markers so we could register projected graphics to the region to the
different areas on it.
And the idea was they could take this material, they could chop it up and do
things like staple it together and build an actual form, so I've got a short video of that one here. Have to jump around a little bit. It's called Kweemo [phonetic]. It's a quick mock-up material. So that's the material there. Sort of a few millimeters thick. I'll jump over the building of it. So
this is sort of how you actually use it and you can see he's cut up a piece
of it there and you can do things like staple it together, you can sculpt
around your hand to get that sort of detail and shape so it's -- this is
actually an interesting example because the -- it was quite rough in terms of
the look of the final product but then you can project on to it directly and
you can really enhance the appearance. This is spray painting. But you can
overlay a CAD model directly on to this because we had the matching sort of
forms here. This was already in time lapse. I'll just jump forward. So
here you can see the sort of technology side of it. Under the IR you can see these are ARToolKit markers, if people are familiar with them, [indiscernible] markers, and this sort of grid. So we had four-by-three ARToolKit markers, and then in this example we were talking about turning it into a cube, so the idea is we could paint some light onto it and then we could physically cut the sections out from the material and then fold it up into a new form, but the projected light now stays aligned because of those marker shapes there. Of course, this works reasonably well, but the challenge here was it's only a small grid of markers, so we had four by three and we couldn't cut through the middle of a marker because you would lose your tracking. So you get an idea. You can see the bits they cut away. The spray paint that you put on there stays attached to them.
So the next version, which we've just actually come from our [indiscernible], we've been working with Professor Carter, one of the main key players in ARToolKit, and we've developed a new version of this tracking technology to integrate AR markers in a dot pattern on a material, and we've put it into the same sort of silicone material for sculpting, but we've also put it into fabric. So the idea is you can have this sort of digital draping concept where you can cover a surface, lay it over there, and then have immediate sort of recognition of the material, where it sits in the world, and be able to register projected information with it. So I'll just play a little bit of this video. We use a number of techniques. In here we're using Gray coding to get the form and we're using the dot pattern to do the correlation for the projection mapping. So you can see this is a -- the fabric and there's a matching texture. So this is just demonstrating that you can just sort of grab the cloth and throw it into different forms and you can put folds in.
Another thing that we did here: because there are separate regions of the marker pattern on here, you can actually sort of cover partial regions and put your
hand over it and it will still be able to keep the other areas working so the
texture will stay attached to these regions where you don't see the markers.
Okay.
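As a rough sketch of the per-region projection mapping described here, the code below uses OpenCV to warp a texture patch onto each detected marker region via a homography; regions whose markers are still visible keep their texture, and regions whose markers are momentarily covered simply reuse their last good homography. The per-region texture layout, dictionaries, and function names are assumptions for illustration, not the published system.

```python
import cv2
import numpy as np

def project_patches(frame_size, patches, detections, last_homographies):
    """Warp one texture patch onto each fabric region in the projector image.

    frame_size        : (width, height) of the projector image
    patches           : dict region_id -> texture patch (H, W, 3) for that region
    detections        : dict region_id -> 4x2 detected corner points, or None if
                        the region's markers are currently covered
    last_homographies : dict region_id -> last good 3x3 homography (updated in place)
    """
    w, h = frame_size
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for region_id, patch in patches.items():
        ph, pw = patch.shape[:2]
        src = np.float32([[0, 0], [pw, 0], [pw, ph], [0, ph]])
        corners = detections.get(region_id)
        if corners is not None:
            H, _ = cv2.findHomography(src, np.float32(corners))
            last_homographies[region_id] = H          # remember the latest good mapping
        H = last_homographies.get(region_id)
        if H is None:
            continue                                   # region never seen yet
        warped = cv2.warpPerspective(patch, H, (w, h))
        mask = warped.any(axis=2)
        out[mask] = warped[mask]                       # composite this region's texture
    return out
```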
>> BRUCE THOMAS: So from the design perspective, this was motivated because
if you just want to then -- if you just changed it a little bit, you'd have
to repaint the whole thing and so one of the problems with prototypes is you
make one, you paint it, and then you've got to make a whole new one from the
beginning, and we wanted to sort of get that iterative design cycle faster.
>> ROSS SMITH: Okay. Two other examples. This is some work that we
published at TER. The idea here was another form of -- to support designers
so the idea of using tangible tools and having physical buttons, physical
dials and physical sliders on the objects themselves and using them for the
simulated design process.
So what we wanted to do was have these physical markers but not have batteries and wires -- the traditional sort of wiring of buttons in -- so they can be moved around and we can control them, so this really was all about the technology. So what we did was basically come up with this idea of taking an RFID chip, and we modified it so that we could attach a potentiometer to the RFID system. And so we -- what's nice about RFID is it doesn't have any -- in this passive version, it has no batteries that we need, so we're able to use the energy available to drive our own circuit and use on/off keying to put the current value of the potentiometer across that signal.
The way we were using it was like this: we had a wearable glove, and when you touch the control, you can harvest the energy to drive the tag, and that in turn drives our circuit, so we can turn the potentiometer. We can add
[indiscernible] sorts of things and pass the information back to the
projected system. So this is a neat little way of integrating our own input
controls into RFID without having the batteries and things associated.
This is a cut-away version showing one of the controls that we had so you can
see it's also in a white projected mobile form that we can place around
[indiscernible] and these sorts of things.
>>:
Getting [indiscernible] feedback by changing the RFID value or --
>> ROSS SMITH: Not changing the ID. We're using on/off keying, which is basically turning on and off really quickly to encode information in there, so we put a value into the signal -- it's like multiplexing in some way, so it's a subcarrier.
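For a rough picture of what decoding such an on/off-keyed signal could look like on the reader side, the sketch below recovers a potentiometer setting from the dominant switching frequency of a backscatter trace. The frequency band, sample rate, and linear mapping are assumptions made for the example; they are not the actual circuit's parameters.

```python
import numpy as np

def decode_pot_value(backscatter, sample_rate_hz, f_min=1000.0, f_max=5000.0):
    """Estimate the potentiometer setting (0..1) from an on/off-keyed backscatter trace.

    The timer toggles the tag on and off at a rate set by the potentiometer,
    so the dominant switching frequency encodes the value.
    """
    x = backscatter - backscatter.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    band = (freqs >= f_min) & (freqs <= f_max)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return (f_peak - f_min) / (f_max - f_min)

# Synthetic test: a 3 kHz square wave should decode to roughly 0.5.
fs = 50000.0
t = np.arange(0, 0.05, 1.0 / fs)
trace = (np.sign(np.sin(2 * np.pi * 3000.0 * t)) + 1) / 2
print(round(decode_pot_value(trace, fs), 2))
```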
>>:
So you had a little micro controller?
>> ROSS SMITH: We actually just used a 555 timer because you don't need a lot of intelligence behind it -- you basically use the potentiometer value to tune the 555 timer and then --
>>:
You have enough power to start the timer and then it --
>> ROSS SMITH: Yeah, yeah.
>>: -- keys it on and off. Got it.
>> ROSS SMITH: And it's a very old part -- the 555 timer has been around for a long time. There was plenty of energy there to drive that. Yeah. So we had to do some rectification and other supporting --
>>:
[Indiscernible].
>> ROSS SMITH: Well, it did reduce the range. What's interesting about the
way you use this, we actually -- this is the second round of using an RFID.
We actually detuned these circuits to reduce the range because we wanted to
be able to pick the device up and not have interference with the other
components around it. So we actually had to detune the circuits to reduce the range, so really, the range was never going to be greater than what this
was before we started working with it.
>>: You got to have the [indiscernible] wearing this glove that has the
transmitter, right?
>> ROSS SMITH: Yeah. So it's this -- it has an antenna on the finger and
the circuitry on the back of the glove.
>> BRUCE THOMAS: [Indiscernible] other things that you could do. Put your capacitive sensor on the button, so then when you touch it -- you can do something where [indiscernible] hand you have a larger thing on the back of your hand or a box around it. The whole idea -- this was just the original to see if it worked. Now we all agreed, nobody wants to wear a glove. And we liked the idea of having -- so in the design process we wanted to get something that was tactilely richer, so it actually had real buttons and real --
>> ROSS SMITH:
And the next one is actually our functional one.
>> BRUCE THOMAS: We envisioned you just have this huge case and just pull
them out and mix and match them.
>> ROSS SMITH: So I guess it's looking at other techniques other than
vision. Vision would be another approach to achieving this potentially.
But --
>>:
Well, you're going from [indiscernible] fundamentally.
>> ROSS SMITH: Well, that's the point: you can -- we've just got a [indiscernible] over here, but we had a few other sensors, bend sensors and any sort of analog input control.
>>: So all of those devices are touch devices. I actually have to
physically touch them. Why not instead of [indiscernible] transfer the power
from the hand from the glove to the device directly [indiscernible]?
>> ROSS SMITH: When you say problem, so you could, yeah, have some power
transmitted from the person having a battery on the hand --
>>: [Indiscernible] on the surface and this surface is one contact and the
other is the hand that touches.
>> ROSS SMITH: I guess the key point here was that you didn't have to have
any special surface. We could take our cardboard mockups, you know, these
really simple shapes that we've made out of cardboard, we can project on
them, we can have highly complicated controls and things in there, without machining a special surface. So, okay, we don't love that one, let's make a new one. We'll cut it out with scissors and we'll quickly mock something up, but have quite a rich set of tools to interact with it in terms of sort of the projection, sort of the components, the physical -- we sort of envision that there would be a tool kit of knobs and dials and bend sensors that you can just pick up and use how you please to make your
interface, display surfaces, and you really can just pick from the whole tool
set.
There's been some other work -- Pushpin was one example of another technology where they basically had a planar surface, could have been non-planar, but you put controls into it and have these two pins that [indiscernible] power from outside. The benefit of not doing that is we can have an arbitrary shape. That's not a big investment.
>>: When you say passive glove, you just mean no [indiscernible] right?
It's actually powered.
>> ROSS SMITH: This part here is passive RFID so there's no battery
associated here. The glove, which is actually the reader of the RFID, is powered, but that's referred to as a --
>>:
So [indiscernible] passive glove, [indiscernible] glove.
>>:
[Indiscernible] as opposed to active.
>> ROSS SMITH: Yeah, yeah, yeah, because these come in active form as well.
Probably could rearrange it to make it clearer.
So the final thing, this is just a support bit of hardware that we put
together for supporting projection technology so it's actually just a
calibration idea. So it uses photo sensors to capture projected light for
calibration. What's neat about this approach is we made cubes that are
calibration cubes, and the idea is you can place them on -- say if we want to calibrate the lectern here, you can place them on strategic points and then you can apply Gray coding to capture the base calibration. We also extended this -- we developed another algorithm so we can calibrate a little bit more accurately, with the idea of being subpixel, so we could do an interpolation between two pixels and find a more precise location. But again, the idea of the tool is that you are not fixed to having photodiodes attached to a particular prop. You can place them on the first object you want to calibrate and then you can move on to the next object. So I've got the --
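To illustrate the structured-light side of this, the sketch below shows how a photodiode's sequence of bright/dark readings under Gray-coded column patterns decodes to a projector column, plus one simple form of subpixel refinement using two neighbouring probe intensities. The bit ordering and the refinement formula are illustrative assumptions rather than the published algorithm.

```python
def decode_column(bits):
    """bits: list of 0/1 photodiode readings, one per Gray-code column pattern (MSB first)."""
    gray = 0
    for b in bits:
        gray = (gray << 1) | b
    # Convert the Gray-coded value to a plain binary column index.
    binary = gray
    mask = gray >> 1
    while mask:
        binary ^= mask
        mask >>= 1
    return binary

def subpixel_column(col, intensity_left, intensity_right):
    """Refine the integer column by interpolating between the intensities measured
    when the left and right neighbouring single-pixel columns are lit."""
    total = intensity_left + intensity_right
    if total == 0:
        return float(col)
    return col + (intensity_right - intensity_left) / (2.0 * total)

# A diode that saw the pattern sequence 1,1,0,1 under 4 column patterns:
print(decode_column([1, 1, 0, 1]))   # Gray code 1101 -> column 9
```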
>>:
Or embed them.
>> ROSS SMITH: Yeah, you could. So this video is a couple minutes, I think.
Yeah, so the device itself had an on/off switch, USB charging, and some LEDs for indicating IDs and these types of things, and then it was a planar photodiode on the top surface rather than a commonly used sort of point diode -- that's one of the things we used. Yeah, we've just Velcroed them on in this
case. It was a [indiscernible] concept video so [indiscernible] Velcro them
onto this and we can get our calibration.
This is actual realtime.
>>:
It runs quite fast.
[Indiscernible] mounting problem.
>> ROSS SMITH: In many ways, you need a precise placement of them to go in
conjunction with it, yeah.
>>: So you're very much aware of the size of that thing and you want to know
the offset, how it was mounted with respect to the point that you're really
interested in.
>> ROSS SMITH: Yes, we sort of have a list -- this demonstrates the -- so
this is two projectors that are aligned, and then you saw the alignment as
they snapped together then. But you're right in terms of the placement
having a receptacle that maybe has a double sided tape piece, you peel off
and stick it on.
>>:
[Indiscernible] trying to register the existing CAD model of an object.
>> ROSS SMITH: Yeah. This is based on having a matching CAD model.
>>: So is that really necessary to align two projectors --
>> ROSS SMITH: So the idea, so that console, I think you saw the console
back here. So here, the idea was to have three of these consoles and they're
quite large. They're based on the submarine consoles. Bruce was telling you
about these little layout tools for planning a whole room. If we have one there, one there, and one over there, we're going beyond the sort of projector's throw. So in this case we're using -- we have four projectors set up on these consoles, so we're stitching them together, and the idea is you
blending and all these sorts of things nicely knowing where the stitching
points are. And yeah, you do need more than one projector in this case to
keep the resolution nice and high on the consoles themselves.
>>: Okay. What you were showing was that you were just blending those two
projectors. In other words, we're really only seeing output through one
projector. Or is it two?
>> ROSS SMITH: No, no, that was two projectors you were seeing blended.
I'll go back. I'll just run it because it was quite quick again, but the -- here if I pause it at the right --
>>: Wait a minute. So there's no camera involved in this calibration?
>> ROSS SMITH: No, no camera, just the calibration cubes.
>>: I'm so obsessed with [indiscernible].
>> ROSS SMITH: So here you can see projector one sort of this bit here and
projector two is the second. These are two [indiscernible] image from the
projector, so we already had an approximate alignment from the Gray coding, but it wasn't quite right, which really stands out. It leaves a messy seam that sort of goes like this, but when we applied our subpixel calibration, although the geometric alignment might still not be a hundred percent with the projector, the projector-to-projector calibration is quite well captured. So if I --
>>: [Indiscernible] Do you predefine which one of these sensors responds to
which corner of the [indiscernible] that you're trying to -- is there some
sort of model --
>> ROSS SMITH: That's a good question. There is a -- this builds on the
algorithms [indiscernible] published quite some time ago -- the idea of using singular value decomposition to get an approximation between the model and the -- the physical model and the geometric form. And you have predefined points in the model and you match the two together. The whole deformation is based around that concept you're talking about there. So yeah, exactly.
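A compact version of the SVD-based alignment step mentioned here is the standard Kabsch/Procrustes fit between predefined model points and their measured counterparts. The sketch below shows that computation with made-up points; it is a generic illustration of the technique, not the published algorithm.

```python
import numpy as np

def rigid_fit(model_pts, measured_pts):
    """Best-fit rotation R and translation t so that R @ model + t approximates measured."""
    mc = model_pts.mean(axis=0)
    sc = measured_pts.mean(axis=0)
    H = (model_pts - mc).T @ (measured_pts - sc)     # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = sc - R @ mc
    return R, t

# Made-up example: three model corners and their measured positions after a 90 degree turn.
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
measured = np.array([[2, 1, 0], [2, 2, 0], [1, 1, 0]], float)
R, t = rigid_fit(model, measured)
print(np.round(R @ model.T + t[:, None], 3).T)       # reproduces the measured points
```

With exact correspondences this reproduces the measured points; in practice the fit is least-squares over noisy photodiode positions.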
>>: So I got a student working on a project now that basically we have this
problem that -- because we have this large space, the projectors are not
overlapping [indiscernible] calibrating [indiscernible] projectors. So what
we want to do is build this Tinkertoy system, and so if we had this rigid -- one thing you can do is build this big rigid model that covers the entire
space that you have, no matter where you put the projectors, as long as
[indiscernible] sees enough of these calibration [indiscernible], you can
calibrate all the projectors to that large [indiscernible].
>>: If you wanted to just calibrate two projectors [indiscernible], do you
need that geometrical [indiscernible] or do I just need to roll a couple
cubes into the scene?
>> ROSS SMITH:
That's a separate problem.
>> BRUCE THOMAS: We're interested in user interfaces so we take algorithms
from basically [indiscernible] technology [indiscernible]. Now if we want to
go back and rethink the problem, we got to calibrate, there are other things
to do. The other thing -- the reason we put those LEDs on there is
we wanted to put natural light cameras on the scene and calibrate the natural
light cameras to [indiscernible], which we haven't taken advantage of. So in
some way, this calibration thing, we did this only because it was an
interesting problem and we had somebody who was a very talented engineer
working on it. I tend to stay away from the calibration problem because it's
not our strength.
>> ROSS SMITH: We just wanted a solution so we had to make [indiscernible]
it wasn't something we could buy. If you had a [indiscernible] here that you
were using six projectors on the planar surfaces, no geometric calibration,
you could use it definitely to align them very rapidly.
>>:
We are doing that.
>> ROSS SMITH:
>>:
Yeah.
It's actually a simpler problem.
Because that's a planar surface?
>> ROSS SMITH: I was just trying to separate out the underlying geometric calibration and planar surfaces, sort of in terms of using it, if we want to stitch this screen here up with six projectors.
>>:
[Indiscernible] to help remove shadows [indiscernible]?
>> BRUCE THOMAS: So we haven't found that at the moment -- people can bring up the question without seeing it, but while we're actually using the system, nobody says, boy, I wish you'd get rid of the shadows. And if you are actually doing over-the-shoulder projection, shadows are not as big of a deal, and if they become a big deal, then you put two projectors on, and
even if you're getting shadowing [indiscernible] scenarios, it's not as
[indiscernible]. But, once again, the problem we're doing is not a high
fidelity [indiscernible]. It's mostly a fast and dirty thing before we
[indiscernible].
The motivation for the whole submarine thing was somebody from the defense labs said, you know, this would be great, because at the end of the submarine projects, they build this $10 million thing with glass and steel and everything like that, and people come in -- it's two years before they put the ships in the water [indiscernible] -- there's all these problems with it and there's nothing they can do two years [indiscernible]. Expensive.
We did actually build a whole command center after they started to design
[indiscernible] problems. And it doesn't matter that it isn't perfect
because there's enough to cue them. And what you want to do is
[indiscernible] called the Hilton problem [indiscernible]. [Indiscernible]
Hilton hotels so they don't make them again but they keep making them. And
the only way you see --
>> ROSS SMITH:
You're supposed to consult --
>> BRUCE THOMAS: You're supposed to but they don't. So I don't know really
if you have this problem, for some reason there's a special chair you buy for
hotels that won't fit under the desk. Because the arms are up too high.
So the idea is, it's like, yes, [indiscernible] saying what did you think of the previous one -- oh, that's got some problems -- put it in front of somebody and say stop putting the button there, and they don't see it until they see the problem. Or hospitals. If you want to design a new room in a hospital, you bring the cleaners in and the cleaners will say stop putting the power points here. And don't put one here, put a whole bunch there. It's the one-to-one relationship, and that's where we're thinking.
Sorry, I'm really passionate about this because when I talk to designers,
they just have such a hard time telling their clients what the design is. So
that's -- so -- I just think it's really neat to have tools that everybody
can sort of understand what's going on. It's a hard user interface problem.
This is the same problem Mark [indiscernible] has when he gets to
[indiscernible]. I have to have these tools, but [indiscernible] tools to do what I want. And the designers don't want to bring a -- roll a workstation in [indiscernible], and I think [indiscernible] specify things.
[Indiscernible] would be great as to like the first step, so my idea is that
you sketch a lot of things on paper so you have thousands of sketches on
paper and then you have someone in VR that you make models for but eventually
[indiscernible]. So the -- so that's part of the understanding. So you
could do head mounted but then everybody is wearing head mounts and that
has -- I also think there's this kind of nice social aspect of I'm not
wearing something. The [indiscernible] clear enough that I can actually see
your eyes that might be okay, but I just like that [indiscernible] pushing
all the technology into the ceiling and then you just kind of working
[indiscernible].
I think there's some things where HMDs -- you're going to have to use HMDs because basically what's going to happen is, like, you would say okay,
we have this but now we want to visualize something and there isn't any
projection surface there.
>>: [Indiscernible] if you want to change the physical size of the console
[indiscernible], easy operation.
>> BRUCE THOMAS: Yeah, and I think it's -- I think it's one tool in the
spectrum. So I don't -- I'm not against them. I love HMDs.
>>: I was curious, in that last example where you're looking at a user interface on a console, how you separate the ergonomic problem from designing the user interface problem, because here you're using your projection techniques. You can iterate over the projected designs very quickly, but if you have to build a new console model, a new CAD model, that's going to be a much slower process. You probably don't want to do that as often as you iterate on the projected version.
So how do you separate the physical, the ergonomic side of the design
prototyping from the user interface side of design prototype because they
seem somewhat [indiscernible] linked. Does the question make sense?
>> ROSS SMITH:
Yeah, yeah.
>>: You start with an example [indiscernible] interface and change the ergonomics and change the interface, or [indiscernible]?
>> BRUCE THOMAS: Okay. So if you're a user interface designer or user
experience person, you'd probably be designing that [indiscernible]. We
see -- I had a little slide there. I think we were thinking about more
ergonomic testing. We'd like to do some -- come with a kind of
[indiscernible]. Hook somebody up to a motion capture system and see
[indiscernible] front of the console and say -- sit them in front of the
console and you're going to say, go and use it like you would normally use it.
Now here's this other design and you can measure reach and [indiscernible]
and like that and get a report back saying this is a superior design, this
one [indiscernible]. And once you whittle it down to something, you know, you
might find an ergonomics expert and [indiscernible].
We see this as being in the early part of the design phase. [Indiscernible] design phase you'll probably make more [indiscernible].
>> ROSS SMITH: They are somewhat integrated in some cases as well. If you
look at the other example, the dashboard, the idea of being able to sit at
the dashboard and operate the user interface is a combination of the
ergonomics of how far the buttons are away from you and the interface that
you're using at the same time. So there is some link in some cases that you
don't --
>> BRUCE THOMAS: I guess the other thing too is we talked to appliance
manufacturers. Your washing machine [indiscernible] -- I mean a
dishwasher. So you can make a really nice physical [indiscernible] for that.
So that's not going to change. And then you can really worry about the
ergonomics there. In fact I've heard stories of photocopy machines that just
had these horrendous interfaces and it's only -- and the prototypes are like
a hundred thousand dollars to make the prototypes and then the
[indiscernible] comes up and says what the hell is going on here and
[indiscernible] fired because it's just really bad design.
So we want to start bringing ergonomics into the equation. Just haven't [indiscernible].
>>: So we've been bombarding these guys with questions, so maybe we should
wrap up soon. Unless somebody else has another question or two.
>>: Just out of curiosity, what do you use to do your renderings
[indiscernible] all these projectors?
>> BRUCE THOMAS: We have an infrastructure that we've been working on for
the past five years. It's open [indiscernible] based. And so it's a -- we
built up an infrastructure where it's a model-based thing where we can literally -- if you want to do a pretty simple thing like that control panel, you can mock up an application in an hour or two. So we're pretty
happy with the architecture of it but it's just [indiscernible].
>>:
Is it all from one node or is it multiple nodes?
>> BRUCE THOMAS: At the moment we can run four projectors off of one
machine. We haven't done multiple machines. No reason you can't do multiple
machines. It's more synchronization. We could run multiple versions on it.
>>: Right, right. I understand that.
>> BRUCE THOMAS: [Indiscernible]. But at the moment we don't have the synchronization [indiscernible]. Every time we talk about it, it doesn't sound like [indiscernible]?
>> ROSS SMITH: [Indiscernible] infrastructure in place that's sort of developing, but a shared scene graph sort of thing I think is what you're talking about. There's no reason it's not achievable. It's a [indiscernible] thing.
>>: The reason I'm asking is we spent a lot of time, the system that I
showed you, we spent a lot of time last summer debating whether we should
roll our own because it would be much easier to scale or do something the
designers are actually [indiscernible] with to build it [indiscernible]. We
went down that track which is painful but it's -- I don't know. So it's kind
of interesting, you guys chose to do the opposite direction, but I think
you'll probably scale faster, probably be easier to kind of get, you know, more than a certain number of machines or different things.
>> BRUCE THOMAS: If I'm getting your question, I just put in a large
[indiscernible]. So at the moment what you do is [indiscernible] but what
we'd really like to do is [indiscernible]. It's a one-way process now. CAD
model, ship it out, [indiscernible].
>>: Don't have the realtime necessarily [indiscernible]. [Indiscernible] display, but the CAD model doesn't realtime --
>> BRUCE THOMAS: What we'd like to do -- there's lots of problems, but what we'd like to do is say I'm going to change this: these two arms are not perpendicular, I want to make them parallel. So I want to specify that in there, it gets pushed back into the CAD system, they do the constraints calculation on this, update [indiscernible], and push it back. It's going to be not realtime in the sense that it goes off and does that, but it's realtime in the sense that I tell it what to do [indiscernible]. As far as I can tell, that's -- aside from working with the CAD packages, it's a problem to specify CAD construction [indiscernible] infrastructure.
>> ROSS SMITH: This is what you would describe in terms of the constraint
side of it with your work I think.
>>:
[Indiscernible] animation.
>>: Once he started doing [indiscernible] prototypes and things like that,
you have to start going from applying different shaders and that stuff and then the whole pipeline, it's just impossible to create. You might as well just [indiscernible] into a tool like that. You can have a [indiscernible] last simulation --
>> BRUCE THOMAS: So you're using Unity? So we have sat around and said, can
we [indiscernible] onto Unity? And we're -- we haven't figured out if we
can [indiscernible]?
>>:
If we created one, would you be interested in using it?
>> BRUCE THOMAS: Yes. No, this is something -- we're big Unity fans because it's just -- yes.
>> ROSS SMITH: We've adopted that for individual projects at the moment,
but not a core library basically.
>>: Basically what we have is a unity front end for [indiscernible]
rendering, spatial rendering.
>> BRUCE THOMAS: [Indiscernible].
Yeah.
[Indiscernible].
>>: [Indiscernible] care what you're asking [indiscernible].
>> ANDY WILSON: Thank you very much.
[Applause]