>> Andy Wilson: So it's my pleasure to introduce Alvaro Cassinelli. Alvaro and I met in 2005, right, at [indiscernible], and then we did this trip to Paris, and had a good time there showing our stuff, very interesting. Lots of stories there.

So yeah, Alvaro's work is fascinating, and also actually I think I saw your work before then. It was your demo at CHI with the laser, and that struck me as one of the most amazing demos I've ever seen, actually. Still, to this day, it is remarkable. I hope you'll show a little bit of that.

>> Alvaro Cassinelli: Yeah, it's there. A new version. You'll have some live demos.

>> Andy Wilson: Oh, it's actually in this slide, this picture right here. But he'll talk about what makes it unique and amazing and all that. But just a few words about Alvaro. Are you going to go over your background? Yes, there it is, right there. Ph.D. in physics, and at the University of Tokyo today, and he's visiting the lab this week. And maybe we'll have him back again to do some more work. So thanks.

>> Alvaro Cassinelli: Thank you. So yeah, I was going to give a complete introduction about why we're doing like [indiscernible] arts and science. But basically, when I started, I was -- and I keep doing this kind of oscillation between art and science. I started working a lot on laser physics.

I was doing like plasma physics and stuff like that and also [indiscernible].

When I went to Japan, people were using technology, developing things with a very clear purpose, making it faster, while I was showing these robots with not really a clear purpose in mind -- like you were saying yesterday, for the hell of it.

>>: [indiscernible].

>> Alvaro Cassinelli: And that struck me. This kind of intuition, this kind of vision, when you do something and you just start to see a different angle, a different perspective, it's beautiful. You don't understand exactly what it will be used for. And then basically, I couldn't publish this kind of result in venues like maybe E-Tech or CHI, maybe [indiscernible], things like that.

So I started thinking that, yeah, media art is a way of actually doing research, a platform for having real feedback in a completely uncontrolled environment. And, well, basically, I consider myself an inventor. So many people maybe [indiscernible] I'm an artist or I'm a scientist by saying that. I'm just creating, and I need this feedback. You need feedback. So papers, whatever -- it's feedback.

So I'm a laser guy. So today, I'm not going to go into all these works I've been doing, and sometimes they're interesting for me and maybe not for others.

So I'll just talk about this latest area, still a work in progress, which is human-computer interfaces, and you'll see some of this work presented sometimes like media art or like research, depending on the audience.

And some of these I call proprioceptive interfaces, which are kind of an extension of the concept of tangibles, tangible interfaces [indiscernible]. I think it's not just whether you touch it, but whether you feel it with your full body.

Then also, I've been working a lot -- can you see very well? I can see very well here. But the contrast -- I can actually shut these lights off.

>>: Yeah, that would be better.

>> Alvaro Cassinelli: So red on black is not good, I've been told many times.

So mediated self is the idea -- I think interfaces are not just something where I'm here and I will, like, control something. Actually, interfaces change you. They become an extension of yourself. I mean, they completely modify what I am. So I've been working on this idea of altered sensing and prosthetics, and I work with blind people. And tactile feedback keeps coming up in many of the works I'm doing.

And then the thing I'm really very excited about lately, and that's also why I'm here, because I found like [indiscernible] -- I tried to. Yesterday, we clicked on your name, because I always called you Banco, but I wanted to say like [indiscernible] -- I forgot. How do you say it?

>>: We're not going to make this easier, because it's kind of fun.

>> Alvaro Cassinelli: Thank you. So I was reading your papers and it's exactly what I want to talk about here, because I'm very interested in this idea of what a display is, you know, displays beyond flat surfaces, and interactive displays. It's no longer a display. It's no longer something that just produces graphics. It's a full, like, user experience.

So three topics here. Minimal displays, where you're not projecting just data but something different, making them capable of having input too, and, very important, zero-delay, zero-mismatch spatial augmented reality. That will be, for me, the future.

So the first thing, about the proprioceptive and synesthetic interfaces.

Synesthetic interfaces, I mean like you move, for instance, your body, and something is mapped onto that. That is not necessarily like moving some object, but moving some other, like, dimension or something. So I will start by explaining with metaphors. Actually, can I do something? I have like a lot of information that I cannot read here. Now I'm okay. Yeah.

So also, everybody does that, but I work a lot on the concept of having a kind of metaphor, some very powerful idea, and trying to make things that build on that.

It's very tricky sometimes, because a metaphor can sometimes go against the real [indiscernible] of objects, and you are using real objects to make these interfaces.

But it's somehow framed by [indiscernible] work. So the metaphor here is about using space as a container for data. So volumetric data, like MRI images, and also our works -- this is actually time-lapse photography -- that you can explore by touching, putting this data into space in subtle ways, making maybe two-dimensional images by moving in space and moving the screens. And the other thing, I wish to show some examples. This may be the first, like, work I did. It was a kind of cube that somehow contains inside the whole volume of a movie. And by pressing on different parts of the image, you are able to control, like, time locally. So just pressing here, in fact, you're cutting into this volume of images and painting literally with time.
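
A minimal sketch of that "painting with time" mapping, assuming the movie is already loaded as a NumPy volume and the per-pixel deformation of the screen has already been measured (the real Khronos projector senses the deformation optically; the array shapes and depth gain below are illustrative only):

```python
# Slice a video volume with a spatially varying time offset: pressing harder
# at a point reaches further into the movie at that point.
import numpy as np

def khronos_frame(volume, deformation, base_frame, depth_gain=30.0):
    """volume: (T, H, W[, C]) stack of frames; deformation: (H, W) in 0..1;
    base_frame: frame shown where nothing is touched."""
    T = volume.shape[0]
    # Per-pixel frame index, clamped to the length of the movie.
    t_idx = np.clip(base_frame + deformation * depth_gain, 0, T - 1).astype(int)
    rows, cols = np.indices(deformation.shape)
    return volume[t_idx, rows, cols]

# Usage: a synthetic volume and a Gaussian "press" in the middle of the screen.
T, H, W = 64, 240, 320
volume = np.random.rand(T, H, W).astype(np.float32)
yy, xx = np.mgrid[0:H, 0:W]
press = np.exp(-(((yy - H / 2) ** 2 + (xx - W / 2) ** 2) / (2 * 40.0 ** 2)))
frame = khronos_frame(volume, press, base_frame=10)
```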

So there are two things that were like nice intuitions and did work. The first is, of course, this area of playing with time and breaking the [indiscernible] on the image. And the other thing is having something that is tangible, that you can really touch with your hand, and somehow have, as I said, this kind of synesthetic experience. I'm doing an action that is perfectly correlated, but it's not about space.

So it was very intuitive, and people were making a lot of comments about how, like, can we really touch a screen, [indiscernible] the first time I tried that.

And you can see like different generations, like all the mothers telling the children do not touch the screen, and the children don't care. They just go there and start to interact with that screen by pressing.

So I started having this feedback; it's very interesting. This surface where you project is actually the [indiscernible] of attention, in fact. It's not just a display. It's an interface.

So just a quick question that for me was interesting: what was technology really bringing to this kind of interactive cubism experience? Because, of course, it has been done a lot of times with photography -- it's called slit scan, right? So I think the point is you have this real-time interaction. It brings the possibility of doing performance with it. So having really very fast feedback and improvising and exploring the data, and that's quite important.

It's not just, you know -- when some painter was doing some work, he needed years of experience to reach his own language. Here, very quickly, you get things that look very similar, maybe, or not; that you decide. But now, I was wondering, is this good or not? Because actually all these years of practice to reach a language, you know, somehow disappear. Now we are making this up very quickly, and you cannot really go beyond the aesthetic that [indiscernible] gives you. So that's the important point I've been discovering.

You make all these things, you get trapped in something that looks always the same way. It's very difficult to break this. You need a lot of practice.

So this little square here: when it's orange, it means science. When it's green, it means it was kind of part of a project, and I got this request from some company -- here it was a [indiscernible] nuclear facility with the [indiscernible] images of objects inside, and then instead of having time, you can see inside objects by having this tactile feedback, or zooming here also, other things. So pressure related again. It transforms into something like zooming to see inside objects. And well, you can go for millions of strange things.

There was sculpting, like volumes, by pressing and starting to play and all that. So that's kind of fun. But then, papers. We needed to publish, and that's the paper for one of the conferences. I think you were chair. So can we really make this serious? Can we really sculpt, like, three-dimensional data, or find things, or control a CAD system very precisely?

So with my colleague -- he's not here; he wanted to come -- we developed this, which is a much more precise version of this Khronos projector flexible screen, not based on just some infrared light. If you have technical questions about how it works, I will answer. The other one, the system, was using something called vision [indiscernible] and shape from shading. This one is using structured light in real time, with lasers and things like that. So you have a very, very good [indiscernible]. And you can really manipulate these things behind the screen.

And the metaphor somehow changed. The metaphor is that you have a volume behind the screen, which is virtual, and then the real world. And instead of having a mouse, in which the mapping somehow happens in the mind, this is a membrane; the screen is a membrane that shows this [indiscernible] doesn't change. It looks perfectly natural. So you can maybe sculpt -- like you'll see, we tried to actually do something like clay-like manipulation by always having passive feedback. And passive feedback was somehow enough. Oh, I didn't press the chronometer. So my talk starts now.

>>: I have a question. So maybe it's a [indiscernible]. I saw one where you have a space you manipulate. The other one is you have a 2D and then you have a time [inaudible].

>> Alvaro Cassinelli: Yeah.

>>: Do you have any work on joint space [indiscernible]?

>> Alvaro Cassinelli: I don't understand.

>>: So your projector, it's 2D. It's a video cube [indiscernible]. This one, it was 3D -- an object you manipulate there. So I'm wondering, if you have a 3D object plus the time, do you have projects like that, where you allow the interactions?

>> Alvaro Cassinelli: Like the thing you were doing, like, with the TV object --

>>: [indiscernible]. So yeah, I would love to, if we have XYZ plus T, then maybe it would create an interaction like that.

>> Alvaro Cassinelli: I see, okay. Yeah. Interesting, yeah. That's interesting. Maybe something where you have kind of a sketch or something to attach it, and it goes to some state that was a previous state, or something using virtual reality. Yeah, that's interesting.

Okay. We'll talk more. So yeah, but I've been trying to map many different, like, spaces, like different dimensions, like with time and space. I tried also maybe more high-level, cognitive, like, understanding of the image. For instance, if someone was maybe sad, the face would maybe stop in time. So you can mix all these coordinates. But I understand what you're saying.

Another [indiscernible] that's very interesting, given the fact that you can play on a surface instead of just having some kind of [indiscernible]. We tried this experiment for up to four months at a [indiscernible] contemporary art space, like 64 channels of video with sound, and by touching this and playing on the screen, you can change the delay and the speed, and then it's like having a chorus of voices. If you touch here and here, the intersections will go faster. So you have this visual rhythm that correlates with sound, and it was a sort of interesting thing to explore. The result was really interesting. [indiscernible].
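
A rough sketch of that touch-to-chorus mapping, assuming touch pressure is available on an 8x8 grid of screen regions, one region per channel; the delay and speed ranges are made-up values, not the installation's actual settings:

```python
# Map touch pressure per region to the playback delay and speed of the
# corresponding video/sound channel: harder press -> more delay, faster playback.
import numpy as np

def channel_params(pressure_grid, max_delay_s=2.0, max_speedup=1.5):
    """pressure_grid: 8x8 array of 0..1 touch pressures, one cell per channel."""
    pressure = np.clip(pressure_grid, 0.0, 1.0).ravel()        # 64 channels
    delays = pressure * max_delay_s
    rates = 1.0 + pressure * (max_speedup - 1.0)
    return [{"channel": i, "delay_s": float(d), "rate": float(r)}
            for i, (d, r) in enumerate(zip(delays, rates))]

# Usage: press strongly in one corner of the grid.
grid = np.zeros((8, 8))
grid[0, 0] = 1.0
print(channel_params(grid)[0])   # {'channel': 0, 'delay_s': 2.0, 'rate': 1.5}
```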

And now, a little more serious: I was thinking, we have this kind of Khronos screen; it's a flexible screen, and it acts as a controller. But the screen cannot be moved in space. So here [indiscernible] a little strange. The idea was actually to move the screen in space. The screen here is rigid. Some people have now done that with a flexible screen.

But having this screen in space, making sense of the content by just moving it around, it was very intuitive, and I tried it with -- I had a discussion with radiologists who said it could somehow maybe be useful in the future, because it's not precise enough yet. But still, it gives, at least for students, a very quick understanding of how these different parts of the anatomy are placed.

So I've been trying to have tracking to make like a little window effect on something that's mobile. So there are some markers that, when you occlude them, you can even switch between tracking mode and slicing mode. And the idea is that there's a Plexiglass screen, so everything is actually -- it's a small little box with a projector and a camera and infrared light. So you don't need to -- this sits here. That's the whole thing. And that's why I could maybe detect an infrared pen and also annotate [indiscernible]. And actually, what I would like to be able to do is pinching, all these functions. I couldn't do it yet because more would be needed. It can maybe be done, of course, with a -- I was going to say iPad -- Microsoft Surface. But the idea, maybe, is that I like that any object can be used as a display.

Of course, if it were a little transparent -- if it were, maybe, made of acrylic, because I'm projecting the light down, maybe I could couple the lighting inside the screen and it becomes an FTIR-like interface that can do other things. At the same time, the position of the screen somehow controls the content.

So there were some attempts to use this for a kind of performance in which the surfaces of the people were actually the screen, intersecting some virtual objects, having a kind of Boolean theater. And again, putting these images in space -- this is an art installation, mostly, like, mechanical. So they have some behavior; it depends on where the people are in the room. This elephant is always flat, but it starts getting some three-dimensionality as you move, because it's still in space. It has a way of moving around that depends on the shape of the animal.

So I was very interested in how you create the sense of presence of this virtual object without using, you know, goggles or something that would like really show you this three-dimensional thing.

And one thing that comes to mind is tactile feedback again. So the motivation for that is, sometimes you have people, for instance actors on a [indiscernible] set, and they will have the [indiscernible] augmented stuff and they collide with the objects. So normally, they have a screen and they have to see themselves interacting with this object. So the idea was to maybe fit them with small vibrators on the joints, and then as they move towards, near some three-dimensional object, they will feel the force fields.

That's a prototype using a custom-made ultrasound tracking system. That's here. There will be a kind of virtual column, or changing the color here. And we thought about using different kinds of [indiscernible], like [indiscernible], maybe changes in temperature, things that are dependent on the state of this virtual object.
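
A minimal sketch of that kind of "force field" feedback, assuming the hand position comes from the ultrasound tracker and the virtual object is a simple vertical column; the geometry, falloff distances, and motor interface here are all illustrative assumptions:

```python
# Distance from a tracked hand to a virtual column, mapped to vibration strength.
import math

def column_distance(hand_xyz, column_xy, column_radius):
    """Horizontal distance from the hand to the surface of a vertical column."""
    dx = hand_xyz[0] - column_xy[0]
    dy = hand_xyz[1] - column_xy[1]
    return max(0.0, math.hypot(dx, dy) - column_radius)

def vibration_level(distance_m, full_on=0.05, cutoff=0.50):
    """0..1 amplitude: full strength near the surface, fading out by `cutoff` meters."""
    if distance_m <= full_on:
        return 1.0
    if distance_m >= cutoff:
        return 0.0
    return 1.0 - (distance_m - full_on) / (cutoff - full_on)

# Usage: a hand about 20 cm from a 10 cm-radius column centered at the origin.
d = column_distance((0.25, 0.15, 1.1), (0.0, 0.0), 0.10)
print(round(vibration_level(d), 2))   # ~0.69, a mid-strength buzz
```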

So again, a question: how to generate a sense of presence. You don't need more than that. You're not really exploring some [indiscernible] where you need very precise information. But either something is there or nothing is. So it's a sense of presence beyond the [indiscernible] of seeing something, or seeing something virtual with goggles.

So that's the art part. Now I was trying to create this sense of embodiment, of being somewhere, of being some presence, in this case, of somewhere else. There's a small experiment. I mean, I keep going, like, between art things and research and all -- what's really interesting for you? Not just me saying, yeah, this is cool.

But this for me was very interesting when I started playing, because it's disconcerting, it's weird. Actually, I made a little box with a stereo display. I put the display, custom made again -- two screens and some special optics to make it really 3D. So what happens is, if you approach this box and look here, you feel whether it's empty or there's something inside. But if you go, like, what's this? -- actually you get trapped, because there are two [indiscernible] behind you, separated by ten times the distance between your eyes, and the whole room looks exactly like the box. My original idea was to make the box exact, but we couldn't really close it. So [indiscernible].

So what happens is the whole space gets trapped inside, with some little time delay -- this is a classic in video art since the '70s, delay. So what happens is you see yourself, and then you say, oh, well, where is the camera? You turn around, you see the camera, and then you look here and you look at yourself, like looking into your own eyes. It's kind of scary. So at one moment, you don't know where you are, and people get really trapped, literally. That's why [indiscernible].

So there is a little sense of disembodiment: where am I? So that idea also started to drive a lot of little experiments and research in the lab.

But where do we place, where do we place our, like, our self, you know? Where are we in these kinds of interfaces? When you are talking on Skype to someone else, where are we, okay?

Somehow we know that's okay. I mean, in my head, that's what people would say.

But with all these interfaces, with virtual reality, what is the center of gravity? You need a center of gravity. Otherwise, you get lost. The sense of control, you know, where are we, where's the -- so this started as this kind of piece, art piece. And then I started thinking, you know, maybe something that existed before that produced the same uncanny effect is a mirror. If you wake up in the middle of the night, and you're a little sleepy, and you go to the bathroom to get water -- I remember once I did that, and I was totally sleepy, and I looked at myself and went like, aah! Just like I was so scared. Really. I had that kind of reaction, whoa. It was like boom, boom, boom, you know.

There is something interesting about mirrors. And we don't have [indiscernible]. I read three books about mirrors. I got completely obsessed. How can it be that we are really not totally fascinated by mirrors? We put mirrors everywhere, and it's the most disturbing experience, you know. And we are now doing that again: we are connecting cameras and putting them everywhere. Like, okay.

So this center of gravity, the self, is being completely destroyed, so we need to reconstruct it through these interfaces. So that's what I call the interface as an extension of the self. If you are manipulating that, you have to be clear about what you are trying to do.

So I was thinking it could maybe be the future, a super mirror, for instance -- this is orange, so it's on the science side. It could be this mirror in which you can see yourself from any side. Or using the cameras everywhere in the city to have the ability to just look around any object. People have done that on cars. It's this around-view system where you can see as if you were looking at it from the top, to park.

So actually, we work -- now I can say we work with some car company. A car company. I can't say the name of the company. I think you know which one. So the idea was to extend this concept by actually networking, you know, like the plate of the car to the camera in front of the car.

So, for instance, it becomes invisible. You can look through cars, and the idea was to completely network all this using, like, tags [indiscernible]. So I just [indiscernible] and some, like, wireless network. But normally, you could maybe do much more, like connect like this: say you are in a jam, a traffic jam, and you can't see what's going on. You cannot see. But you just start clicking, clicking, clicking on this car, this car, this car, and you see what's in there. You see a real-time video.

So I became interested in this. The [indiscernible] I will maybe show you, if I have time, in which I tried to extend that concept to the whole earth, like connecting, like, plugs of video. And then you don't know where you end up. It's more like an art project.

So I think I'm very late. So about this interface -- more concrete now. More concrete interfaces, not just, like, vision, but I've been trying to extend -- what is called creating new sensorimotor contingencies. That's really the much more classic work on augmented perception. So creating, for instance, like, x-ray vision, or whatever would be extending the senses.

So very concretely, the thing I've been exploring for a long time is this idea that we are relying all the time on vision. Vision is something that really appeared quite late in evolution -- in evolutionary terms.

Before, animals, like cells, were only concerned about what was really around them, you know, some millimeters, chemicals or something, centimeters, and we are extending this range; vision extended it very much. Now we are extending it maybe with the internet; we know something is happening some thousands of kilometers from here. But still, it's not a very direct physical interaction.

So the idea is extending the physical body with, like, [indiscernible] and things like that. So I explored that both for machines -- like, you know, you have phones and people are planning to put a camera here to be able to track hands, but it's always a very short-range interaction, so why not have hairs, [indiscernible] electronics.

And also for people. So for people, I was interested in this idea that maybe we can have, like, these cilia, this extension of the surface of the body, through rays of light. So not obtrusive. But they are actually [indiscernible], and the feedback is tactile, you know.

So it's a new sensory modality. It's not a tactile-to-visual substitution system. It's a new sensory modality. It's a new, like, sensorimotor loop, because it's tactile -- it's range-to-tactile, not vision-to-tactile. It's [indiscernible]. It's much more like [indiscernible]. It's as if my skin has extended, become very fat. That's just one of the first prototypes. I was just playing. I'm really not -- so I only have one module, and I started actually moving it around to try to find things, and I just feel something; the strength depends on the range.
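
A sketch of what one such module's control loop could look like: an outward-pointing range finder driving a small vibration motor whose strength grows as the obstacle gets closer. The sensor and motor callbacks are stand-ins, not the actual haptic radar firmware:

```python
# One "whisker" module: map a range reading to a vibration motor PWM value.
def range_to_pwm(distance_m, near=0.2, far=1.5, pwm_max=255):
    """Closer obstacle -> stronger vibration; nothing beyond `far` meters."""
    if distance_m >= far:
        return 0
    if distance_m <= near:
        return pwm_max
    # Linear ramp between far (off) and near (full strength).
    return int(pwm_max * (far - distance_m) / (far - near))

def update_module(read_range_m, set_motor_pwm):
    """One control-loop step: read the range finder, drive the motor."""
    set_motor_pwm(range_to_pwm(read_range_m()))

# Usage with fake hardware callbacks (an obstacle 0.8 m away).
update_module(lambda: 0.8, lambda pwm: print("motor PWM:", pwm))  # motor PWM: 137
```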

And you start having a special kind of behavior. So the theory behind all this kind of research is called, like, action and perception -- that perception is never passive, and then you start doing some kind of -- I know, I put this; this is my niece, because that's how this project started. She had some eye problems. [indiscernible] artificial retina, kind of like a vision [indiscernible] processor for processing, like, visual data. But it's not really an artificial retina. He said, maybe you can make that for my child.

The first idea I had was, this is not possible, you don't need that. And then I became more serious. This is a real experiment with 50 blind people in Brazil. So here, the [indiscernible] contains only six of these [indiscernible] finders. And you'll see that sometimes there's a little light there. People are actually feeling it, and they move, and they feel that there is something.

Something that -- I mean, she's a professor [indiscernible], a specialist who works with [indiscernible] the first, like, experiment on putting it on the back. And she did a lot of trials. She sent me this video first; I was saying, this is not normal. He doesn't have the cane. It's so distressing. You can feel it. He's like this. He's scared. But he tried with the cane and without the cane. And this is a complement to the cane. It's not a substitute.

But it worked pretty well, and they want to continue doing that. They want to have this as a product. Since then -- that was 2006 -- there have been so many projects that are very similar. Yesterday, I had to review two papers doing exactly the same thing. And the [indiscernible] project using Kinect. I don't see people having a Kinect on their hand. I mean, unless -- I'm sorry. Yes, I see people having a Kinect on their hand, on their shoulder.

But the problem really here is it must be super cheap, super invisible. So we saw at the end, I think, maybe at least for children, we could maybe just use some kind of little plastic antennas, because the problem with children -- she was telling me something terrible -- when a child is born blind, they are scared about bumping their head. So they start walking backwards. They start really always protecting their face.

And if you start your life walking backwards, I don't think it's very good. So very early in life, you have to create at least some very simple awareness of what's in front. So people put their hands out, or maybe really have some [indiscernible].

And that's the Japanese version. I'm showing this for two reasons. First, because it's quite fun. And second, because everything we do in the lab -- I mean really, now, 80 percent goes on TV, because it becomes kind of [indiscernible]. So I always like to show this.

And it's interesting, because the public gets -- we were talking about technology, how people perceive it. And it's always a joke. So I was really angry at first. And then maybe I got it, you know. It's the only way people get used to these things, to these visions. People don't read the papers, so they see that and they say, that was kind of stupid, but it could maybe be useful to a blind person. You know, it goes the other way.

And they make very nice graphics, so I don't have to make them myself. I mean, it's a long video, because then there is the detective who puts this on and starts moving very well. But, yeah, I have this. I could do my whole presentation with these kinds of videos.

>>: Question. So I noticed in one of the videos, it's like a [indiscernible]. The other is --

>> Alvaro Cassinelli: Yes.

>>: So how do you differentiate different qualities of the obstacle? I mean, you [indiscernible]. Maybe you can go through that paper ball. What kind of feedback, sensing?

>> Alvaro Cassinelli: It's delicate. We're writing a big paper now, a real paper. I will explain that later. There are a lot of things, because it's not just [indiscernible]. We are trying to create a kind of tactile language, and it depends. Sometimes the signal goes to zero too quickly. It can be because the surface is a corner. Or maybe somehow -- I mean, it depends. It's exactly like -- it's a kind of very simple optical processing. There is no camera, but you have all the effects of reflection off the surface. And you can couple it with an ultrasound sensor too, which has a much bigger kind of -- not range, but field of view. So we are trying to couple both of them.

So when we were doing this project, we noticed that the only way to see if the blind person, for instance, was really getting some feedback was to actually see this [indiscernible]. Then I made a prototype in which you send all the information to the computer. You see all this stuff.

But I was very interested in that idea: okay, I'm here. So you get close to me. You don't notice me. You get close to me and then something flashes. So you go, oh, sorry. I'm the one who is maybe looking at my phone, getting too close to someone, and without even saying anything, I have some feedback, you know. Not this feedback, which is the haptic [indiscernible] project, but feedback to the environment: I am here. It's like a light on the car, you know.

So I was trying to think about that, and I started collaborating with an artist in Australia on this project that was about extending the body in time and space through these really geometrical, like, lines.

So we put it in a lot of places and on people. Actually, dancers, professional dancers who dance with the company of [indiscernible], and we just tried to see what happens if you have a body that is extended and somehow interconnected, because it has a lot of modules here that could move, like, up, down. And they move not because he's moving them, but because the other guy, his partner, can move, and it's mapped onto this body.

So it's like you have an extended body and a shared body. That was a really open kind of research. It's not at all very serious research on that; it's just seeing what happens. The most interesting result here was that -- I cannot believe it's not working. Technology, technology. It's that when you don't see the dancers, you see the projections on the building, little lights, and you see that something is moving here.

So there's this projection of motion. And another thing that was really [indiscernible], we really put a lot of [indiscernible] and made a model of some kind of light substance that can move over different parts of your body, depending on how you move. Like maybe you will rotate and do things like this. To extend the motion temporally, if you do like this, it starts spinning around you. So the idea was to make the effect very strong, like a visual kind of inertia, a kind of motion blur, like [indiscernible] very clear.

So we made many prototypes. And so that's the first one, when I was not very fat. Now I really cannot put it on.

What I'm showing here -- because to see a single laser, you need some haze, smoke machines. So this is how it was exhibited, a kind of big hall with a lot of speakers. So there were these lasers; when people would move -- so dancers were inside, but it also worked as a kind of public installation. So as you moved, your motion was detected -- like the elevator, 3.0. So much better. It works now.

When it detected your acceleration, it started, like, spinning the sound, because there were eight speakers. So there was a flow of water going around, a kind of spiral, you know. We were trying to bring to this place, which is very, very urban, the feeling of nature, the power of nature. So this was a big sound, and one week later, there was an earthquake. So yeah, it was kind of, wow, we just invoked the power of nature. So not so fun.

I don't have a lot of time, so I'll really go quickly. We played with a lot of things, like face vision, and a video. This is the BBC. You know, who knows her? It's Joanna [indiscernible]. She was on one very famous movie kind of program. She's one of the Bond girls, and she was making -- yes, sorry, don't look here. She was making a program about cats and how cats actually have an extended body through the whiskers.

How can I have this? We made for her this kind of strange mask. And she's very polite, and she says, wow, there's an interesting [indiscernible] how cats can see. In fact, she's thinking, this is awful, I'll never do that again. It's really not -- because it's a terrible thing. You feel like Hannibal Lecter and you have the vibration on your face, you know.

So here, we have kind of her vital signs. Everything is like -- sounds like -- the sound is very low. See the British accent? The mask is sending everything wirelessly to this computer. So try to approach your hand, yeah, over your mask. So this is near your mask. This one.

>>: How can I try it out?

>>: We will try it out in the corridor.

>>: Cats. A cat's whiskers are the most incredible bits of [indiscernible]. They're so sensitive.

>> Alvaro Cassinelli: It's quite interesting, actually. They can even detect airflow, and blind people can also detect airflow through -- there was this controversy; people thought blind people had a sixth sense. Actually, they were able to detect like [indiscernible] vision. They do that. [indiscernible].

Here, I read an article in the ACM -- I don't know. It was interesting. He went bald, and he said, like, I started bumping my head like ten times more, because you feel when you are approaching something. A small, like --

>>: Used to have hair there?

>> Alvaro Cassinelli: Yes. [Laughter]. That's the only thing she says. It's long. So again, more about this extension. I mean, I did a lot of experiments, and now the idea is to decouple it -- like, the sensors were on a car; we never tried it in a real car -- and the [indiscernible] on your forehead. So, in fact, somehow, see, that's what I was saying: some stimuli go to the external world. Even if this person doesn't know that this person is getting close, you have an [indiscernible]. People don't do that now. People are always thinking, okay, I have to, like, do something to tell the driver to take action.

But these things could work in a kind of reflex arc that never, like, somehow, makes this brain of the machine aware, you know. The driver doesn't need to know that the car is taking some very low-level actions. So that's what this shows. It was controlled with the eyes closed, because you feel, when you move the car, that you are, like, sensing pressure.

And I don't know if it would be -- it could maybe be used in a real car, I think. I don't have a driver's license. The thing is that if I had a car, I would like to have that somehow. It's true. Because, you know, for me, it's magic how people park their cars. I don't understand. This is a sixth sense for me. So I need to build my own. Really, we are capable of doing a lot of things without seeing.

>>: [indiscernible].

>> Alvaro Cassinelli: Ah, that's too easy. So yeah, then let's go to this, like, the last point, the thing that I'm really very, very much interested in. This area of what's a display. Why are we always trying to [indiscernible] somehow the visual cortex, which is actually greedy? It's occluding so much. So thinking about this ambient display. So this is very fun.

You need very precise, very concrete cues, and you cannot always be looking at the screen -- to have something, even if it's a simple cue, you have to look there. You need to have this [indiscernible], and also, the content is related to where it appears. So I'll show you some examples.

This is something which could also be an extension of the self, called a laser aura. The idea was, you know, Japanese don't express a lot in the office, but I was wondering, I'm working, I want to know, can I interrupt you, or something like this. In social networks, you always have available, not available, something like this. But you don't -- it doesn't happen anymore in the physical world. Why? Because we assume that if you are very kind of like, hey -- maybe I know, maybe I don't know. Exactly.

So the idea was to show something around the person, this kind of aura, and the experiment here was very simple. We tried with some Mind Flex to try to measure some EEG, you know, whether they really work, and arousal [indiscernible]. It was just sensing the vibration of the chair, because he was, like, stressed, like, he's talking to me. So you have this kind of aura, right.

And the aura, this minimalist display, could show a lot of things. It could show distances, emotions, I want to go to the toilet. Or objects. Where is my banana?

>>: Where's the key?

>> Alvaro Cassinelli: Exactly, where's the key? The comment you can read is by Bruce Sterling. Bruce Sterling is this famous science fiction writer, and he is actually commenting about this project, all these things. He started reading that, and he gave me a lot of interesting feedback. It's true. Don't inflict [indiscernible] on reality. A laser pointer, you know. It's so nice. Keep it [indiscernible].

So then, when I say minimal displays, it means small. So this is another experiment we did on a mountain -- where is the video? I want to show the video. Close your eyes for 30 seconds. Oh, no, I can't believe it. I don't have the video. Anyway, I don't have time either.

So never mind. Never mind. So we tried to project, like, interactive graphics on a ski -- ski what?

>>: Ski slope.

>> Alvaro Cassinelli: Ski-jo. We also say ski-jo. So here, it was a kind of -- you can see that on your [indiscernible]. It's a seashore, and people can play with the seashore. So we used two ways of tracking people, of making this interactive: either with a camera or using the laser sensing technology that I'll show in a moment. But the problem with all these kinds of interactive displays [indiscernible], in particular for games and for really interacting with people -- as if there's something there that has intentionality, that reacts to my actions -- is that I believe you cannot afford having mismatch, spatial mismatch or time delay. So it's very important to have some key technology for that.

And I think, of course, there are two ways to do that. One is, like, smart sensors, for [indiscernible] example, and the other is, of course, vision-based, and as the technologies evolve, they will converge.

So the goal for me -- it's not really the goal. It's something that will happen, but it changes things radically, and people will start thinking differently about that.

So some projects about the smart sensing technology with lasers I will show now very quickly: [indiscernible] printing, or skin games, and this idea of a robot made of light. You'll see how it goes, this idea of a robot made of light, because the display and the place where the graphics are displayed -- that is part of the message, you know. It's not that you have a display here and you can put any information on it.

What I'm projecting means something. It's like a person, you know. I'm talking to this person, saying something, not to the other one. So I think that somehow it becomes a kind of living being. That's also very similar to when I read your paper, like [indiscernible]. The idea that you have a location-based, like, display -- it already has a lot of meaning. It doesn't matter what you are projecting. It depends on where and how it reacts.

So for the more technically inclined, just a word about how this laser-sensing display, LSD, technology works. You have two or more lasers that are completely co-linear. One is for sensing and acts as a range finder. Actually, it could be more: it could detect polarization, it could detect a lot of things, like color. And the other is for displaying the color.

So that means that when you are drawing here -- for instance, here, you have a normal projector; it's not aware at all of what it's doing. It's like a blind projector. Here, it's actually able to sense, the same way that when you write or do calligraphy or whatever, you feel the surface. So that was a key point.

You have something that is actually feeling where it's writing. But it's not like, okay, I will write a full letter, then I close it and I look and go, oh, no, it was curved, and then I have to start again. That's vision-based: it works, and [indiscernible] is that right? No, correct it. So here it's like, for each pixel, I know where I am. Oh, I'm not touching it anymore. I'm not at the right distance. I have to maybe [indiscernible] or whatever.
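
A conceptual sketch of that per-pixel "feel while you draw" loop, with the sensing and display lasers reduced to two callbacks; the target distance and tolerance are invented values, and the real system of course runs this at scan rate in hardware:

```python
# For every scan position, read the co-linear sensing beam first, then decide
# per pixel whether the display laser should emit -- no camera frame needed.
def lsd_scan(positions, sense_range, set_display, target_range=1.0, tolerance=0.05):
    """Light the display laser only where the surface sits at the expected distance."""
    for x, y in positions:
        r = sense_range(x, y)                    # range reading at this exact spot
        touching = abs(r - target_range) <= tolerance
        set_display(x, y, on=touching)           # instant per-pixel decision
        yield (x, y, r, touching)

# Usage with simulated hardware: a flat surface 1 m away except for one bump.
surface = {(2, 3): 0.8}
readings = list(lsd_scan(
    [(x, y) for x in range(4) for y in range(4)],
    sense_range=lambda x, y: surface.get((x, y), 1.0),
    set_display=lambda x, y, on: None,
))
```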

So the technology works like this. It's just this idea of having a beam that can sense at the same time, and applying it to machines gives you these, for instance: what I call markerless laser tracking. That was 2004 technology.

So here, you have, like, [indiscernible] mirrors and one photodetector. There's no imager here. It's detecting the reflection, doing some synchronous detection, being able to somehow measure distance with quite good precision at about one meter -- about [indiscernible] precision. And, of course, the transverse position is very good. That's what you saw at CHI. So the first idea was to make this [indiscernible], because it was the year of the [indiscernible] report, and it was fun. No markers, you know. The guy had markers, so I felt so good for a year. But nobody knew how this worked, and now you have SixthSense, with markers.
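
The talk does not detail the tracking algorithm, but one plausible toy version of a single-photodetector lock-on loop looks like this: dither the beam on a small circle around the current estimate and nudge the estimate toward the brightness-weighted centroid of the returns. All gains and geometry here are assumptions, not the lab's actual implementation:

```python
# Toy marker-free tracking loop: the galvos dither the beam around the current
# estimate, and the photodetector readings pull the estimate toward the target.
import math

def track_step(center, radius, brightness_at, gain=0.8, samples=16):
    """One update of the tracked position (e.g., a fingertip)."""
    cx, cy = center
    wx = wy = wsum = 0.0
    for k in range(samples):
        a = 2 * math.pi * k / samples
        px, py = cx + radius * math.cos(a), cy + radius * math.sin(a)
        w = brightness_at(px, py)          # photodetector reading at this dither point
        wx, wy, wsum = wx + w * px, wy + w * py, wsum + w
    if wsum == 0.0:                        # lost the target: stay put
        return center
    return (cx + gain * (wx / wsum - cx), cy + gain * (wy / wsum - cy))

# Usage: a bright "finger" at (0.3, 0.1); start the scan at the origin.
finger = (0.3, 0.1)
bright = lambda x, y: math.exp(-((x - finger[0]) ** 2 + (y - finger[1]) ** 2) / 0.02)
pos = (0.0, 0.0)
for _ in range(20):
    pos = track_step(pos, radius=0.05, brightness_at=bright)
print(tuple(round(c, 2) for c in pos))     # converges near (0.3, 0.1)
```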

But I still think it could be interesting to do something like this. Because not necessarily with [indiscernible] mirrors, but maybe with [indiscernible]. Again, same idea: you are only interested in the short-range interaction, so you can have, like, a lot of hairs exploring the world around, and then measure the size of -- the shape of your hand. It's basically a range finder [indiscernible]; people are now doing that.

So yeah, we all contribute a little part to this kind of reflection, and then it's very quickly obsolete. This is another application, inspired, maybe, by Blade Runner. There's this one moment where Harrison Ford is zooming in on something by talking, and it's completely nonfunctional. He's saying, to the right, 10.5 centimeters. I don't know how he knows. But now it's like -- yeah, with speech-controlled interfaces, it's as if you know exactly what you want to do. It doesn't work that way. You need [indiscernible] interfaces [indiscernible], like something that is more like improvisation, where you have very fast feedback.

So here it's controlling, like, six degrees of freedom with one finger and no markers. This is doing very simple recognition for zooming and [indiscernible]. So that was for machine [indiscernible], like having [indiscernible].

And I don't know how much time I really have. If I have time. So.

>>: So you have a demo?

>> Alvaro Cassinelli: The demo doesn't need time. Just press a button. So this is the system working without calibration on anything, like touching the [indiscernible], doing a local scan and getting a lot of information. That was the thing I was explaining -- not here today -- so doing, for instance --

>>: Mike?

>> Alvaro Cassinelli: Mike. And this actually shows my point about perfect registration in time and space. This is a [indiscernible] scan-based system. Maybe you saw it. It's totally different technology. So a [indiscernible]-based system. And you have an infrared laser and a red laser. So they are co-linear. They are moving at the same time.

With the infrared laser and one photodetector, when it senses a lot of absorption, in particular in the veins, then the red laser is turned off in real time, per pixel. So you have a kind of vein viewer. Now this company has made that, calibrating a projector and a laser. And a camera.

But here, there's no, like, plane of calibration, because there's [indiscernible]. I'm not sure how they're calibrating the stuff, because they have no marker. Here, it's impossible to lose it. So this is what I call, like, low-level image processing on reality. So on the real world, you know, like contour enhancement.
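
A schematic version of that per-pixel gating, assuming an IR reflection reading and a red-laser switch are available for each scan position; the threshold and the fake "vein" map are illustrative only:

```python
# Per-pixel gating: the red laser is on only where the IR reflection is strong
# (i.e., low absorption); over a vein, the reflection drops and the pixel goes dark.
def scan_and_gate(scan_path, ir_reflection_at, set_red_laser, absorption_threshold=0.4):
    for x, y in scan_path:
        reflection = ir_reflection_at(x, y)        # 0..1, low over absorbing tissue
        set_red_laser(x, y, on=reflection >= absorption_threshold)

# Usage with a fake skin patch where a "vein" runs along x == 1.
vein_map = {(1, y): 0.1 for y in range(4)}         # strong absorption on the vein
scan_and_gate(
    [(x, y) for x in range(3) for y in range(4)],
    ir_reflection_at=lambda x, y: vein_map.get((x, y), 0.9),
    set_red_laser=lambda x, y, on: print((x, y), "red on" if on else "red off"),
)
```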

So you could have, I mean, maybe on your [indiscernible] and enhance [indiscernible] without needing to actually acquire the dimensions and then project something. Someone who is not seeing very well could have this intelligent light that projects and somehow, like, enhances things.

So I will show this. I don't need to do this. So it's a [indiscernible] art installation that uses this technology to perform, to make sound. See, I'm moving the thing and it never, like, loses the tracking. To make music -- I didn't know. I mean, I'm not responsible for the sound part. But it's just the first step. We're trying to create a kind of language, maybe related to the behavior of light. But we never really finished this one. It's a work in progress.

But you see, it looks alive and you can play in real time. And yeah, those are the ideas I showed. I'm not sure it's possible to do now with camera and projector calibration. It would be a little tricky, because this [indiscernible] instantly when it touches something. So it was just kind of an idea. It's really at an early stage, as you can see. I'm always projecting a little circle. It's not that interesting. But imagine if you could project, like, full graphics on the surface of your body -- make, you know, some full-body feel on your body and control things. It could be interesting. It opens up so many possibilities.

So okay. You see something -- so I'm finishing. It's almost over. So for vision-based, like camera-projector, you have to calibrate it. This is -- we're talking [indiscernible]; it's a camera-projector [indiscernible] framework, and you calibrate in one minute, camera and projector. So it starts doing this. [indiscernible]. It's based on the [indiscernible] community, and it's an extension I made. So you do like this, and at the end, it's calibrated, and you start augmented reality. Without the Kinect. Of course, now we have the Kinect, so it can do much more. You can combine them. This is a normal camera and projector.

And camera and projector doesn't mean that you don't have anything else. So here, this is -- I'm showing this because he sent me this yesterday. He said it's the latest version of the so-called [indiscernible] developed in the lab. So it uses some structured light, very fast cameras, a lot of processing, and it's scanning at 250 pages a minute per book. When I say scanning, it's really getting exactly the shape, taking many pictures as the thing is moving, using shape from shading; it can even measure the reflectance of the surface.

So it's not just scanning and making it perfectly flat; you're also recording the reflectance of the book. So if one day you want to print it, you can even print it in the same material. We are acquiring a lot of data. So it's a kind of cloner of books. Wow. Okay. So now I reach the meat of my presentation -- no. Almost.

Okay. This is something I will maybe talk with you about later. This is what I'm working on, this idea of [indiscernible] collaboration through space. No, no more time. I will kill everybody. So I already explained this idea of the robot made of light. I've been playing with that a lot, making this little beast that we show now. I'm sorry, the last, like, slides, you wanted to see. Very silent.

>>: So it moves [inaudible].

>> Alvaro Cassinelli: Yeah. One is just actually shot [indiscernible] and projected in Finland. And then you make -- yeah, and then the [indiscernible] is somehow projected on the beach. But here, it's completely simulated.

>>: [inaudible].

>> Alvaro Cassinelli: It's a water line. So [indiscernible].

>>: I love the purity of the laser.

>> Alvaro Cassinelli: Yeah, yeah.

>>: Scared of making everybody blind?

>> Alvaro Cassinelli: No, I did all the calculations. Serious guy. I was very serious about it. No, really. The maximum here was maybe 500 milliwatts, because I was super concerned. And, in fact, it's too conservative. Now people, yeah, use lasers like five watts.

The important things are the optical parameters, like the divergence of the beam, et cetera, how it moves, et cetera. Now it's safe. And they tried much better lighting.

Okay, last project. Yeah, sorry. The whole thing, like, points to the idea -- we're always playing with projectors, and, you know, trying to make things appear in places that somehow augment the meaning of the message that you're projecting. But that's all visual.

So the idea would be to make something that's a function projector. You project some function onto some surface or some object. Here, it was a very simple idea. Here, it was projecting sound through this kind of [indiscernible] speaker onto an object, like a banana that looks like a telephone, so you can talk on the phone.

And really, the sound comes from the banana. So the idea was to project functions onto objects. It's a multi-modal display. Or having a kind of pizza and it becomes a video player, and you can, like --

He was working with me as a student -- not a student, a tech assistant in the lab for some years. We had a lot of fun. That was [indiscernible], one of the winners last year -- two years ago. So this is our very cheap version of something like [indiscernible] with sound, and I think the modules, several modules, were fried every three hours, something like that. So I want to know what you've been using; it's much better.

But the idea was thinking that, oh, things can be augmented from the outside. This is the extension of the concept of spatial augmented reality. It's not just spatial augmented reality with images; it's with sound, maybe functions, maybe computing functions, which brings me to this last idea. Now I'm at the end; this is the end of the talk.

I've been working on the book flipping scanning that you saw. I mean, he was working on it; we also work on book flipping printing, trying to print a book in real time using photo [indiscernible]. So you're, like, changing the affordances of the objects by projecting with this kind of projector from the outside.

That means, yeah, affordances switched on and off on objects. So this is a kind of joke, you know, [indiscernible] rechargeable. But it's not so much a joke. When you buy a CD today or download some music, you are not allowed to copy it to another computer, something like that. You can, physically, but it's illegal. Same thing here. If you use it, it will be illegal; you'll just be fined.

And maybe the hammer will completely, like, not work anymore. But if it breaks, it will be repaired for you, because it's a service. You're paying for the service.

Same thing with everything. You can think of a strange utopian future in which you don't own things that have affordances. Someone else can control them. So that's why I'm reflecting on all these matters. We can make a lot of mistakes and get trapped.

And that's the end of my talk. So I've been advocating this idea of reality-centered interface design, which I think was my idea, and, of course, has been described years ago by many, like, thinkers about this kind of thing.

So also, this idea of real space, of organizing [indiscernible] for data -- there's this idea of the cloud, where you don't know where it is. In [indiscernible], you know where it is: outside. But in general, you know, I need to place things somewhere, organize my data through space.

And generalizing the rules of physics -- you were talking about intuitive physics -- we're trying to grasp things. You cannot really, because they're not really real objects. So you need to make something that makes sense. And maybe -- to date, I didn't see anything that makes complete sense. The closest is like the [indiscernible] magic walls in which, okay, you have maybe some -- I call that Harry Potter mystique. It does something and you know something has power. Like a remote control. But it's too basic, you know.

And I believe very much in this. The [indiscernible] puts too much [indiscernible] -- something that looks very nice and you press buttons, and that's how you interact with -- you design the interface, basically. And I think that even if you try to make it as simple as possible, intuitive, et cetera, you need to learn some kind of set of rules that are fixed. Instead, if you have something that modifies the system as you interact -- people have probably tried that already -- but really, it's like how people learn things in the real world: you interact with someone, you learn about the other by interacting.

So, trying to create these physical engines -- engines instead of knobs -- that have a behavior, [indiscernible] communicate with them and train them as my interface.

That was about one hour and 20 minutes. Much more than expected. I'm sorry.

Thank you.

>> Andy Wilson: Any quick questions?

>>: I was curious about the banana. So there's no speaker inside the banana?

>> Alvaro Cassinelli: No.

>>: It's using [indiscernible]. How can that be?

>> Alvaro Cassinelli: You have to make it work.

>>: [indiscernible].

>> Alvaro Cassinelli: Okay. Yeah, absolutely. [indiscernible].

>>: [indiscernible].

>> Alvaro Cassinelli: Yeah, please, you can put it. So here it's not a game. I'm just showing you something very simple -- so you can move this. No one can erase it. So make it bigger. [indiscernible].

>>: What if you don't make it flat? You make it [indiscernible].

>> Alvaro Cassinelli: Well, it has to be on the line of sight.

>>: Oh, okay. [indiscernible].

>>: [indiscernible].

>> Alvaro Cassinelli: It cannot work, because I'm projecting from here. But [indiscernible].

>>: What were the [indiscernible]?

>> Alvaro Cassinelli: The [indiscernible] is because I modified it to be able to track it, so you can really --

>>: Is that working with the slides?

>> Alvaro Cassinelli: I hope so.

>>: So what is this here?

>> Alvaro Cassinelli: This is another photodetector. It's detecting how much light reflects off the object, and that's it. That's the only information you have. And now, the [indiscernible] is made in two milliseconds. It's a very fast scan.

>>: [indiscernible].

>> Alvaro Cassinelli: Came back. I will show you.
