>> Qin Cai: It was my great pleasure, and yet also a bit gloomy, to introduce Tom's concluding talk for his internship here. So Tom was originally interning from Montevideo, Uruguay. The country, to me, was originally associated with the World Cup in soccer; I think that after interacting with Tom, this will change, and I'm sure Tom will tell you more about his country. And in Tom I saw a mixture of things. He's a PhD candidate with Dr. Alvaro Cassinelli at the University of Tokyo, and yet he himself is a faculty member of the University of the Republic in Uruguay. And he's a computer engineer and a researcher in human-computer interaction, and yet he is an artist at heart. So he claims he's doing an internship here with me, but on the other hand, I feel I'm the one who's doing the internship; I learned quite a lot from him. Last but not least, he's also the cofounder of the MSR MediaLab upstairs, in case you have a chance to go visit. So without further ado, let's let Tom talk about face interaction.
>> Tomás Laurenzo: Thank you. So, hi everyone. Thank you for coming. As Qin said, I'm finishing my internship here. Qin was my mentor, within the [inaudible] group, which is [inaudible] interaction and communication. The manager is Zeng Zu [phonetic], Zeng Ku [phonetic]; I never know how to say it. I'll learn it; I have one more day to learn it. So, yeah, I'll start by introducing myself. As Qin said, I come from Uruguay. I work for the University of the Republic of Uruguay; it's called Universidad de la República in Spanish. I am an associate professor at the engineering school, where I run a small lab, and we focus on human-computer interaction and also on media arts. So most of the things I do are related to art and to interactive art.
I also work with people from the school of psychology, where I'm an associate researcher. I'm also a visiting professor at the architecture school, because human-computer interaction and media arts are quite [inaudible], so we tend to collaborate with different groups and different people. I am also a researcher in the National System of Researchers, which is a public funding system for outstanding researchers of sorts. And here at MSR I am one of their research fellows; I got the [inaudible] last year, and this is my second internship. The first one was with [inaudible] in San Francisco; now here with Qin. And this is called doing things backwards, which was a phrase of [inaudible], who was there. [inaudible] was talking about me: you're doing things backwards, because I'm doing an internship after working, which has been quite an unusual career path, but that's the way it's been done. I started my PhD after being appointed as a professor. My advisor is Alvaro Cassinelli, who gave a talk here some days ago, from the University of Tokyo.
So I'm pretty sure everybody knows everything about my university and my country, but just in case you do not: the university is not very well known internationally, but it's the biggest university in... he is from [inaudible], he says; I do, over there... it's the biggest university in my country. It was founded in 1849. It's free, it's public. We have more than 82,000 active students, and, just to give an example, almost 20,000 new students in 2006, and about 8,000 teachers and researchers in 19 schools. And this is in Uruguay, which, as everybody knows, is this small country there in South America. It's quite small: for you Americans, it's about the size of the state of Missouri. It's about 2% of the size of Brazil. And it's twice the size of Portugal, four times Switzerland, and half of Germany. So it looks really small here; it would be quite big in Europe. It's really small in the US or in Brazil.
And people tend to know our country because of football. I don't know if you guys follow the soccer World Cup, but we were quite famous because we got Ghana out of the World Cup when Africa was hosting it; one of our guys stopped the ball with his hands. And we also organized the first World Cup, and we won it; we won it twice. And it's quite strange: I've been traveling a lot for the last almost 10 years, and lately people say, oh, you're from Uruguay, I love football, they have football; and some people talk about some beaches; and lately they talk about our president, because he's been named the world's poorest president. He donates about 90% of his salary to charity, and that's his car, and he actually drives it to work. So somehow he became kind of famous; that's the new thing. Now we are known because of him and because of football. And this is my city. This is my school over there. And there are a couple of pictures, so when you guys want to visit me, go in summer and do not do it in winter.
>>: Summer's now.
>> Tomás Laurenzo: Well, yes, summer's now. You guys should be there. It's really nice for me to be here in this beautiful Seattle weather. So the things I do fall mainly in these four areas. New media art, also called media arts: I'm more interested perhaps in the art part of new media, but new media is kind of a natural way for me to do art. Also HCI, or interaction design: I'm mostly interested in the interaction part, in designing interaction as a thing, and not that much concerned about the actual apparatus that we create; the design of the interaction, that is, the thing that happens in time when a person tries to solve a problem within a specific context. Also music: I tend to like computer music, but I'm more interested in music than in computers. And cognitive psychology, especially perception: we work a little bit, perhaps not that little, on new ways of perception, applying what we can learn from perception onto art, onto interaction.
I have a couple of pictures of art of mine. This is [inaudible], an installation with LED-lit balloons; it was quite big, [inaudible] square meters, and the balloons reacted to people, to sound, and to music, and we also had a web interface. This is, I don't know if you can see it; I really like this one. This is an interactive installation where people could virtually burn a picture of a man who was killed for political reasons. This is a screenshot of an audiovisual instrument that I used for some years to perform with DJs and with bands [inaudible] in the world.
And all these things I do, some people call creative computing, which is kind of a misleading name, but it is also interesting, because it is computing. We do create software; we do create electronics. The things we do fall within the normal things that an engineering researcher does; that's why we work at the engineering school, and that's why I got invited to come here to MSR. But it's creative computing in the sense that all the things we create have this intention of being a tool to reflect on different things. I want to talk about this a little bit more later, but I conceptualize art as a tool to reflect on meaning, to create meaning, and to think about what things mean. So in this creative computing we create technological artifacts as ways to actually reflect on what things in the world can mean or do mean. I'm going to come back to that.
Now I'm going to talk a little bit about what I actually did here. We called this talk Face Interaction, and it's going to be quite obvious why: we created interactive things, and they are all based on face tracking. We started working with this idea; we asked ourselves, how can we communicate sensations and emotional states in nonverbal ways? That was our main question. Okay, we want to communicate things that are not verbal in nature, because they are emotional states or sensations, and we also want to communicate them in a nonverbal way. So that was the first question. And then we thought: okay, the natural thing to wonder about is one person communicating things to another person, but that's quite restrictive. We might want to communicate to other persons, or to ourselves, or to things, or to places, or to spaces. So the basic question is quite general: how can we communicate nonverbal things in nonverbal ways, to whatever? And then again, I come back to this: what would that mean? If we are communicating some feeling to another person, or to this screen, what does it mean in terms of what we think about the thing that we are communicating, and about the communication act itself?
So we are talking about sensations and emotional states, and to be able to communicate them we somehow need to measure them, so we need to have a representation of them. So we said, okay, building onto this idea that the eyes are the windows to the soul, I've read that many times: the face is quite a reasonable way to estimate emotional states, or feelings, or nonverbal communication. And we also have the Kinect face tracker, which is quite handy. So if we can track faces and we can recognize some gestures, facial gestures, then we have some information that we can work with. There's a phrase by a researcher whose name I don't remember, who said that mirrors were the first interactive art pieces, and mirrors are also kind of a design pattern in media arts. You've got [inaudible] a bunch of interactive art pieces that are actually these kinds of magical mirrors. I have one piece there, and he has one piece there; they're kinds of mirrors.
So we said, okay, if we have this face tracker, and mirrors are kind of a design pattern, what can we do? And we decided to create this, which is also similar to those pieces there: a mirror where you're always somebody else. So how it works: you face the mirror, and the mirror takes a snapshot of you. We have the face tracker, so we know the head position and the head rotation, and we know, or we can estimate, some things about the gestures, like whether the eyebrows are raised or the mouth is open. The face tracker also gives you some things called animation units, which are estimations of gestures; they don't really work that well yet, but in future iterations of the face tracker they could be used. And this idea of a mirror where you're always somebody else is something I've been working with for a couple of years already; I also worked with these ideas on my first internship. And I call them [inaudible], because of the Beatles song that starts with the phrase "I am he as you are he as you are me and we are all together", which kind of shows this idea. So I have a small video of this running. There's no audio, is there? It stopped, or is it finished? So here we only have two subjects, which were Qin and I, but you get an idea.
[video]
>> Tomás Laurenzo: And the PowerPoint is not very happy with the video. I think if I close it and open it again it works. I'm going to do that. Sorry. But you got the idea from the video, so I'll continue. So that was our first prototype: creating something that, again, reflects on this idea of using the face, and on what the face means, and what the face means for the user. We always associate faces with one person; this prototype tries to reflect on that, on separating our image, our preconceived image, from ourselves. In that video we only had two subjects; if it ran for a longer time, you would always get somebody else.
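To make the tracking data concrete: the sketch below shows, in Python, the kind of per-frame signal the talk describes (head pose plus normalized "animation units") being turned into coarse gestures like "mouth open". The FaceFrame structure and the thresholds are illustrative assumptions, not the actual Kinect SDK API.

```python
from dataclasses import dataclass

# Hypothetical stand-in for one frame of face-tracker output; the real
# Kinect tracker exposes similar data (head pose plus "animation units").
@dataclass
class FaceFrame:
    yaw: float          # head rotation left/right, degrees
    pitch: float        # head rotation up/down, degrees
    roll: float         # head tilt, degrees
    jaw_lower: float    # animation unit, 0.0 (closed) to 1.0 (wide open)
    brow_raiser: float  # animation unit, 0.0 (neutral) to 1.0 (fully raised)

def detect_gestures(frame: FaceFrame) -> dict:
    """Turn noisy animation units into coarse boolean gestures.
    Thresholds are guesses for illustration, not values from the talk."""
    return {
        "mouth_open": frame.jaw_lower > 0.4,
        "brows_raised": frame.brow_raiser > 0.5,
        "looking_away": abs(frame.yaw) > 30.0,
    }

if __name__ == "__main__":
    sample = FaceFrame(yaw=5.0, pitch=-2.0, roll=1.0, jaw_lower=0.6, brow_raiser=0.1)
    print(detect_gestures(sample))  # {'mouth_open': True, 'brows_raised': False, ...}
```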
So the second idea was: okay, we have this face and this face tracker; how do we communicate with places or with things? And, for example, can we assign, using the face tracker, some kind of behavior, some kind of feeling, to a thing? We created this small prototype called Look At Me, using the gaze as [inaudible] or [inaudible] what the user is looking at. We created some kind of networking system, so that the things that react are not actually the computer; they communicate over the network, using a protocol called OSC, with the computer that's doing the processing, and we could have many things that react. So I again have a small video. Let's hope it works.
[video]
>> Tomás Laurenzo [video]: So I created this little demo that is called Look At Me. And [inaudible] little system, it really wants me to look at it. So if I turn it on, and I need it to focus... If I turn it on and I look at it, I'm going to cover the energy light because it's too bright. So when I look at it, everything's fine, but if I stop looking at it, it goes crazy. If I move close to it, it just turns the light on, and [inaudible] stops if I look away. And if I look farther away, the [inaudible] will start vibrating. It makes me look at it. I've been trying to turn on the light, and it really wants, it really makes me look at it. Yeah, that's it. Thank you for watching.
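The networking idea here is simple: the computer running the face tracker estimates whether the user's gaze is on a thing and notifies that thing over the network with OSC. A minimal sketch of that pattern follows, assuming the python-osc package; the OSC address, host, and port are invented placeholders, not the prototype's actual messages.

```python
# Sketch of the Look At Me networking pattern: the tracking computer sends
# an OSC message to a networked device telling it whether it is being
# looked at. Assumes python-osc; the address and host are placeholders.
from pythonosc.udp_client import SimpleUDPClient

def gaze_is_on_device(yaw_deg: float, pitch_deg: float) -> bool:
    # Crude proxy: treat a near-frontal head pose as "looking at the thing".
    return abs(yaw_deg) < 15.0 and abs(pitch_deg) < 10.0

client = SimpleUDPClient("192.168.0.42", 9000)  # placeholder device address

def update(yaw_deg: float, pitch_deg: float) -> None:
    # 1 = user is looking, so the device stays calm; 0 = it turns its light
    # on, vibrates, and "goes crazy", as in the demo.
    client.send_message("/lookatme/gazing",
                        1 if gaze_is_on_device(yaw_deg, pitch_deg) else 0)

update(yaw_deg=3.0, pitch_deg=-2.0)
```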
>> Tomás Laurenzo: I don't know; the videos used to work all right half an hour ago, but now they don't. I've been told that it's a feature. So the idea was that, okay, that was also a small prototype. It has an LED light and a vibrating motor, and if you're not looking at it, it reacts: if your gaze starts to go away, it turns the light on, and if you actually look farther away, it vibrates, makes sound, everything. Again, it's only a prototype, but the idea is: okay, we know how to track faces, and we can estimate things out of the face tracking, for example what the user is paying attention to, and then things can react. And again, that's easy, but it's interesting as a tool to reflect on and to think about what things actually mean. What does it mean if we put behavior into a specific thing? But also, it can be not only things but places. So we did another one; it's more of a [inaudible], it's more than a prototype; we actually submitted it to a [inaudible], and I'm pretty sure it's going to be accepted, so we're going to be showing it next year. So we have more questions: what if, instead of thinking about things, we think about places, spaces, rooms? One natural thing that I've been reflecting on for quite a long time is that this room is different after we use it. If you come back here and you know that Albert Einstein gave a lecture here, do you think this room is different? Or if you know that you yourself gave a lecture here, is this place different? So do we change the ways we think about places because somebody was there? Do we leave some kind of trace?
It's also related to a very common theme in media arts, which is surveillance. There's probably a camera on the wall; I'm being filmed, so of course there's a camera; I even have a mike; I signed for it. But in many places we go to there are cameras and we are not actually aware of them; even if we rationally know they are there, we are not aware of them. So that's another thing that art often does, making things explicit: an artist can take things that we know are there, or perhaps we don't, and show them back, and again, reflect on what those things mean. What does it mean that we're being filmed everywhere? What does it mean in terms of rights, in terms of privacy, in terms of aesthetics? So for this, and I'm going to explain it later, we created a small blink detection system: using the Kinect camera, we can know when the user blinks. So I have a small video showing that.
>> Tomás Laurenzo [video]: [inaudible] using the face tracker. And then the [inaudible].
>> Tomás Laurenzo: So when I blink, the screen is going to turn white. So we did that, and we created this artwork called Traces, which also has some poetry in it. That's another aspect that is very interesting to me: what's the poetry of, the poetic aspect of, the things that I, or that we, can create? So this is an artwork that captures when the users of a space blink, takes a snapshot of the face with the eyes closed, and projects the faces onto that place. So [inaudible] you go somewhere and you see some blurry faces, with the eyes closed, that are fading away, and then you think, okay, what's happening here? And you blink, and your face gets captured, and your face is one of the faces that are there. So, yet another video.
[video]
>> Tomás Laurenzo: So here, this is a screen capture, so all the faces are going to be in the same place, because I'm sitting in front of a computer. But when I blink, they get captured and projected, and they start to slowly fade away with time. And this is just my office here, with the projector, and how they look projected on the wall. And this is on one of the big monitors that we have. One thing that we want to do in the future with this is to have actual user recognition, so each user gets one face [inaudible] that space. And it's not going to be shown on a screen like this; it's going to be projected onto a big place. We will probably need to use more than one camera, because I don't like the idea of the faces crowding together. Now I have a snapshot here showing how the faces look with the eyes closed, fading away.
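Two mechanisms carry this piece: detecting a blink from how open the eyes are, and fading each captured snapshot out over time. A minimal sketch of both, assuming some face tracker reports a normalized eye-openness value; the thresholds and the 60-second lifetime are illustrative guesses, not values from the artwork.

```python
# Sketch of Traces' two mechanisms: (1) blink detection from an eye-openness
# signal using simple hysteresis, (2) per-snapshot opacity that decays with
# time. The openness signal and all constants are assumptions.
import time

BLINK_CLOSED, BLINK_OPEN = 0.2, 0.5  # hysteresis thresholds (illustrative)

class BlinkDetector:
    def __init__(self) -> None:
        self.eyes_closed = False

    def update(self, openness: float) -> bool:
        """Return True exactly once per blink, on the closing transition."""
        if not self.eyes_closed and openness < BLINK_CLOSED:
            self.eyes_closed = True
            return True
        if self.eyes_closed and openness > BLINK_OPEN:
            self.eyes_closed = False
        return False

def trace_alpha(captured_at: float, now: float, lifetime_s: float = 60.0) -> float:
    """Opacity of a projected face: 1.0 when captured, fading linearly to 0."""
    return max(0.0, 1.0 - (now - captured_at) / lifetime_s)

detector = BlinkDetector()
for openness in [0.8, 0.6, 0.1, 0.1, 0.7]:  # synthetic signal with one blink
    if detector.update(openness):
        print("blink detected: capture snapshot, alpha =",
              trace_alpha(time.time(), time.time()))
```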
But with this communication [inaudible], a natural thing is to ask how two users can communicate. How can we have this kind of nonverbal, face-to-face, playful communication between two users? So we started thinking: okay, how can we use vibrating wires to communicate things? We can use tactile feedback from the system to one user and try to send some information back. So when we thought about [inaudible], one of the things that we wanted to do, which we did not implement but have everything ready for, is, for example, having some vibrating wires on the joints that react to music. So, for example, a deaf person could dance, or could be helped to dance, or somebody who wants to learn to dance could be taught how to dance using that. But after we had these vibrating joints, we said, okay, we can communicate things. So we did another prototype, for which I have another video.
>> Tomás Laurenzo [video]: What we did here [inaudible] to [inaudible] motors. So, let's try it.
>> Qin Cai: Okay. The pump has some power [inaudible]. When he looks in this direction his motor vibrates, and when he looks in that direction, this one vibrates. [inaudible] this thing is vibrating. You can see from the [inaudible], and then he's looking at that. I can feel which direction he's looking in through these two motors. [inaudible] your gaze.
>> Tomás Laurenzo: So again, here we are communicating where one user is looking through some [inaudible] feedback. But then we said, okay, we have music that can turn into vibration, or gaze that can turn into vibration; what happens if the things we're sensing can turn into music? So we did the other thing, which was to track the face and create some sounds.
[video]
>> Tomás Laurenzo: This is one of the first videos I did, so I talk a lot in it. But I kind of like it, because it explains the system; we worked a lot on how to use our system within professional sound-making software, which was kind of a hard [inaudible].
>> Tomás Laurenzo [video]: So you can see, if we are here in this mode, in my software, we can map the head onto some parameters.
>> Tomás Laurenzo: So you see here that the software is reacting to the user's head position and rotation.
[video]
>> Tomás Laurenzo: Of course it also works [inaudible]. I imagine it was quite interesting for
people outside of my office seeing one [inaudible].
>> Tomás Laurenzo [video]: The second mode is to show some trigger samples, also [inaudible]. These samples can be looped or not, can be on time or not, just with the note-ons. Okay, let's just [inaudible]; it's going to trigger a new something. Yeah, that will do. Thank you for watching.
>> Tomás Laurenzo: Yeah. I actually did it so that the kids could play with it, but it did not work.
>> Qin Cai: It did scare them off.
>> Tomás Laurenzo: And that was kind of a proof of concept of manipulating sound in real time, using the face as an input controller. But it can also be more musical, or kind of musical, kind of more of a real controlling system. So I created this prototype.
>> Tomás Laurenzo [video]: If I roll or rotate my head like this, I can choose one note of the A minor pentatonic scale, and if I roll my head like this, I can choose a different octave. So then, when I open my mouth, it will trigger notes out of my instruments. So let's turn it on.
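As a concrete illustration of that mapping, here is a small sketch in Python: head roll picks one of the five degrees of A minor pentatonic, head pitch picks the octave, and an open mouth triggers the note. The angle ranges, thresholds, and base octave are guesses; the output is just a MIDI note number.

```python
# Sketch of the head-to-note mapping described in the video. All ranges and
# thresholds are illustrative assumptions, not the prototype's actual values.
from typing import Optional

A_MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets from A: A, C, D, E, G

def head_to_note(roll_deg: float, pitch_deg: float) -> int:
    # Map roll in [-30, 30] degrees onto the five scale degrees.
    clamped = max(-30.0, min(30.0, roll_deg))
    degree = min(4, int((clamped + 30.0) / 60.0 * 5))
    # Map head pitch onto one of three octaves.
    octave = 0 if pitch_deg < -10 else (2 if pitch_deg > 10 else 1)
    return 57 + 12 * octave + A_MINOR_PENTATONIC[degree]  # 57 = MIDI A3

def maybe_trigger(roll_deg: float, pitch_deg: float, mouth_open: bool) -> Optional[int]:
    """Return a MIDI note to play when the mouth opens, else None."""
    return head_to_note(roll_deg, pitch_deg) if mouth_open else None

print(maybe_trigger(roll_deg=20.0, pitch_deg=0.0, mouth_open=True))  # 79 = G5
```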
>> Tomás Laurenzo: And the video is, again, not really working.
>> Tomás Laurenzo [video]: So this, [inaudible] in my hands, allows me to play some basic tunes, some basic melodies, such as [inaudible]. [inaudible] sometimes has problems following whether the mouth is open or not, especially when the head is rotated. Let's see how it works. This kind of hands-free interface allows me to [inaudible] and try to make melodies with my head while I'm playing the [inaudible] at the same time. So let's try that. I'm going to switch [inaudible] off; I'm going to turn on another [inaudible]. Let's see how it goes. It's not going to go very well.
>> Tomás Laurenzo: The video is not working very well. Let's stop it.
>>: [inaudible] PowerPoint?
>> Tomás Laurenzo: Yes, I can do that. I don't know; it's kind of, I really like it, actually.
>>: [inaudible].
>> Tomás Laurenzo: No, that's one good thing: this was with absolutely no practice at all. It was the first take. I mean, it's very, very, very bad music, and very simple; it's this A minor pentatonic scale, so they're like the blues notes. So A, E, and D, or F; I don't remember. But anyway, with basic musical skills it's immediate how to play this. And it's as immediate as it is useless; it's like a toy. Yeah, of course.
So this brings us to our last, but not least, prototype, which was trying to make everything come together, of sorts. So this is what we are working on, and will probably keep working on in the future: okay, from the face, go to vibration, go to music. And also here is what we call the virtual choir, which takes this idea of a mirror where you're always somebody else, but replicates it into several different faces. So we built a matrix of nine faces: one is the real mirror in the center, and then it's always somebody else in the other eight. And when you open your mouth, those other eight persons create sound. And that sound is spatialized; Qin has had to hear me trying to say that word for two months already. Anyway, we do that with sound, and again, it is nonverbal communication, because we designed a vibrating vest of sorts. So it's the facial controller and the spatialized sound. So here's just a screenshot of my face coming back with different faces from the database. This is one [inaudible], that's a really bad picture, with the monitor with the faces there, and all this sound system here doing the spatializing. Yeah. And this is Qin. And this is our vibrating vest. I should've brought it here; I was going to, but I forgot. So it's a backpack with 12 motors that can vibrate, and we can control them; they react to sound, but you can also control them with head rotation and gaze.
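The piece uses Qin's multichannel spatializer, which is not documented here; as a toy stand-in for the idea of placing each face's voice in space, the sketch below does equal-power stereo panning driven by which column of the 3x3 grid a face occupies. The grid-to-pan mapping is an assumption for illustration.

```python
# Toy stand-in for the spatialized choir: pan each face's voice left/right
# according to its column in the 3x3 grid (0 = left, 2 = right), using
# equal-power panning so loudness stays roughly constant across the sweep.
import math

def pan_gains(column: int) -> tuple:
    """Equal-power (left, right) gains for a face in grid column 0, 1, or 2."""
    pan = column / 2.0                 # 0.0 = hard left .. 1.0 = hard right
    angle = pan * math.pi / 2.0
    return math.cos(angle), math.sin(angle)

for col in range(3):
    left, right = pan_gains(col)
    print(f"column {col}: L={left:.2f} R={right:.2f}")
```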
>>: Are you going to try to get that through security?
>> Tomás Laurenzo: I met an artist in Japan this summer; he's a media artist and he works with these kinds of things. So he went through security here in the US, and his bag was full of these kinds of things. So he got stopped, and then he went through all the scanners and everything, and then, when he was finally waiting for his bag, his bag had been taken to a room and exploded. So these things are...
>> Qin Cai: [inaudible] really explodes [inaudible].
>> Tomás Laurenzo: No, I mean, the security personnel put it there and bombed it. They made it explode. Just in case.
>> Qin Cai: I was talking about this vest.
>> Tomás Laurenzo: I'm saying, imagine going through security in Europe with this backpack. So this is a close-up of the vest with the motors. And this is a video of me doing more singing things.
>> Qin Cai [video]: Yes. Your face sounds good.
>> Tomás Laurenzo: Okay. I also have those videos working there, in a real, better way. So this is what we ended up doing: from the face, communicating the face, and what we can estimate, or think we can estimate, about the internal state of the user or the performer, or the desires of the performer, into several things. We could map it into things, or into the space, or into the user himself, or into different users, or into music, or have the user react to the music in this kind of self-reflecting system. The vest vibrates, and we've been trying different ways of having it react. We still don't know, which is kind of interesting, which will be the best, and we still don't know what it will be best for, even after we know it's the best. We only know that we are trying to communicate something with this kind of big system. We still don't have a specific purpose for it; it's an interesting [inaudible], it's an interesting performance if you see somebody using it, but we still need to reflect and to think about what it points to. But what we did: we created some different patterns, so that, for example, when the user looks up and to the right, the motors vibrate, say by starting here and creating that pattern. They can also react to music, to different frequencies, with different intensities; so this motor can be associated with high-pitched sounds, and then, if the user is looking this way and there's a high-pitched sound, the pattern will change depending on the sound.
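The frequency-to-motor idea can be made concrete with a short sketch: split an audio frame's spectrum into one band per motor and use each band's energy as that motor's vibration intensity, twelve bands to match the vest. The band layout and normalization are assumptions, not the system's actual mapping.

```python
# Sketch of "motors react to frequency bands": one spectral band per motor,
# band energy -> vibration intensity. Twelve motors to match the vest; the
# equal-width band split and peak normalization are illustrative assumptions.
import numpy as np

NUM_MOTORS = 12

def motor_intensities(frame: np.ndarray) -> np.ndarray:
    """frame: mono samples in [-1, 1]. Returns 12 intensities in [0, 1]."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, NUM_MOTORS)   # band 0 = lowest pitches
    energy = np.array([band.mean() for band in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy

# Synthetic check: a pure tone should mostly drive the motor whose band holds it.
t = np.arange(2048) / 44100.0
print(np.round(motor_intensities(np.sin(2 * np.pi * 8000.0 * t)), 2))
```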
So what are we going to do next with this work? Or at least Qin, because tomorrow's my last day. Okay, we have a TechFest presentation in March or April, [inaudible], so we're going to be showing the last version, or the next version, of this virtual choir. The virtual choir is going to have a director: we've been thinking about having the user point with his or her hand to one of the faces and making that face react in a different way; for example, make it louder, or make it brighter, or make it something, or have that face control the vibrating motors. I also want to try a kind of sequencer: you can point to one face, use your head or your other hand or something to store a melody, and then have that face repeat that melody. So you can start creating different patterns of music, and then you can have a gesture to start or stop it. So it's kind of a musical instrument, a more elaborate musical instrument, and you can use it in real time.
We've been told that some of the things we've been doing could be brought into products, especially for the Xbox, having the users play with some things. We've also been working with slit-scan; I don't know if you know what slit-scan is, but it's a way of reproducing videos, or sequences of pictures, where you show different pictures at different time stamps, so it creates some kind of wavy effect (there's a small sketch of the idea after this paragraph), and we want to see if we can use that with Traces and see what happens there. We also want to try remote communication, not being in the same place. We have an automated system for the vibration motors using our Arduinos, which is wireless, and it works, so we might switch to that. Also bidirectional communication: not only one user saying things and another user receiving them, but something more bidirectional. And of course, all these things are really fun to do, and really fun to use. So you use it for fun, and you say, what if we do this, or what if we do that? So the ideas are plentiful, and it will be fun to try them out.
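Slit-scan is easy to show in code: each column of the output image is taken from a different frame in the recent past, which is what produces the wavy effect. A minimal sketch, assuming OpenCV is available and using a placeholder video path:

```python
# Minimal slit-scan sketch: column x of the output comes from a frame whose
# age grows with x, so time is smeared across the image. Assumes OpenCV;
# "input_video.mp4" is a placeholder path.
import cv2
import numpy as np

def slit_scan(path: str, max_frames: int = 200) -> np.ndarray:
    cap = cv2.VideoCapture(path)
    frames = []
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    if not frames:
        raise ValueError(f"could not read any frames from {path}")
    _, width = frames[0].shape[:2]
    out = np.zeros_like(frames[0])
    for x in range(width):
        t = int(x / width * (len(frames) - 1))  # later columns, later frames
        out[:, x] = frames[t][:, x]
    return out

cv2.imwrite("slitscan.png", slit_scan("input_video.mp4"))
```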
And for me, I want to come back for a second to this, because for me all these things belong to my art practice. And I've been thinking, after talking with Andy the other day, I've been thinking more about why I'm at MSR and why I work at the engineering school. I've been thinking about this for a while: I worked at that school for 10 years, and before that I was an engineering student, so I've been thinking about what I am actually trying to do, what I have been doing for the last, I don't know, 15, 20 years. And I still think that it makes a lot of sense, from a philosophical point of view and from a practical point of view. From a philosophical point of view: I'm not going to go into what art is. I really like what Picasso said; they asked him once, what is art? And he said, I don't know, but if I knew, I would not tell you, which is kind of a reasonable answer, because everybody knows and nobody knows, and everybody has his or her own conception, so it doesn't make any sense to discuss it. But it's interesting to think of ways of seeing art. And one way of seeing art is as the construction of meaning; it's the reflection, from the semiotic point of view, on what things can mean. If you don't have that, everything is meaningless; everything makes no sense at all. So to have somebody who tries to reflect on what's the meaning, or on what the things that we create, from a [inaudible] point of view, can or should mean, I think is really interesting. But also, and this is along the same lines as what [inaudible] was saying, he was saying it better, in the talk he gave here some days ago, and it goes with this idea of creative coding: it is a creative activity that immediately pushes the boundaries of the technological artifacts that we have. It only takes you a little bit of work to start thinking: it would be awesome if the software, if the hardware, could do this or could do that. And I think that's very important for any researcher, to try to push the boundaries of what you would expect from any technological artifact, be it software or hardware or both.
It would be awesome if the face tracker, [inaudible], it would be awesome if the face tracker could recognize persons, or if we could actually measure, I don't know, emotional states from facial expressions. Or, I don't know, my fiancée is from Lithuania, and if you read any travel guide about Lithuania, one of the things you read is: do not smile, because people don't smile there; if you smile at people, they think you are making fun of them or that you have mental problems. It would be interesting to have software, or an appliance, that helps you not to smile in Lithuania. Or what happens if you think about an appliance that only works when you smile? I don't know, your phone: say you're a person with emotional depression, with anger management problems; okay, why don't you buy a phone that only allows you to pick it up when you're smiling? We all know there are plenty of psychology research studies showing that if you smile a lot you actually get happier. So why not? I mean, it's immediate, after you start reflecting on what things can mean, to take that into, I don't know, a feedback loop of creating things.
So, to end, I would really like to thank Qin. It's been really fun to work with her these three months, and seeing you and everybody. All of you, world peace. And with Qin we created the MSR MediaLab; we have a sign for it; it's our workspace on the fourth floor. So, as Qin said before, you are all welcome to go up and see what we have been doing and to play with things. Yeah. And that's it. Thank you. Let's talk about things. I also have a website with some things I've been doing for the last decade or so; you're all welcome to take a look at them. What time is it? We have time to talk. You can see the videos or whatever. Any questions?
>> Qin Cai: Can you replay some of the videos?
>> Tomás Laurenzo: Yes. I have the videos.
>> Tomás Laurenzo [video]: So hello, this is my third video...
>>: [inaudible]?
>> Tomás Laurenzo: What? Sorry?
>>: I'm just curious how you interconnect the face tracker with your [inaudible]?
>> Tomás Laurenzo: Well, that's quite interesting, because, okay, to make sound, what I did was to send MIDI; it's Musical Instrument Digital Interface, a standard protocol to interconnect musical instruments. So every standard synthesizer or electronic instrument understands MIDI. So in my software we have a loopback MIDI connection; it's like a virtual cable. So my software outputs MIDI, and the thing that listens to the MIDI outputs the sound. But to connect it with this spatialized, I'm going to say 3-D, sound system of Qin's, that was quite an interesting problem, because my software outputs nine different MIDI channels; then the synthesizer listens to that and outputs nine different audio streams to virtual audio cables, which is third-party software that emulates an audio cable that goes out and in. And we also installed something called ASIO, which is also a third-party driver for real-time sound. That allows us to send nine different audio streams to Qin's software, which creates the different outputs. And then that is captured with a mike and fed into the vibration. So it took us, like, I don't know, it was not easy: Windows does not provide any kind of real-time audio system for more than two channels, so it depends on the driver of the sound card, or on some software, whether you can make it work. So here's how it actually looks, in the video.
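For reference, the first hop of that routing (one MIDI channel per face of the choir, sent through a loopback port) can be sketched in a few lines. This assumes the mido package and a virtual MIDI port such as loopMIDI on Windows; the port name is a placeholder, and the real software's messages are not documented here.

```python
# Sketch of the MIDI side of the routing: nine note-ons, one per choir face,
# each on its own MIDI channel, sent through a loopback port for a synth to
# pick up. Assumes mido plus a virtual port; the name below is a placeholder.
import time
import mido

PORT_NAME = "loopMIDI Port"  # placeholder; see mido.get_output_names() for real ones

with mido.open_output(PORT_NAME) as port:
    for face in range(9):  # MIDI channels 0..8, one per face
        port.send(mido.Message("note_on", channel=face, note=57 + face, velocity=90))
    time.sleep(0.5)
    for face in range(9):
        port.send(mido.Message("note_off", channel=face, note=57 + face))
```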
>> Tomás Laurenzo [video]: So this, without this in my hands, allows me to play sound. This kind of hands-free interface allows me to, for example, play my [inaudible] and try to make a melody with my head while I'm playing the [inaudible] at the same time. So let's try that. I'm going to switch the metronome off and turn on another CD [inaudible]. Let's see how it goes. It's not going to go very well. So, yeah, I'm sorry about the bad music, but that was my small video. Enjoy it, and thanks for watching.
>>: That was very good. I liked that. You should post it on YouTube.
>> Tomás Laurenzo: I've been told that I'm not allowed to, because everything I thought of belongs to Microsoft.
>>: Can I ask a question?
>> Tomás Laurenzo: Of course.
>>: So basically, we are designed to use this part to [inaudible] the work rather than working. But your work is trying to use this part to work, and my question is, is this a natural way, or...
>> Tomás Laurenzo: It is and it's not. For example, this way of making music is absolutely not natural. I mean, you all like doing this. One thing: while testing my different prototypes, I got dizzy a lot of times by taking the blink detector and then shaking my head to see if it worked in different conditions. So it's really not natural to use. But at the same time it is, because you're making a facial expression that is telling me that you're actually paying attention to what I just did. You're communicating without saying anything.
>>: Are you [inaudible] a space [inaudible]?
>> Tomás Laurenzo: Yeah, you're sending. You're not shaking your head dumbly; you are sending something without actually talking. So that was pretty much the idea, to take that idea. And also, as a user, it's really interesting that when you open your mouth it makes sounds, which is something you're used to, but you're not actually making them. It's really weird, because it feels like you are singing, but you're singing with a voice that comes from, I don't know... it's natural and it's not. And that happens a lot when you work on these things: you want to do things that feel natural from an HCI point of view, you want to take natural interaction and natural user interfaces, but you also want to do things that are not natural, that are disruptive, that you try to use in a different way. So it works both ways.
>>: Do you think explicitly about how best to accommodate the limitations of the technologies? So, for example, when you were singing there, you arranged it so that [inaudible] octaves, so when the thing made a mistake on the octave it actually didn't sound bad; like, if you had played some other scale it would sound terrible. And there are other reasons, and I wonder if you sort of consciously tried to...
>> Tomás Laurenzo: Yes and no. This has many aspects. In the musical part, it was on purpose and it was by design, because it's A minor pentatonic, so everything was going to sound all right; you can shake your head without any kind of meaning and it's going to sound pretty much all right. So that was taken into account. But also, when you create an interactive art piece, you want to take the border cases into account, and to know that the user is going to have like a [inaudible] or artistic experience all the time. But also, you're extremely open to serendipity; you're simply open to see what happens. There's one author who calls these happy mistakes. So it doesn't really matter, from one point of view, if it does not sound right. And that happens a lot. In my classes, I teach a course on HCI, and there's this one class where I [inaudible] asking the students: what's better as a musical instrument, a CD player or a violin? Okay, discuss. I don't know; it's a rich discussion. The CD player does not allow you to make any mistake, and if you take a violin and you are not a violinist, it's going to sound terrible. So if you're creating a tool for expression, you are always playing with this balance between giving freedom to the user and trying to help him in his performance. And there's a third aspect: I do come from Uruguay, so I am extremely used to working with very limited hardware, and so the idea of taking something like that [inaudible], which is kind of a basic tool, and trying to make the most out of it systematically is pretty much the way I'm used to working. I think it's a good way to work, actually.
>>: Do you see any possibility there for maybe [inaudible] something around, like, when I think of anything that does [inaudible], or anything that does, like, translating a language, because there's a lot of assumption that, like, you can't really understand the tone or anything like that; in terms of a way of capturing the person's emotion as they're writing or something like that, and then you could display that in the output. I don't know.
>> Tomás Laurenzo: That would be awesome. If there's a way, yeah, perhaps. I'm sure there
is. I have no idea how to do it. But, yeah. Let’s do it.
>>: So, converting a face, [inaudible], motions to music; how about converting code, for instance, to music?
>> Tomás Laurenzo: Oh, yes. It's...
>>: That would be interesting.
>> Tomás Laurenzo: It is. Algorithmic composition is a very rich area; there's a bunch of people working on that.
>>: [inaudible] of people actually converting [inaudible].
>> Tomás Laurenzo: Yeah, and I can show you some awesome things that have been done.
>>: That sounds cool.
>> Tomás Laurenzo: Thank you.