Zhengyou Zhang: Okay. So let's get started. I'm very pleased to introduce Mar Gonzalez. She has a
very interesting background. She was trained as a neuroscientist and she did a Ph.D. in immersive
virtual reality. And she has visited many places: MIT, Tsinghua University, and MSR. She did a
wonderful internship with us two years ago, and she will talk about some human behavior and also
about designing some tests for immersive [indiscernible].
I will be monitoring the online questions. So for people online, you can ask questions through the
web page and I will ask them for you.
Okay. Mar, please.
Dr. Mar Gonzalez-Franco: Thank you. Thank you.
Okay. First of all I want to say wow. It's very exciting to be here today. I have prepared a talk that will
mostly show [indiscernible] instances of how we can alter human perception and behavior through
immersive virtual reality.
In order to do so, you'll see, I'll introduce experiments. So we will go through that. Most of the work
that I present here is directly related to my Ph.D. with [indiscernible] at the University of
Barcelona and MIT, as well as my post-doc at the Virtual Environments and Computer Graphics Group at
University College London. You'll see that the research has a strong component of computer science
and neuroscience, which are my two backgrounds. And let's go into it.
So I want to start by presenting the two main methods that I've used to generate immersive virtual
reality during my research. The first one is CAVE systems, combined with real-time computer
graphics, of course. The CAVE is a room in which the walls are screens, and once the participant
enters, all the projections on the screens are generated from his point of view, so everything is rendered
from his position, in stereo. And it creates the illusion that you have entered another space of
computer-generated reality.
The second approach for generating immersive virtual reality is using head-mounted displays. We
can render the virtual reality from the point of view of the participant using this technology. And it has
similar effects to using a CAVE. There are some differences; we will go through them later.
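The head-tracked rendering described here, where each wall's image is computed from the viewer's position, is essentially an off-axis (asymmetric) perspective projection. Below is a minimal sketch for a single axis-aligned wall, with hypothetical coordinates; a real CAVE handles arbitrarily oriented walls with a generalized projection, so treat this only as an illustration of the idea.

```python
def off_axis_frustum(eye, wall_left, wall_right, wall_bottom, wall_top, wall_z, near):
    """Frustum bounds at the near plane for one axis-aligned CAVE wall.

    The wall is the rectangle x in [wall_left, wall_right],
    y in [wall_bottom, wall_top], lying at depth z = wall_z;
    the viewer at `eye` looks toward the wall along -z.
    """
    d = eye[2] - wall_z        # eye-to-wall distance
    scale = near / d           # similar triangles: map wall edges onto the near plane
    left = (wall_left - eye[0]) * scale
    right = (wall_right - eye[0]) * scale
    bottom = (wall_bottom - eye[1]) * scale
    top = (wall_top - eye[1]) * scale
    return left, right, bottom, top

# Head centered: the frustum is symmetric
print(off_axis_frustum((0.0, 0.0, 2.0), -1.0, 1.0, -1.0, 1.0, 0.0, 0.1))
# Head shifted right: the frustum skews, keeping the image glued to the wall
print(off_axis_frustum((0.5, 0.0, 2.0), -1.0, 1.0, -1.0, 1.0, 0.0, 0.1))
```

The returned bounds are exactly what a `glFrustum`-style projection call consumes, which is why moving the head skews the image on each wall while the virtual scene appears to stay still.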
And let's focus on the common effects. One of the common effects that I find fascinating is that when
people enter an immersive virtual reality, they experience what is known as the presence illusion. It's
usually described as a combination of two factors: the place illusion, the illusion that you are
somewhere else, and the plausibility illusion, which is the illusion that the events that are occurring are
really happening. The latter is important because when participants are given a plausible situation,
they behave realistically.
A clear example of this realistic behavior can be seen in this experiment by Meehan et al. at
SIGGRAPH 2002. They asked participants to cross a room with a cliff. You can see the real scenario here
and in virtual reality. And it's interesting to see that all the participants went along the wall, even though
they knew there was no real cliff. Not only that, they reported feeling like they could fall, and they
underwent strong psychological and physiological changes such as heart rate increases. So we do find
realistic behaviors and reactions inside the virtual environment.
And these responses are also realistic when you interact with an avatar. For example, if you enter an
immersive virtual reality and the avatar that you're interacting with sneezes, it's very likely that
you're going to say "bless you." We just feel it's impolite not to do so. This realistic behavior can
potentially be exploited for sociological or behavioral experiments using immersive virtual reality.
And precisely at UCL, in the CAVE, we carried out a reproduction of the [indiscernible] Milgram
experiment on obedience to authority. This is an experiment where people are asked to give increasing
electroshocks to a learner when the answer is wrong. Milgram did this with real people.
Milgram's findings were shocking. He found people would provide life-threatening shocks under
the command of an authority figure. We have to think of these studies against the background of the
question of how people could behave the way they did during the Second World War. They were trying
to figure out how people can follow authority this way.
The shocks were not real, of course; they were simulated, but participants didn't know that. And
there was an ethical problem there, because there was a deception scheme, which is essentially banned
by ethics committees at the moment. But in virtual reality we can do it anyway, because there is no
deception: the participants know that they are interacting with a synthetic avatar.
So I want to show you a video of this real experiment.
[Indiscernible]
So we recorded the motion capture and the audio from a real actor. The participant has to read aloud all
these words. The first one has to be matched with one of the four choices below, and the only correct
one is the one that is highlighted.
[Indiscernible]
So we start with pretty soft shocks, and they increase. It's not an easy experiment to run, because
people don't really like to do it.
[Indiscernible] "Let me out."
So we see at some point the avatar starts trying to fight the experimenter.
[Indiscernible] "I don't want to continue. Don't listen to her."
It's kind of funny because I'm also triggering the avatar.
[Indiscernible] "That's incorrect. The correct answer was [indiscernible]." "You have no right to keep
me here. Let me out."
So let's stop it here. But it gets worse and worse and worse, and at the end the avatar doesn't respond
anymore.
So let's go into the actual experiment. We ran twenty participants in the condition where they
interact with the avatar, and twenty control participants that did not interact with the avatar; they just
read the words continuously, given the choices.
We found the same results as Milgram: people follow the authority, the authority being me in this case,
the experimenter. They are given a task and they follow it. But we do find something interesting,
which is that their anxiety level increases the more they feel this is realistic. This is the somatic
perception questionnaire that participants complete before and after the experiment, reporting things
like whether they felt a harder, stronger heart rate. So it's about what people perceive: how much do
you perceive your heart rate beating?
After the experiment, you take the difference pre/post. And we find that people who believed in the
realistic environment, the plausibility illusion, had a stronger perception. Not only that, we also found
that participants tried to help the avatar. So here we have the experimental condition and the control
condition. And this is the sound pressure level of the words that they read aloud. This is the correct
word and these are the other options. They say the correct word significantly louder, as if trying to
help the avatar, even though many of them don't actually notice it. We wanted to rule out that this was
only because the word was in capital letters; that's why we ran the control condition, and it doesn't
happen in the control condition.
So we find this realistic behavior, and we find that we can use immersive virtual reality to study and
change behavior in people. Not only that, we can also use immersive virtual reality and the advantages
of head-mounted displays not only to make people feel like they are in a new location, but also that
their body has been substituted by a virtual body. This process is sometimes referred to as
embodiment.
So let's have a look at how we can substitute this real body for a virtual body. There is more
information on how to do this technically here. Basically, when a virtual body is co-located with the
participant's body and moves just like the participant does, it generates the ownership illusion. Let's
see the video. You can see there is a motion capture system working in real time, and this is what he's
seeing. He can basically see as he moves around.
So there is quite a bit of technology working here in real time, as I was saying: the head-mounted
display, the motion capture. And it creates the illusion of being embodied in a different body.
And I want to go deeper into the underlying neuroscience factors that allow a participant to
really feel this body ownership illusion. There are multiple studies on [indiscernible]
perception without any technology at all, only mannequins or rubber hands. And it's possible for
people to perceive these external body parts as their own under the proper multisensory stimulation.
This is synchronous [indiscernible]. So I'm going to show you a video so you really understand what
I'm saying.
So this is the participant. He puts his hand behind a curtain and we put the rubber hand just in front of
him, so he only really sees the rubber hand. But he feels synchronous touch on both hands. So in the
end he thinks this is his hand, because he's seeing the touch and feeling the touch.
And there are ways of measuring this illusion. One of them is very drastic: there is a response to harm,
and this is not fake. This was an actual participant; he really felt the ownership illusion. And the
stimulation lasted only as long as the video lasts. In less than a minute you already feel this hand being
your hand.
Another interesting way of measuring this illusion is by proprioceptive drift. If you ask people to close
their eyes and point to where their hand is, many don't point to where their actual hand is but
somewhere in between their hand and the rubber hand, which is something that doesn't happen when
the stimulation is asynchronous. So there is a proprioceptive drift. Some people have found a
temperature drop, as if you were rejecting your own hand. But, well, basically, there are underlying
proprioceptive multisensory integration ways of generating the illusion. And of course, the illusion can
also be generated over the whole body.
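Proprioceptive drift is typically quantified as how far the blind pointing response has shifted from the real hand toward the rubber hand. A toy calculation with hypothetical pointing data (the positions and trial counts below are made up for illustration):

```python
def proprioceptive_drift(pointed_x, real_x, rubber_x):
    """Fraction of the real-to-rubber distance covered by the pointing response.

    0.0 means pointing at the real hand; 1.0 means pointing at the rubber hand.
    """
    return (pointed_x - real_x) / (rubber_x - real_x)

def mean_drift(trials, real_x, rubber_x):
    return sum(proprioceptive_drift(p, real_x, rubber_x) for p in trials) / len(trials)

# Hypothetical pointing positions in cm (real hand at 0, rubber hand at 15)
synchronous = [6.0, 7.5, 5.4]    # pointing drifts partway toward the rubber hand
asynchronous = [0.9, 1.5, 0.6]   # pointing stays close to the real hand

print(mean_drift(synchronous, 0.0, 15.0))    # ~0.42
print(mean_drift(asynchronous, 0.0, 15.0))   # ~0.07
```

Comparing the synchronous mean against the asynchronous one is the usual statistic: a reliably larger drift in the synchronous condition is taken as behavioral evidence of the illusion.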
I ran an experiment, one of my earlier scientific contributions, to show whether this illusion can be
enhanced using real-time virtual mirrors: you have a mirror in which you can see your body. And here
we don't use tactile stimulation; we don't touch. You are self-stimulating through your motor actions,
so you see you're moving and the body moves.
So I'm going to show you the video. Here is the person moving around. We can see also that we don't
need to have the whole body tracked. Sometimes we can use inverse kinematics to fill in the rest of the
body movement.
This mirror was one of my first developments at the lab. I was a research assistant, and I was helping
the researchers in the laboratory prepare [indiscernible] graphics. And here you can see that all of a
sudden there is a threat in this case, something that comes toward you, and people try to avoid it, but
only in the synchronous condition.
So let's see: they don't want to get in the middle, and that's what we found. You'll see that mirror
repeated over many other experiments that people have done in the lab later, because it really enhances
the illusion. What is interesting about this technology is that we can modify the virtual body in almost
impossible ways. For example, enlarging your arm, as in this experiment by Kilteni: people would
touch a surface and the virtual hand would extend as they were touching. People wouldn't notice that
the hand was super long until it was up to some two meters, which is super big.
In other experiments, I have shown that people accept larger bellies, and then they have a
proprioceptive drift: when you ask them where their belly is, they think it's larger. You can change the
age of the participant. You can change gender or race as well, so you can embody people in avatars of
a different race and so on.
And, interestingly, using this multisensory stimulation we can also embody people in robots. Here I'm
going to show you this work, part of the Future and Emerging Technologies project [indiscernible],
which aimed at enabling strongly paralyzed people to reenter the physical world.
The electrodes measure brain impulses, enabling a person to control the robot's actions without moving
their own limbs. The idea is to enable severely disabled people to enter the world via a real-life avatar.
"The actions are associated with squares that flash at different frequencies, and the operator can
trigger an action by simply focusing attention on one of the squares. The frequency of the flashing will
be reproduced in the brain's visual cortex, so we can detect which square the operator is looking at."
[Foreign language]
Giving an [indiscernible].
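The interface in the video relies on steady-state visually evoked potentials: the flicker frequency of the attended square reappears in the occipital EEG. A simplified detector might just pick whichever candidate frequency carries the most spectral power, as sketched below on a synthetic signal; real systems add harmonics, spatial filters, and artifact rejection, so this is only an illustration of the principle.

```python
import numpy as np

def detect_ssvep(signal, fs, candidate_freqs):
    """Return the candidate flicker frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(powers))]

# Synthetic occipital trace: a 12 Hz flicker response buried in noise
fs = 256
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)

print(detect_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # -> 12.0
```

With four seconds of data at 256 Hz the FFT bins fall exactly on the candidate frequencies (0.25 Hz resolution), so the attended square's frequency stands out clearly above the noise floor.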
We presented this to the Vice-President of the European Commission [indiscernible] in 2013. It was
kind of an interesting hit. And you can read more about how this can be used in a social environment
in this paper.
So overall it seems we can substitute the body, and it also seems this field of self-body perception can
clearly benefit from using the technology. And there is research to be done with this virtual body. One
of the things we can study is thresholds: how far can you accept these radical changes, and which
changes are actually not accepted?
Another is to study the implications of the body modifications. For example, in this experiment on
racial bias, [indiscernible] would put Caucasian participants into black avatars, and that would reduce
their racial bias towards black people. So we can find attitudinal changes and behavioral changes.
And it seems very interesting also to study how this can really happen: how is the body represented in
the brain, and how do we manipulate it? How can it be that a virtual, computer-generated stimulus is
processed maybe as a real stimulus, and how can we prove that?
So the next three experiments I will present take more of a neuroscience perspective, and they are
directly related to my Ph.D. I'm going to show these three ideas along an axis. The first one is: how
can I believe this is my body, that ownership? We're going to test it and see what's going on in the
brain, and how I accept changes to the body. Then, I can control it, so I have agency over it. What
happens if, for example, you're controlling a robot and all of a sudden somebody hacks into it and
moves the robot in another direction? How are you going to perceive that error?
And the last one is: what if it looks just like me? Is that going to affect how much I feel embodied in
it?
To explore all of this, because we want to see how the brain interprets the virtual stimuli, we turn to
EEG. Many of you already know EEG has great temporal resolution; we're talking milliseconds. But
it has a bad signal-to-noise ratio, like I cannot see anything here, and also fairly bad spatial resolution;
I mean, how many neurons are under one of these electrodes? But some of these limitations can be
overcome using [indiscernible] approaches, for example using event-related potentials. When we
know the exact moment when we are presenting a stimulus, we can measure how this stimulus is
processed right after. If we present the same stimulus several times, we can average and remove noise
and unrelated activity. So when we present this stimulus 80 times, we find this response with these
components: a component at 100 milliseconds, a positive one, a negative one, et cetera. And this
happens in different parts of the brain, and different stimuli will produce different signals. So here is
the typical oddball paradigm, where odd stimuli produce a stronger response.
So we just go with that, and keep in mind these are microvolts, so you blink and they are drowned out.
We have to be careful with that.
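The averaging just described can be sketched in a few lines: cut the continuous recording into stimulus-locked epochs and average them, so random noise cancels while the evoked component survives. The numbers below are synthetic (a 5 uV deflection in 10 uV noise is a made-up example, not data from the talk):

```python
import numpy as np

def erp_average(eeg, event_samples, fs, pre=0.1, post=0.5):
    """Average stimulus-locked epochs; noise cancels, the evoked response remains."""
    pre_n, post_n = int(pre * fs), int(post * fs)
    epochs = [eeg[s - pre_n:s + post_n] for s in event_samples
              if s - pre_n >= 0 and s + post_n <= len(eeg)]
    return np.mean(epochs, axis=0)

fs = 250
rng = np.random.default_rng(0)
eeg = 10.0 * rng.standard_normal(fs * 120)       # ~10 uV background noise
events = np.arange(fs, fs * 110, fs)             # one stimulus per second

# Evoked bump: Gaussian peaking 100 ms after each stimulus, 5 uV amplitude
component = 5.0 * np.exp(-0.5 * ((np.arange(int(0.5 * fs)) / fs - 0.1) / 0.02) ** 2)
for s in events:
    eeg[s:s + component.size] += component

erp = erp_average(eeg, events, fs)
pre_n = int(0.1 * fs)
peak = erp[pre_n + int(0.1 * fs)]                # sample at +100 ms post-stimulus
print(round(float(peak), 1))                     # close to 5 uV; noise shrinks ~sqrt(trials)
```

With about a hundred trials the residual noise on the average drops by roughly a factor of ten, which is why the small component becomes visible at all.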
I will focus now on the first experiment, on this aspect of body substitution. If people really believe
that this is their body, then if somebody threatens that body, they should have a realistic response. We
basically have two conditions: the person is embodied in this avatar and, when looking down, sees
exactly the avatar; we present these two stimuli here, and we measure EEG, and we measure whether
they move their hand. We ask them not to move their hand. We really want to see what happens in the
brain before any conscious decision, like deciding to move, is going on.
And I have a little video here so you can see. This is the first-person perspective. The participants can
look around and see a different environment. When they look down, their body has been changed.
And then we have these two conditions.
And we repeat it many times and see what happens. What happens is that we find activity in the motor
cortex; there is a motor activation both in the time and frequency domains. We're going to focus on the
readiness potential to start with. There is a readiness potential starting about 250 milliseconds after the
stimulus is presented.
I want to explain a bit more what this readiness potential is. The readiness potential is the difference in
voltage between the two hemispheres of the motor cortex, and from [indiscernible] studies on free will,
we know that this potential happens during movement preparation, before the conscious urge to act.
Benjamin Libet found that we actually inhibit motor actions rather than create them, and these are the
typical responses of a motor action that is being prepared.
Furthermore, when we look only at the hemisphere contralateral to the right hand (as you know, if you
move the right hand you find the activity in the left hemisphere), we find [indiscernible]
synchronization. [Indiscernible] is between 10 and 12 hertz and is related to motor action. This
happens only in the hand condition, one second after the stimulus.
And here we can see the baseline versus the response in a one-second short-time [indiscernible]; we
see clearly there is a motor preparation here, similar to the motor [indiscernible] if you have worked
with brain-computer interfaces.
Not only that, we find that all these voltages are correlated with the strength of the ownership illusion.
The more you feel the illusion, the stronger you react to the attacks. Refer to this paper for more
details.
So we find that if we are really in the virtual body when we attack it, the motor cortex prepares for a
hand movement, and the level of ownership modulates this activation. Not only that, these
[indiscernible] amplitudes play a role in pain perception. I haven't talked about pain perception, but
you can check the paper if you're more interested in that. Basically, we can say that people are
accepting these virtual bodies at a very unconscious level; not only that, they are preparing a very
realistic response: if somebody attacks me, I would like to move away from there.
Okay. So let's move to the next experiment, in which we know the body has been accepted and we
know we can move it. How does this control really happen, and what if we generate external errors?
When we execute a motor action normally, we experience certainty about the ownership of the body
and the agency of the actions: if I move this hand, I know this hand is mine and the action of moving it
is mine.
However, when there is conflicting information, we undergo a break in agency. This is something that
rarely happens to healthy humans, but it exists in pathologies such as anarchic hand syndrome, where
people cannot inhibit certain motor actions. So one hand can go somewhere you don't want it to.
That is probably due to a disconnection between the contralateral hemisphere and the frontal lobe. But
we don't want to go into that here; we want to create another scenario where there are motor errors. So
we created this rapid decision-making task, an Eriksen flanker task. In this flanker task, you have to
follow what the arrow in the middle says, and there can be distracter arrows pointing in the other
direction, to make it more complicated, or in the same direction.
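Generating flanker stimuli is simple: pick a target direction for the center arrow and surround it with arrows that either match it (congruent) or oppose it (incongruent). A minimal sketch; the 50/50 congruency split and five-arrow layout are illustrative defaults, not the talk's exact parameters.

```python
import random

def make_flanker_trial(rng, p_congruent=0.5):
    """One Eriksen flanker stimulus; the correct response follows the CENTER arrow."""
    target = rng.choice("<>")
    congruent = rng.random() < p_congruent
    flank = target if congruent else ("<" if target == ">" else ">")
    stimulus = flank * 2 + target + flank * 2   # e.g. ">><>>" is incongruent
    return stimulus, target, congruent

rng = random.Random(7)
for _ in range(4):
    print(make_flanker_trial(rng))
```

On incongruent trials the distracter arrows prime the wrong response, which is what makes participants fast but error-prone, exactly the self-generated errors the experiment needs.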
So let's watch a video. Interestingly, when you're doing this experiment, of course people go very fast
and they make their own errors, self-generated errors: I had to go there but I went in the other
direction. But in virtual environments we can also generate external errors, in which the participant did
go in the right direction but the avatar went in the opposite direction.
So let's have a look at how these tasks really work. The participant is here, his body is substituted, and
the hand is tracked again with inverse kinematics. The task is going to start. [Indiscernible] one, okay.
Wrong. Actually, when you observe this, you also generate a similar response just from seeing
somebody make mistakes. And there went one of these externally generated errors. We will see
another one; I'll let you know when it happens. Every twenty trials we introduce an externally
generated error, in which, sorry, the person will go to this side and the avatar to the other side. And
you'll see it here now.
So we can see we have these two conditions, really: congruent movements and incongruent
movements. And we ask participants to go through 1,280 trials, which is about 50 minutes of this task.
What we find is that when people make their own mistake, like the arrow was pointing here and I went
in the other direction, there is an error-related activity happening 100 milliseconds after the motor
action has started, only for the real self-generated errors. This doesn't happen with the externally
generated errors. For the correct responses there are no differences; you see both are very similar. But
here in the posterior electrode, we find that for the externally generated errors there is a strong
negativity that happens much later, at about 400 milliseconds. Here we can see it more clearly: this is
the difference wave [indiscernible], so it's [indiscernible] and external errors [indiscernible]. You can
clearly see one is frontal, the other is parietal. Also, temporally they have a different extent. Let's look
at these two errors. The first error is well described in the literature; it's called error-related activity,
and it's maximal at the frontal electrodes. The other one we call the N400, and it only happens when
there is an agency disruption. When we look at the literature, we find similar errors for semantic
violations: if I said I was going to eat a house, you're going to have an N400; but if I say I'm going to
eat a pizza, it doesn't sound as strange. So we believe that there is a semantic violation of how you
perceive the world: you really thought the hand would go there and it's going in the other direction.
Both components are associated with different error-monitoring loops. I don't want to go too much
into the [indiscernible] model, but I want to go very quickly through this.
Basically, when you have a motor action, you create a motor command and you execute the movement.
What is happening in our brain is that we have several controls checking whether the movement was
the one I wanted to execute. So I create an efference copy of this movement and compare it to my
original intended state: where was I supposed to go? If there is an error here, I find this error-related
activity. This is the case where the arrow is pointing here and I'm going in the other direction.
But our brain has a second mechanism: once I produce a movement, I get my sensory input. This is the
reafferent feedback. I compare it to the copy that I had and the intended state, and if there is an error
here, we find this N400. These models are from Gallagher and Frith. So there are these two
error-monitoring mechanisms happening concurrently.
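The two loops just described can be caricatured in code: a fast comparison of the intended action against the efference copy of the issued command (the frontal error-related activity), and a later comparison of the command against the reafferent sensory feedback (the parietal N400-like response to external errors). This is only a toy illustration of the comparator idea, not the Gallagher and Frith model itself.

```python
def monitor_action(intended, efference_copy, sensory_feedback):
    """Two comparator loops, caricatured as string comparisons.

    Loop 1 (fast, frontal): did I issue the command I intended?
    Loop 2 (slow, parietal): did the world do what my command predicted?
    """
    self_error = intended != efference_copy              # I commanded the wrong move
    external_error = efference_copy != sensory_feedback  # the avatar disobeyed my command
    return {"self_generated_error": self_error,
            "externally_generated_error": external_error}

# Correct trial: everything matches
print(monitor_action("left", "left", "left"))
# Self-generated error: wrong command, faithfully executed
print(monitor_action("left", "right", "right"))
# Externally generated (hijacked) error: correct command, avatar went elsewhere
print(monitor_action("left", "left", "right"))
```

The hijacked case is the one that is nearly impossible to create in reality and easy to create in virtual reality, which is why the external loop could be probed here at all.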
And just to finish with this experiment, I want to show you that this N400 strength, where more
negative means a stronger agency disruption, is correlated with the sense of ownership of the virtual
body. The more you feel this is your virtual body, the stronger the break in agency is. So I just want to
say that we find two different kinds of errors: the brain recognizes internal errors and externally
generated errors as different. If somebody hijacks my robot, I'm going to notice it, and we find that
agency violation traced in this N400, with a correlation associated with the illusion of ownership. We
also support this theory of error-monitoring models, which had been proposed but was very hard to test
for this specific external error loop, because in reality it's very hard to produce the sensation that you're
going in one direction but the hand goes in the other, which in a virtual environment we can do.
So let's move to the last experiment. We've seen the body can be accepted unconsciously, and we can
control it, and if there is a break, we break agency over that body and kind of lose the ownership
illusion. But what if it looks like me? So we ran an experiment, because there is some research before
us about how people can experience ownership illusions over avatars of different races and genders
that share more or fewer similarities with the participant. And some studies say that the more the
avatar looks like the participant, the stronger the identification. But we wanted to find out whether this
is perceived at the neurological level. So we prepared an experiment where we bring in a participant
and create avatars that look like them, like an unknown person, and like a friend. We used the face
[indiscernible] for that. Each participant observes the images, presented randomly, for 30 minutes, and
each face is on for 300 milliseconds, so it's super fast. And we do subjective scoring: how much does
this avatar look like the real person?
So what we find is, well, this is the visual cortex, over here at the back of our head, and we look for
traces of object recognition. Usually, when an object is classified into a different category, we find a
difference in the N170: this object is different from that object. This happens 170 milliseconds after
the stimulus, so it's super fast. We don't find differences here, so they are processed as same-class
objects. But we do find differences in the P200 [indiscernible] and 250. This is when we first access
memory, and we see that the self is processed with much lower voltage than the other images, both for
the real faces and for the virtual faces. That means that this virtual face that looks like you, and this is
a significant difference, is actually being accepted or processed in a different way than the other faces.
Not only that, we find a correlation between the subjective and the unconscious identification. By
unconscious I mean this rapid 200 milliseconds in which the brain is already making a distinction; the
subjective is the questionnaire, like how much does this face look like you?
For the self faces, we find that this P200 difference between the real face and the virtual face correlates
with the score: the closer the difference is to zero, meaning you're processing them exactly the same,
the higher the score. So there is a correlation, combining a higher [indiscernible] with a very low
voltage difference.
So this is very interesting. Basically we find that, of course, if the avatar looks like you, you can
identify with it more, but also at the neurological level: the brain can recognize a lookalike avatar.
There is a correlation between the face-recognition mechanisms and the subjective scoring. And of
course this has many applications, because maybe this way we can induce a stronger self-identification
with avatars for behavioral studies, or even measure the identification with lookalike avatars. Because,
well, this one looks like him to a degree, but maybe we can generate a better avatar, and when we
generate better avatars we can study the [indiscernible].
So basically we have seen that we can use immersive virtual reality to study aspects of our behavior
and perception, to embody people into robots or avatars, and also, in the last part, to study the
mechanisms involved in the body's representation in the brain.
And after all these empirical studies showing the many ways in which we can use immersive virtual
reality to alter human perception and behavior, I want to end this talk by going back to the bigger
picture. Up until now, consumers have interacted with many different technologies from a first-person
perspective: my hand, I have the phone, I can move it, I can touch it, I can see it. But the content has
remained constrained inside the device in most cases. Of course, there have been metaphors of
tangible interfaces, some of them very good, but the reality is that those initiatives have not reached the
market in a standardized way.
But I think this is about to change, because lately we've seen all these booming immersive
technologies, HMDs that are aimed at consumers, [indiscernible] HoloLens. And these technologies
have the potential to blend digital content with the real physical world, which will allow us to interact
with the content from a first-person perspective, not only through the device.
For me, that means two things. The first one is that we have to create technology that can provide the
highest quality of experience for the user, so they can literally feel the digital content with their senses
from a first-person perspective and [indiscernible] through natural interactions. We will need a much
higher degree of [indiscernible] intelligence, of course: better tracking systems, tangible metaphors,
[indiscernible] displays, et cetera, all of them working together in real time.
But the other thing this means to me is that we must be aware that in a fully blended reality, where
digital content is blended with physical content, the way these things are going to be processed in the
brain as real physical content might affect how people behave or perceive their reality.
So for me there is no doubt that MSR with so many experts in different disciplines is the perfect
platform to really shape the future of immersive technologies.
Thank you.
[Applause]
>>: So would you say that [indiscernible]? Obviously one of the first uses of this would be games. So
would you say that for a person playing a shooting game, an immersive shooting game, it doesn't feel
the same, for instance, if I am shot at first?
Dr. Mar Gonzalez-Franco: I think it's closer to reality than on a screen. As we were saying just now,
you can feel the content, or you can just see the content inside the device, and that makes a difference.
We've seen that, for example, if I see a cliff on my screen, I'm not going to have a heart rate increase,
but people do when they are actually inside a virtual cliff. So at least in the short term we will find
physiological responses, behavioral changes, and perceptual changes. Maybe people get used to it.
It's kind of related to me with some people suffering hallucinations. So real people suffer
hallucinations all the time. I have the feeling that my phone is vibrating but it's not, and I really have
that feeling. It's coming from the sensory input. But when I checked it's not. And I'm able to
distinguish what's an hallucination and what's not.
People with schizophrenia, for example, are not able to make that distinction. So maybe in
the future we will basically be able to distinguish what's virtual from what's real, but we'll have to make
a conscious effort about it, because our brain is actually thinking it's whatever the sensory input says it
is, so it must be real.
And I think it's very similar to a hallucination in a sense. So I do think people will make a distinction
between real and virtual, of course. But these stimuli are going to be physical, like a physical object, so
people will just perceive them as real things.
Yeah? Sorry.
>>: You mentioned newness. Back to the previous example of when War of the Worlds was on the
radio and people really thought aliens had landed. Part of it was because radio was new and
people hadn't acclimated to what might be true and not true there, and it imitated a news report. So I'm
wondering, is there any way to control for the novelty of the experience? Because people haven't yet
figured out how to separate this from reality.
Dr. Mar Gonzalez-Franco: So I think the answer is not really. In fact, when you give one of these
devices to a person who enters for the first time, they either get super sick because of the poor
multisensory integration with the vestibular system, because we are not stimulating all the senses, or
they are just like, wow, you know, it blows your mind. When I enter that experiment, I say, yeah, it's all right.
So I really think it's a matter of time. The first time you try it, it's, you know, like the first time you
got a phone. You just don't remember because technology goes so fast, it changes so fast. But the first
time you use that technology of course has a different impact on you. So I could say the
first time is going to be stronger than the other [indiscernible].
>>: So a related question. In your further experiment you had a knife appear on the table, on the
hand. You did that sixty times, I think. Did you see a difference in people's responses over the sixty
trials? Did their psychosomatic response decrease? Did other things change?
Dr. Mar Gonzalez-Franco: Yes, they do decrease. But we don't have enough trials to really do
[indiscernible]. The same happens with the faces. The more time you're exposed, the more you see
that avatar as looking like you, for the people who say the avatar looks like them, and the less for the
people who say the avatar doesn't look like them. So you either reject or accept, and you get
accustomed. There is a time [indiscernible].
>>: So would you expect virtual experiences to become less immersive over time, as they don't
comport with our other senses?
Dr. Mar Gonzalez-Franco: I would say it's more about the surprise effect. Like the first time you get a
Tweet, it's like, wow, you know. And when you have so many of them, you just don't have the surprise
anymore. I think the surprise effect is very important.
>>: Related to that, have you done, or are you aware of, any research that compares the surprise effect?
Like somebody watching a horror movie will jump when something bad happens on the screen;
compare that to a similar kind of jump-scare experience in virtual reality?
Dr. Mar Gonzalez-Franco: I haven't done it myself, but a colleague from UCL, [indiscernible], has
been running an experiment on moral dilemmas. Basically the avatars were
not really human looking, just like drones. And one of the avatars would enter, and there was this
elevator. You were manipulating the elevator, so that was your task, and with the elevator you move
people to the second floor of the museum or the first floor. And all of a sudden one of the avatars starts
shooting and you have to move the elevator. Either way, if you don't do anything, you kill the people on
the floor; if you move the elevator down, you kill the one. But then it's your action: you have decided
to kill that one instead of the others. So there is a moral dilemma there. I don't want to go into this.
But they ran the experiment in a cave and on a desktop and found very strong differences in
physiological response when people really feel they are in the place. And this fits everything we
find; like in the Milgram experiment here, we find that the more plausible the situation, the stronger the
presence illusion, the stronger the response is. And some people have stronger illusions.
And I think over time, for example, if I enter a new scenario, I immediately have a very strong illusion,
even after having been in many scenarios. I know what is not real, but every time I put myself
into a new body it's like, wow, I love it. And some people are just like, you know... I think this grows on
you. But it doesn't happen when you are on a screen. So there is a difference.
>>: Why are all the stimuli negative?
Dr. Mar Gonzalez-Franco: Yeah, it's so bad.
>>: Why isn't it positive stimuli?
Dr. Mar Gonzalez-Franco: So in general it's very difficult to measure how embodied you are until you
break the embodiment, or how much agency you have until you break the agency. And this, it's funny,
because it comes from the rubber hand illusion. That's from 1998 already, and there, you know, they
attack the hand to measure the response. So it's very consistent across the research in perception that
you attack something.
We have done some workshops in the laboratory to think of new ways of not having this negative
stimulus. But really, it's very hard. For example, there were some experiments in which we would play
a beep and they would have to rate how embodied they thought they were, and then you would
average over time. But it's much faster if you show a threat and see what happens; then you
really see whether they were embodied or not.
But on the other hand, you break the illusion, so you cannot continue the experiment so well.
So there is a balance.
>>: It sounds like virtual reality is only going to be used to convince me that bad things could happen.
Dr. Mar Gonzalez-Franco: No, actually very nice things happen in virtual reality, I would say. It's
nice because you're in a new space.
Dr. Mar Gonzalez-Franco: You may be running with [indiscernible] on the football cam. But the truth
is, for us to measure these kinds of illusions is sometimes very hard. So that's why some other
experiments don't use harm; they use proprioceptive drift. For example, for the thresholds of how fat
you are, how long your arm is, you don't really need to threaten the hand; you just ask, where is your
hand, and then their hand is farther away, you know. There is a bit of everything. In my experiments I
like these because it's a super [indiscernible] stimulus, and I need that for the EEG, because I'm talking
about milliseconds.
>>: What if there is a way to [indiscernible]?
Dr. Mar Gonzalez-Franco: I think that could be. But that would be like a general mind state. Because
we are averaging just the stimulus-related component, so we just focus on the stimulus-related
information; at the end you average that out, the same as you average the noise out. I mean, there are
many things you cannot measure with EEG, really. And I'm very skeptical about the people
that use one electrode and say they're measuring emotions, as if emotions were so simple that you can
measure them with one electrode. So yeah, I don't think you can even measure that here.
Zhengyou Zhang: The question was do we have any comment on the impact [indiscernible]?
Dr. Mar Gonzalez-Franco: Yeah, I know this is a hot topic. Maybe we'll know soon, because at least they
are starting this amusement park with virtual reality, and it looks very exciting to go there, and they are
putting in a shooting game. So maybe -- well, there is already a problem in this country with that, but
maybe we'll see it growing.
>>: You said that was where?
Dr. Mar Gonzalez-Franco: In Utah.
>>: Which place?
Dr. Mar Gonzalez-Franco: I don't know, but if you look up Utah, virtual reality amusement park, you'll
find it. It came out last month. They are building it. And it looks really cool.
Sorry, so many hands.
>>: So how do you think virtual reality would be used, like, in a learning environment?
Dr. Mar Gonzalez-Franco: Yeah, I think any environment, really. For example, here I didn't show it, but
now I work for Airbus and we're doing research on how to use this technology for training scenarios.
And there are many interesting things: many of the [indiscernible] that we use for assembling are super
expensive, so if you have to train somebody on one, you rent the space out and it's not being
productive. If you can generate a 3D version of it, then you can really work on it without
having the physical object in front of you. So for learning I think it has really strong applications.
>>: So you did some experiments on having people in bodies that didn't quite look like their own. Did
you do anything with, say, embodiment with nonhuman bodies, like you had a robot hand or a tentacle
or something?
Dr. Mar Gonzalez-Franco: So we did the robot embodiment, and that works pretty well. It's crazy
how the brain accepts it, how flexible the brain is in accepting modifications. And you have
[indiscernible] here in this building who is very much into these things. I think you accept very
strong changes, like having a different body or having three hands, if it works like you think it should
work; like if I move this hand and then the third hand moves, or something, you know.
For example, for digital acting, I think if the actors had this in real-time, maybe they would do
better, like, I embody the four-legged whatever, so they are better at acting. Because right now it's so
abstract for them; they are just doing the motion capture. And now I say, this moves like this, of
course, and then I do this and all the hands go like a wave.
>>: How much do you think virtual reality will have an effect on, like, a person's personality and
behavior? Because I read something somewhere that people who watch more violence
in the media are more accepting of violence and they actually become more violent themselves. And
virtual reality is, like, more exposure to this. Will that have a higher impact?
Dr. Mar Gonzalez-Franco: For example, Mel is running something now with the Justice Department in
Spain. We have a violence-in-the-home, you know, male-abusers scenario. The European Union is
strongly pushing work in that direction, and we got a grant there. And we have found that currently
violent offenders are being sent into the virtual reality with [indiscernible] and it is having an
effect. I mean, they are not exposed continuously; it's a one-off, but the judge is sending them
there. It's sort of another therapy method. You have role playing: you have, maybe, this scenario
and this scenario, the violent offender sits in the victim's place, so it's like role playing, and he watches
this. Because it's not really violent, it's maybe verbal violence. There are not that many offenders that
actually go all the way. Maybe they are just being very, very bad people.
>>: So will there be, like, controls so the population doesn't become more violent?
Dr. Mar Gonzalez-Franco: I don't think there will be. I mean, why? It's a free-will kind of thing, unless
it starts to affect the rest of us for real in [indiscernible] way. I don't know. It could be that some
therapies really benefit from this. And they are already working on fear of flying, no? Or these kinds
of scenarios, because you try it out before you're in real life, before you decide to go to an airport, if
you have fear of heights or fear of flying.
>>: So are there people who are not affected by this? [Indiscernible] so like I don't care if my arm is
[indiscernible].
Dr. Mar Gonzalez-Franco: Some people are. But we can see that. I mean, people are affected to
different levels. That's why we have these presence questionnaires where we ask for
this kind of thing: how realistic do you think it was? Did you feel you were in a different place? These
kinds of questions let you know how much somebody feels that. But this also happens with the rubber
hand illusion; some people don't have it at all. So it's a matter of how you perceive things. It's a matter
of your brain.
>>: [Indiscernible].
Dr. Mar Gonzalez-Franco: No, it's pretty low. It depends. Some people need more time, some people
are like -- but I would say about 5 percent or something like that.
>>: And I have a second question. Say in virtual reality I crash cars when I play games. Will that have
effects on my brain and behavior [indiscernible]? Will I be more prone to crashing?
Dr. Mar Gonzalez-Franco: Well, I guess I don't know. This is pretty unknown yet. But I would say it's
like the hallucinations case I was mentioning before. If you know you are in reality, it's fine; if you
lose your mind, better detect it before you start crashing. I would say it's a matter of your mind being
able to distinguish what's real and what's the weird one.
>>: Could you comment on the advantages of the cave versus another display [indiscernible]?
Dr. Mar Gonzalez-Franco: So the main difference is that in a cave you are represented by your own body,
and in an HMD your body can be substituted. In fact, if you have tried the [indiscernible], one of the
things that to me is strikingly bad is that I don't have a body. In reality I have a body, and if not,
I'm just an observer. So I think HMDs need to give you a body. We don't conceive of life without a body.
What are we out of a body? Are we ghosts? No, we need the body to interact and do other things.
Otherwise we don't have this real scenario.
And in a cave there is a stronger illusion that you're entering something, because you physically enter.
But at the same time you remain in the same place. So there are different connotations. We, for
example, use caves very much for behavioral studies and not so much for perception. I would say
that's because in a cave you really have this very, very big field of view. For example, you are actually
inside somewhere and there are many avatars and you are interacting with them. [Indiscernible], for
example, is working with violent scenarios where some people start a fight and you're looking at the
bystander effect: if there are many people around, you don't want to act, but if you are the only one
there, then you will intervene and stop the fight. And this is done in a cave, because I think it's better
for you to really have that field of view. But that may change the moment the head-mounted display
really has the same field of view.
>>: Are there any studies done both in a cave and an HMD that measure that sense of presence, and
find that one is better than the other?
Dr. Mar Gonzalez-Franco: For example, [indiscernible] is working on fear of heights in a cave and an
HMD. I don't think he really found much difference, because, he says, you have a body in both. The
difference, for example, is if you don't have a body. But I kind of like [indiscernible] better than this,
because they give you the chance to also substitute your body.
>>: When you did the experiment, it wasn't clear to me: when you had real faces and virtual faces,
were they kept in separate treatments, as in, these are all the real faces rendered familiar, and these are
the virtual [indiscernible]?
Dr. Mar Gonzalez-Franco: They would be totally random, all of them. But for analysis purposes they
would be grouped like this. Otherwise, you would see many lines and not understand any of them.
>>: So when the person would see the avatar of themselves, was there a measurable difference
between that and the real photo of themselves? I mean, they were both identified as self, but was there
still a difference between the two of them?
Dr. Mar Gonzalez-Franco: Yes. For some participants this difference was very low; for some it was
higher. And that's what we were saying here, that this difference, especially the one we're talking
about, the voltage difference between the virtual and the real self, was actually correlated with the
identification. So the more I really think that avatar face looks like me, the more my brain processes it
as me.
>>: So it would be interesting if there were any kind of followup on what aspects or differences of
the face map to, or correlate better with, identifying the self. Like, if you change the eye color, is that
easy to ignore? If you change it, it's a --
Dr. Mar Gonzalez-Franco: I think they would need to be very strange changes. Because this
avatar, it's not super perfect, and it's already been identified as the self. So I'm not sure you would
really see many differences. But if you have, like, strong differences, maybe you will. In fact, some
experiments actually fade from an unknown person's face to your face, without avatars, and they find
the difference when you start recognizing the self or when you start recognizing the other.
>>: It would be interesting to correlate with different components, so I know, like, oh, the eye shape is
more important than the jaw for creating more --
Dr. Mar Gonzalez-Franco: There are some experiments in this field of face perception, in which, for
example, they move the parts around, like two left eyes or this kind of thing, and people still
recognize themselves. Well, recognize the face, sorry.
Yeah?
>>: I'm interested in that N400 error that you gave [indiscernible], and that's an error, but it's an
error that you might find in [indiscernible], in that it would be effective, maybe change the way you
view the world, something like that. And I'm curious, are people trying to think through the idiom of
VR, so what you think of, well, a house doesn't correlate to eat, and are people trying to create new
experiences for the sake of expanding down to free experiences, or --
Dr. Mar Gonzalez-Franco: This was done with auditory stimulation, so there is no virtual reality in
these kinds of experiments. They were first found by a researcher in the Netherlands. Basically, in the
Netherlands, I don't remember now, but the trains are just one color, like yellow. And he would tell
people the train is blue, and people would have this response; and when he would say the train is
yellow, people would have that response. And he would say, okay, this is a violation of how they
understand the world. And so he continued working on this area of research.
And the reason I show this here is really because it's very similar in the literature to what we find. It
doesn't mean that this is virtual reality; it's not. But it may mean that, because here you have perceptual
inputs, afferent signals, that need to be integrated, and they get integrated here with the
same temporal dynamics as the other afferent signals, which in our case is this one. So it really kind of
matches. So this is our discussion.
Of course, the EEG doesn't tell you anything about the interpretation, so you have to look
in the literature and find out what it can be, or if other people have found similar things. But this is not
just about virtual reality; this is about you feeling that body as yours and moving it, and then the break
in agency, rather than just virtual reality. So it's about your own body.
Zhengyou Zhang: So actually on line [indiscernible].
Dr. Mar Gonzalez-Franco: Thank you.
[Applause]