>> Hrvoje Benko: Thank you so much for coming. My name is Hrvoje Benko.
I'm a researcher here at Microsoft Research. And it's my great pleasure
today to introduce Evan Suma. He's visiting us from USC, from the Institute
for Creative Technologies down there. And he's one of the world's experts on all
sorts of VR trickery, trying to basically understand locomotion, expand our
spaces and our walking and manipulation abilities in VR, mostly
by manipulating what you see versus what you get and the differences between
them. But I won't take much from his talk. I just want to welcome him and say,
you know, thanks for coming down and giving us this presentation.
>> Evan Suma: Okay. Thanks, Benko, for the introduction. So today, I'm
going to be talking about how to make a small space feel really large. So
this is all about perceptual illusions and virtual reality but with a
practical spin because what we're trying to do here is solve a real
engineering challenge for VR. So VR for me is a really kind of special place
to work. So ever since I put on my first HMD, I became fascinated by the
experience of being in VR and creating VR experiences for others. And what
really ups the ante for me is when you can use VR as a medium to create
really surreal experiences that transcend what is possible in the real
world. So some people say recreating the world in VR is kind of the holy
grail, but for me, it's transcending the real world, going beyond what we
normally can do. And as an engineer, being able to do this is really challenging because VR
has so many moving parts that need to work. As a researcher, it also is a
really interesting empirical tool for the controlled study of human
perception and behavior. You can actually use it to test things you can't
evaluate in the real world. And this talk really is about the intersection
of these two goals. It's engineering magic experiences but experiences that
also enable us to do the empirical study to understand more about the human
experience. So an overview of this talk: I'm actually going to first take a
step back and focus more on the work that we did on just the fundamental
display technology, because these VR experiences are technology-mediated. Then
from that, I'm going to move into the next big challenge that I see that we're
trying to work on, and how we're looking at this from the perspective of what
I call hacking human perception, through a couple different types of illusions,
and then finally talk more about the practical spin of this, which is how to
enhance interaction fidelity, which is kind of my code word, my technical term,
for creating magic. So, looking back at when I started in VR: I started in
2004-ish, 2003, but I really got started in grad school in 2005, and
these were the displays that I was using at the time. So we're talking
pretty low resolutions. 640 by 480 per eye. Bulky, heavy, very expensive,
but probably the most problematic at the time was the field of view. So the
V8, kind of the standard that you saw a lot, was a 60-degree diagonal field of
view. And so if we look at this, this is actually mapping what that type of
field of view is on to your entire -- on your retina and what your human
visual system can actually see. It's like looking through a very, very small
window. And this, for me, is why VR up to that point, all throughout the time I
was studying in graduate school, really wasn't magic. In fact, a colleague
asked me when I got started in VR, and I told him the year. And he said, you
decided to get a Ph.D. in VR when VR was down and awful? What is wrong with
you? So, I guess I just liked a challenge.
>>
[Indiscernible].
[Laughter]
>> Evan Suma: So, this changed for me in 2010 when I moved to the Institute
for Creative Technologies, where we have a prototype display called the
Wide5. The antlers there are kind of funky, but they're just for the motion
tracking system. The real fun thing about this display was that it was when I
first experienced VR magic, because it has a 150-degree wide field of view. So
the difference here, you can see, really is the difference between looking
through a tiny window at something distant and really, for me, being there.
And so if we look, that
wasn't the only display capable of doing that, although it was the widest at
the time. But if you look at the cost, you really see two categories here.
It's really useful to look at this as dollars per degree. So if you look at
the bottom, that is kind of more what we would target as, not even necessarily
consumer level, but more affordable HMDs, but their field of view was
really limited. And getting anything over that 60-degree mark did not scale
linearly; it ended up getting extremely expensive, beyond what a lot of
research labs can afford, let alone the average
person. And so a lot of our funding at ICT comes from the U.S. Army. And so
in about 2009, the acquisitions people who were in charge of all
of the purchasing decisions for the Army came to us, and we scheduled all
these demonstrations of VR, and they said, this is all great, but it will never
be used by the Army and the Army cannot invest in VR because if we want to
train people, we have thousands of people we need to train and it's just too
expensive. There's no way. You have to figure out a way to do this in a
more cost-effective manner. So this kind of got -- kick started us and got
us thinking and we were really inspired by this little piece of technology
here, which I think has kind of gotten forgotten and doesn't get a lot
of credit, but it was a little before its time. You could actually get
this at Target or Wal-Mart or Amazon at the time for less than $20. It was
a little plastic device from Hasbro called My3D that you could slide your
iPhone or iPod into and get a 3D experience. But from a product perspective
it was not as successful, because it was a capability with no
content. So we got really interested in this and the
possibility that now the smartphone screens were finally getting big enough
and high resolution enough where you could actually start doing VR
experiences with them. So this is a kind of prototype system
that we published at IEEE VR in 2011, which combined this with a Bluetooth
keyboard for interaction and a few other moving parts. This kind of got us
thinking, but the real insight came when we started doing this with
higher-end optics. So this is some LEEP optics that we pulled off of an old
BOOM display back from the '90s, and then we basically just cut out wood and
slapped two iPhones in front of
it and there are all these problems with it. There's distortion. The eyes
are not synchronized properly. But the real test was when we took this
live demo to the conference and started handing it to VR experts, and they
were looking into it for the first time, seeing high-end optics in front
of these smartphone screens, and the reaction we got from most people was,
wow, I didn't think it was going to look this good. So we left from that
really thinking, okay,
there's something here. We really have to continue trying to make this
accessible. So the next year, we came out with what we call FOV2GO. If you
have been to IEEE VR, you might have seen us hand these out. We brought
about 200 of them. They're laser-cut foam core, with lenses that are
basically a dollar, and then you just fold up your own cardboard
display. And we have them for all different types of phone models.
So at the time, this was a really, really exciting thing. Today, I show
this more as a historical note, because now of course Google Cardboard has
really gone and popularized this notion of smartphone VR over the last couple
of years, and, you know, there has really just been an explosion in terms of
that ecosystem that I think is really special, spectacular. In parallel to
the smartphone VR effort, we were doing some real prototyping on the HMD
side, and we wanted to start building not just viewers but head mounts. And
so we found this hobbyist; we hired him. His name is Palmer Luckey. He was a
hobbyist teenager working in Los Angeles who had quite an eBay VR HMD
collection that he was playing around with in his parents' garage. So we
brought him into the lab and we started to build prototypes, which we
referred to as Franken-HMDs, which were basically cobbled together from eBay
parts from different displays and LCD panels you could buy online, and were
literally held together with tape. But we started to see that we could build
this for about $300. And
so we decided to open source all of that. So all of these designs, the
Socket HMD is kind of an interesting historical note. I think the consumer
market has now gone leaps and bounds beyond this, but it was our 3D-printed
design for a display very similar to the Oculus Rift DK1. And we started to
move beyond the cardboard to 3D-printed phone viewers as well as tablet
viewers, which are really interesting because now
you can combine an immersive display with a touch surface and do a hybrid
interaction where you can actually use some manipulation of objects on the
screen while you're looking at it. So all of these are available on our
website and anyone with a 3D printer can download and create them. But
there's only a limited amount of people. I love open source and I believe in
it but it only touches so many people. And what really, really was the
catalyst for this explosion was when Palmer sent one of those prototypes to
John Carmack and then left the lab to do a Kickstarter, and the rest is
history. You know, Oculus was eventually founded and bought by Facebook for
$2 billion, and now Palmer is one of the lead spokesmen for the field of VR.
And one can make the argument that though this was really one of the
catalysts, the real reason was that the display technology had gotten
low-cost enough, and of course other companies were looking at this at the
same time as well. So it really was something that, though this was a
particular catalyst, this is something I think was just inevitable based on
the cost of technology going -- display technology going down and the quality
going up. And you can really see this, I think, when you look at that chart
again and you put the Rift and the Socket in with the consumer-level HMDs on
the dollars-per-degree mark. You're looking at roughly a 90-degree field of
view in that initial version of the Rift. And what we see there is that
that's getting pretty close to good enough, and the dollars per degree is so
much lower than everything else that now we could go back to our people at
the Army and say, you can now afford these, and so now we can start to really
see people care about VR. And of course now there are so many companies:
HTC, Sony, and all of the other ones. I don't put Microsoft on this list
because HoloLens is a mixed reality device, which of course is similar, but
it's in its own class, and this talk is really more about VR. So can we say mission
accomplished? Is VR a solved field at this point? Given that I'm giving this
talk, hopefully it won't be surprising that my answer is no. And the reason
is, I think a lot of the content creators and a lot of the experiences that
we're seeing come out, at least right now and in the last several years, have
interactions that look like this: seated use, or movement in a very, very
small area, with a lot of the interactions mediated by controllers, gamepads,
or other sorts of handheld devices to move through a virtual world. Now that
I've been doing this for well over a decade, when I have these kinds of
experiences that are mediated by controllers, it just feels like a video game
to me. This doesn't feel like magic anymore. And I worry that in the long
term, once the novelty effect wears off, once people have been immersed
enough and seen enough VR experiences, it's not going to seem magical
anymore. It's just going to seem like another type of gaming. So --
>> That experience, do you still get nauseated from induced motion?
>> Evan Suma: I'll talk about nausea in a little bit. But yes. That's also
another problem with these kinds of locomotion metaphors: motion sickness,
because your visual motions don't map to what your body is doing, is a
problem for some people. At least some people. So why is walking a
problem? Well, it should be pretty obvious that even if you have a tracking
space that can allow some physical body movement, if you want to walk through
a large virtual environment, say a virtual city or a large office space or
what have you, at some point, you're going to run out of physical space.
You're going to walk and you're going to, in the best case, run outside of
your tracking area. In the worst case, you're going to physically collide or
walk into a wall. And because you're wearing a headset, you won't be able to
see it. So this is one of the real fundamental challenges for any sort of VR
system, you know, this real problem of physical locomotion through the
environment. So we researchers have been studying this for a
while. So this has been something that people have been thinking about for
quite some time. And there is one interesting solution that came out of the
literature about fifteen years ago. This is not my work: this is the original
redirected walking work, which comes out of UNC Chapel Hill, and which really
I think has been an inspiration for a whole class of research in the field.
And the basic idea here is that you just decouple physical and
visual motions. They're related to one another but if you, for example, get
someone to walk through a zigzagged corridor, you can actually get them to
walk back and forth in the real world and if you just realize that there is
a -- that you don't have to have a 1 to 1 mapping between physical motions
and virtual motions, there's a lot you can do with that. So let me give you
some examples of how this works. The easiest way to do this, or the original
way that Sharif Razzaque suggested, was through what's called manipulation of
gains. A gain is just a multiplication factor applied to your motions. So in
the case of rotation gain, I might walk through a virtual space and then, in
this example, rotate -- I'll show you that again -- rotate 90 degrees in the
virtual space but 180 degrees in the physical space, so that after your turn,
you are now walking along a different vector. You can do this in other ways.
Another one that's been identified is called curvature gain, and here it's
different because you are walking straight and there's a continuous kind of
rotation applied as you walk forward, so you will bend and walk along a
curved path. And then finally, there's translation gain, which is basically a
multiplication on your step size, but only in the forward direction, because
you don't want to amplify side-to-side movement. So you can travel greater
distances virtually than you are physically. So why is this better than
locomotion using controllers? The reason is that it's linked to and
controlled by your own motion. And so there are two different perceptual
systems in play here.
One is your vision. And one is your vestibular sensation, your sense of
balance, your body's sense of your movement. And turns out that vision tends
to dominate when those two senses are in conflict as long as they're kept to
within a certain threshold. So there's been researchers who have studied
this and so these are just some of the numbers out of the literature and
turns out, if you do it within these parameters, and do it the right way,
then not only is it imperceptible to the user, but it also hopefully won't
make them sick, as long as you just don't do this too much. Now, I can tell
you, having been in these environments where you do it too much, it can make
people sick. So that is a concern. But the key here is that vision dominates
over your sense of balance.
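A minimal sketch of how these three gains might be applied each frame. This
is illustrative only: the function and parameter names here are hypothetical,
and the gain values you would pass in should come from the published
thresholds, not from this sketch.

```python
import numpy as np

def apply_redirection(head_yaw, delta_pos, delta_yaw,
                      rotation_gain=1.0, curvature_radius=None,
                      translation_gain=1.0):
    """Map one frame of physical motion to virtual motion.

    head_yaw: current physical heading in radians.
    delta_pos: physical displacement this frame, (x, z) in meters.
    delta_yaw: physical head rotation this frame, in radians.
    Returns (virtual displacement, virtual yaw change).
    """
    # Rotation gain: scale the user's own turns. A 90-degree virtual
    # turn mapped to a 180-degree physical turn is a gain of 0.5.
    virtual_yaw = delta_yaw * rotation_gain

    # Curvature gain: while walking straight, inject extra rotation
    # proportional to distance traveled, so the physical path bends
    # into an arc of the given radius (arc length / radius = angle).
    step_len = np.linalg.norm(delta_pos)
    if curvature_radius:
        virtual_yaw += step_len / curvature_radius

    # Translation gain: amplify step size only along the forward
    # direction, never side to side.
    forward = np.array([np.sin(head_yaw), np.cos(head_yaw)])
    fwd_component = float(np.dot(delta_pos, forward))
    lateral = np.asarray(delta_pos) - fwd_component * forward
    virtual_pos = lateral + translation_gain * fwd_component * forward

    return virtual_pos, virtual_yaw
```

So one of the things I wanted to do was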
really kind of understand not just kind of do I notice these illusions but
how does this impact my sense of spatial orientation cognitively? How does
this impact my experience in this virtual world? So this is an experiment
that we did where we were pointing at targets. What happened in the beginning
was, you see a virtual target and you aim, and we're using a tracked wand to
aim, and then you point at a real target: they flip up the optics of the
display, look at a real target in the real world, and point at it, and then
they have to remember where these are. So: point at the real target, point at
a virtual target, then you're back in the HMD. We go through some sort of virtual
experience where we apply these redirected walking techniques. So for the
sake of simplicity, I'll just show you, it would happen continuously but at
the end of that walk through, you're 90 degrees offset from where you were
when you started in terms of your physical orientation. So if my original
virtual target was there, we want to know, would you point at that virtual
target where you originally saw it or would you point at it in its position
as if it were redirected; basically, is your memory of that target's
orientation now offset by 90 degrees? And more interestingly, what would
happen to your perception of the real target? Would you point at it where you
originally saw it, or would your reference frame in the real world
move as well? Another way of thinking about this is are you maintaining two
models of your spatial orientation? Do you have your real or your virtual or
if I manipulate the virtual, do I also manipulate my real? And so we had to
figure out how to measure this quantitatively. The way we look at this is
with angular pointing error, which is a pretty common metric in VR.
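As a rough sketch of that computation, under the assumption that the pointing
direction and target position are known in a common coordinate frame (the
names here are mine, not the study's code):

```python
import numpy as np

def angular_pointing_error(origin, pointing_dir, target_pos):
    """Angle in degrees between where the user pointed and the
    true direction from the user to the target."""
    to_target = np.asarray(target_pos, float) - np.asarray(origin, float)
    to_target /= np.linalg.norm(to_target)
    d = np.asarray(pointing_dir, float)
    d /= np.linalg.norm(d)
    cos_angle = np.clip(np.dot(d, to_target), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))
```

And so when we looked at those positions and calculated the angular pointing errors,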
what we found is actually exactly what we would expect if they were
correcting for the redirection and moving those targets' reference frames,
for both virtual and real targets. And the way we see that is that the
angular errors -- the lower this is, the more accurate -- are pretty
consistent with what you get with just regular pointing. So those numbers, 35
to 40-ish, are pretty typical of the angular errors you get when you're asked
to point to something, whereas clearly they were about 90 degrees offset from
what their original positions were. So this was a really cool result for us
because this
confirmed kind of an -- or resolved an argument that I'd been having with
Mark Bolas, my colleague and codirector of the lab where he maintained
adamantly that there were dual models, and I said no, if I mess with your
reference frame in the virtual world, I think the real world is going to
shift as well. And this kind of resolved that, and he stopped complaining at
me. So now, I want
to start talking about another class of illusions. So this type of work with
redirected walking really kind of was just the initial insight that what
happens in the virtual world can really transcend what happens in the real
world and we're not bound by the same laws of physics. So now, let's talk
about another type of illusion that I just kind of came up with and I really
got this inspiration from the kind of common psychology kind of stuff you see
in psych 101. And here's an example. So I'm going to ask you to look at
this picture and then I'm going to change something. I'm going to ask you to
just call it out if you see what's changed. Anyone notice it, just call it
out if you saw what I changed.
>>
The [indiscernible].
>> Evan Suma: Here, I'll make it really easy. And so the reason that's
difficult is because the human visual system uses motion to be able to detect
changes. And just that split second interstimulus image, that black screen,
disrupts that perception of motion and then it becomes very, very hard and
this has been well studied for many, many years in psychology. It's called
change blindness and it's very, very consistent across people and a very
powerful illusion. So I started to think what happens if we apply this to
VR? And so here's an example of how this works. So in this example, you're
going to see someone walking through this virtual room; watch what happens on
the overhead view on the top right -- left -- as they approach this desk. So,
they were looking forward at the desk, and in this study environment, they
were just kind of looking at pictures. But what happened behind them was that
this doorway moved 90 degrees. It stayed in that same location, but the
orientation of the doorway swapped around the corner. You're about to see it
again as they walk to this corner of the desk. Oh. Oops. Okay. So the key
here is that this manipulation occurs behind their back. So everything
appears consistent to them in front of them, but then they turn around, the
door is offset by 90 degrees and the corresponding hallway is also offset by
90 degrees. This basically means that within about a 15-by-15-foot space, I
was able to infinitely repeat this and do a 3,600-square-foot office building
within less than 200 square feet. And because this is a consistent illusion,
as long as they're going linearly and entering into these rooms, and not
looking back at the door -- like staring at it while walking backwards, which
people never really do -- they'll never see it, because the manipulations
occur behind their back.
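A minimal sketch of the core check behind that kind of swap, assuming a
hypothetical scene API: the manipulation only fires while the doorway is
safely outside the headset's field of view.

```python
import numpy as np

def behind_user(user_pos, user_yaw, object_pos, fov_deg=110, margin_deg=30):
    """True if the object is safely outside the user's view cone."""
    to_obj = np.asarray(object_pos, float) - np.asarray(user_pos, float)
    heading = np.array([np.sin(user_yaw), np.cos(user_yaw)])
    cos_angle = np.dot(to_obj, heading) / np.linalg.norm(to_obj)
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle > fov_deg / 2 + margin_deg

def maybe_swap_doorway(user_pos, user_yaw, old_door, new_door, scene):
    # Swap which wall the doorway sits on, but only while both the
    # old and new positions are behind the user's back.
    if behind_user(user_pos, user_yaw, old_door) and \
       behind_user(user_pos, user_yaw, new_door):
        scene.move_doorway(old_door, new_door)  # hypothetical scene call
```

So this was a really interesting illusion. I wanted to test it, so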
ran several experiments. And I was really trying to get some statistically
significant results of how many people noticed this. And what I really found
was that I completely failed in getting statistically significant results
because no one noticed. So I did this to them throughout this environment 12
times each, across multiple experiments, and one out of 77 people noticed or
reported the illusion. Even after it was disclosed to them afterwards, we
tried to tease it out gradually with questions, and it was just so much more
effective than we were expecting. So this was really, really
interesting. So we started to look at kind of how this affects your
perception of space. Okay. You might not notice it, but what do you feel
that this environment looks like? So they're going through this environment
which cannot be represented with a single drawing because it's a dynamically
changing world, but we asked them to sketch-map it, and these are, very
consistently across the study, the types of maps that people drew, which look
conceptually like this kind of office space fit into a square environment.
We analyzed these through some subjective ratings and statistics, which I
won't get into here, but what we really found overall is that the spatial
inconsistencies just seemed to get resolved. You know, perceptually or
cognitively, when you're
going through this environment, as long as your experience -- my take away
was as long as your experience is locally consistent, then kind of globally,
we figure, you know, we figure out a way to make this work. When you're
forced to draw a map, you figure out a way to make it fit on paper.
>>
[Indiscernible]?
>> Evan Suma: Yes, yes. So these were drawn after the experience ends,
immediately afterwards. Yes?
>> So this is sort of interesting to me because as long as they do the path
where they go into every office, this will work. But if it's sort of an open
world where they can explore however they want, it won't work, right?
>> Evan Suma:
That's correct.
>> So it seems like in order to decide what you need to do, you almost need
to be a little predictive about where they're going to go so you can kind of
start manipulating the world ahead of time. And manipulate them later or
something.
>> Evan Suma: Yes. Yeah. You are 100 percent correct. And that is the
latter part of my talk, where I will directly address that question. Yes?
>> So during your study, did the participants see the space before they tried
it, or did they wear the headset before they went inside the space?
>> Evan Suma: In these studies, they -- we did not blindfold them before
they entered the physical space. So they did see the physical space
beforehand. Yes?
>> Did you do a kind of longitudinal study [indiscernible] in terms of, like,
people adapt to technology and their skill sets change and the way they
perceive things changes. Any information on that? Would people over time
actually mature and understand it differently?
>> Evan Suma: Nothing beyond anecdotal. So I'm not really aware of
longitudinal studies in VR because it's difficult. You know, or impractical,
I would say. So no, I haven't done it. I'm not aware of it. But
anecdotally, I can say that these illusions tend to -- for us around the lab
who get to participate in these all the time and see this, they just seem to
work. Even though I know that this illusion is happening, it doesn't seem to
have a negative impact on the experience. It's one of those things that I
think I can just accept, because the experience seems locally consistent. So
yes?
>> As a follow-up on Michelle's question, did any of the subjects report any
kind of unease in knowing, I shouldn't be able to walk down this hall this
far because I already saw there's a wall in front of me? Or do they kind of
start walking slower as they get --
>> Evan Suma: Not that specific. No. Not that we could tease out in any of
the data. Although they did report a general sense of feeling turned around.
And I think that that goes to your question which was about did they see the
space beforehand. They knew, I mean, if you see the space beforehand, you
know how big the tracking space is. You know you can't walk through an
infinitely large space. So I think there is this general sense that, like,
yeah, I got turned around, but when we tried to tease out of them how it
worked, it overwhelmingly seemed to be, I have no idea how you did it. I just
know that I've been turned around. So. Yes?
>> An observation on that: it was the earlier study, from somebody else, with
the zigzag hall.
>> Evan Suma:
Yeah.
>> I noticed in the first zag, you see a lot of movement there where that
first time they get adjusted, you can see their brains trying to figure
something out, and then after that, it was pretty clear the brain just kind
of --
>> Evan Suma: Yeah. And this is a phenomenon called perceptual calibration.
We know that it takes a couple minutes and this has been studied in
perceptual psychology that people will accept very -- you can actually
recalibrate to all sorts of different things like different walk speeds,
different motions, even remapping of kind of the movements of your head along
different axes. It takes usually a couple minutes and then afterwards, a
couple minutes to calibrate back, but it's a very rapid process. Okay. Any
more questions about change blindness before I move on? Okay, so like I said,
this illusion was unexpectedly powerful. So I started really thinking about
other types of spatial manipulation that we could leverage in VR. For the
last one, I drew my inspiration from psychology; for this one, I'm
drawing on my nerd credibility here from science fiction. So who here is
familiar with the BBC television show Doctor Who? I love giving talks to a
technical audience because they actually get this reference. So for those who
aren't familiar with it, in this kind of timeline or mythology, the Doctor
travels around in this thing called the TARDIS, which is basically the size
of a phone booth here. But the inside of it is much, much larger than could
ever exist in the real world. In fact, canonically, it's supposed to be
infinitely large. So this was the kind of basic illusion that I wanted to
investigate in VR. I wanted to experience this magical sense that all the new
companions in the show get whenever the Doctor leads them into the TARDIS and
they look around with a sense of awe: it's bigger on the inside, and it
becomes this running joke. I wanted to experience that sense of magic of
walking into something that's bigger on the inside. Fortunately, in VR, we
can create this. So here's the kind of experimental
environment that I built to test this. I used a very similar environment to
what you saw before, where you walk to desks and you're given the task of
going over to look at monitors to see pictures, because it's just a way of
letting people move through the environment. So you go to one desk. You turn
on a picture. You walk down, exit a room, you walk through the hallway. And
by the time you get to the entrance of the second room in the hallway, the
first room is essentially deleted and the new room is put in its place. And
of course, if we look at these superimposed on one another, this is a severe
violation of Euclidean geometry and could never exist in the real world. So I
wanted to
now see how sensitive we are to these types of illusions. The way that we
figured out to do this is what's called a psychophysical experiment. We did
these on whole different levels of overlap, ranging from zero percent, which
means perfectly possible, it could exist in the real world, to over 75
percent, which basically means most of the rooms are completely overlapping,
almost completely on top of each other. And we had them do a discrimination
test: they experience a whole bunch of trials and we ask them, is this
impossible or possible? We do this over many, many repeated trials, and then
from that we can calculate the probabilities and generate what's called a
psychometric function. I won't go too much into all the detail of this, but
the point here is that this is the probability of being able to detect that
it's an impossible space, and this is the overlap level, so this is
increasing amounts of violation of geometry. And what we found in our data is
that about 56 percent, by convention, is when detection starts to become
reliable. So you can get quite a bit of space savings by overlapping
geometry, and people won't really be able to detect it if it's less than 56
percent. And the interesting thing is, this is when people are explicitly
told about the illusion and instructed to try really hard to figure it out.
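A rough sketch of that kind of analysis: fit a logistic psychometric function
to the detection responses and read off the threshold. The data here are toy
numbers for illustration, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, x0, k):
    """Logistic curve: probability of judging the space impossible."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Toy data: overlap levels (%) vs. fraction of "impossible" responses.
overlap = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
p_detect = np.array([0.05, 0.15, 0.45, 0.85, 0.95])

(x0, k), _ = curve_fit(psychometric, overlap, p_detect, p0=[50.0, 0.1])
print(f"estimated detection threshold: {x0:.1f}% overlap")
```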
So I think this is actually conservative and if you did this on someone who
is completely naive to the illusion, probably you could get away with a lot
more.
>>
[Indiscernible] corridor and walk it one to one.
>> Evan Suma: Yeah. The corridor is walked 1 to 1, yes.
>> What happens if you sort of make it virtually faster? Would that change
the understanding that this is a bigger space?
>> Evan Suma: I haven't tried it. It's a very interesting idea, though. So
yeah. You might be able to because to some degree, people are using their
bodies as a ruler and using those steps as a way of judging distance. I
noticed some strategies in this study: some of the people who were really,
really good at the task were actually counting steps.
[Laughter]
>>
Some people are just innately better at judging distances.
>> Evan Suma:
That is true.
That is true.
>> People that are probably the best at it are people that work the cameras
for film directors. They're amazing. I'm wondering if you normalize against
that.
>> Evan Suma: I do take some demographic data. I haven't looked at that. I
will say so most of our subjects are not university students which is a lot
different from the way a lot of academic VR labs do it. We recruit off of
the general population on Craigslist. So I had a pretty broad kind of
selection of people. But not -- the sample sizes we're looking at are too
small to be able to draw any conclusions about the population. The only
thing really I can tease out is video game experience, but even then, it
wasn't predictive of performance in this case.
>>
Is it biased toward just unemployed actors?
[Laughter]
>> Evan Suma: I need to modify my demographic questionnaire then. So beyond
just do they notice it, though, I really wanted to go beyond just that and
understand how does this impact your experience, you know, because
self-reports of I-noticed-this are only so useful. So for this I used a
metric that I drew from the VR literature called blind walking, and this is a
distance estimation test, to get at your point. In this case, after they
walked to that second room, they were asked to turn back, through the wall,
to where they saw the first target. Both of those desks were, as pictured
there, against the same wall. So they were asked to turn to where that first
one was, close their eyes, and imagine how far away they were when they were
standing in front of that target. The HMD at this point goes completely
black, and then you're asked to walk until you are physically
standing on that point that you were. So that's why it's called blind
walking. This is a very common metric used for distance estimation studies
in VR. The difference, and the caveat, here was that in the cases of overlap,
the actual place where they had stood would have been either forward of the
wall or, in some cases, in the extreme overlap conditions, literally only a
step or two away -- they were almost in the exact same space. So the question
here was: would they walk to where they had actually been physically
standing, or would they correct for the compression and continue walking as
if those two overlapping rooms had actually been moved out and were now
correctly side by side?
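The measure itself is simple; a sketch with illustrative numbers only:

```python
def blind_walk_percentage(walked_dist, veridical_dist):
    """Walked distance as a percentage of the physically correct
    distance. Near 100% means walking to where they actually stood;
    substantially more means 'unfolding' the compressed rooms."""
    return 100.0 * walked_dist / veridical_dist

# e.g., under heavy overlap the true spot may be a step or two away,
# but a correcting subject walks as if the rooms sat side by side.
print(blind_walk_percentage(walked_dist=4.0, veridical_dist=1.5))
```

So this is what we found in the data. And so what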
we're seeing here is overlap level again around here. And this is in
percentage of the actual -- the walk distance relative to what it actually
should have been if it were accurate to the real world. So if they weren't
correcting for the compression and were walking to where they actually were,
we would expect to see the data follow that 100 percent horizontal dotted
line. The red dotted line is what we would expect to see if they were
correcting for that compression and overwalking. And this -- I love to show
this data because this is one of the most clear cut examples of an effect
that I've ever seen. You really don't need a lot of statistics to be able to
look at that and see exactly what they were doing. So even in the case of 75
percent overlap, where it's really obvious -- where 90 percent of people were
able to reliably detect that this is an impossible space -- even in those
cases, they were still walking those exaggeratedly long distances,
behaviorally correcting for this illusion. So I think that was a
really, really interesting finding. And it kind of goes to this idea that I
like to say that we've learned that spatial perception is malleable. And that
people, even if they can kind of perceive that these illusions are going on,
will still try and behave normally, as long as it doesn't mess with their
experience or, at worst, make them sick. As long as you can make the
experience very rich and pleasant, then these illusions, even when obvious,
could still be useful. But now I'm going to raise more of the practical
question, stepping out of the basic research hat and now talking as a VR
engineer. How is this actually useful in a real practical system? And so now,
if you want to ding me on anything, you can ding me on saying, hey, you know,
these really only work in these linear environments where you have this kind
of purpose-built experience that kind of validates the technique. But if you
want to give free-flowing exploration of just an arbitrary environment, how
would you do that? These aren't generalizable. And they're not: I think there
is no generalizable solution for redirection that applies to all spaces at
all times. At least we haven't discovered it yet. What they are
is tools. They're tools for VR developers and VR designers and content
creators to use for the experience that they're doing. And they're best
employed when you can actually couple content creation with the type of
experience and the techniques you want to use. So here are some of the
interesting ways we can use these tools. This is an example of how we used it
in a mixed reality setting. The change blindness technique is really
interesting because it's a discrete change: unlike the motion illusions, it's
not a continuous change. It's a single state switch and so because of that,
it's predictable. So instead of doing a 90-degree offset, this is one where
I did a two-stage building where there's interior rooms and what you're going
to see here is when you get to this back room, I'm going to pull the same
door switch here. And then this door switch is going to move over here. So
there's actually two doors moving here. All behind your back. And what this
essentially lets me do is change -- is reuse this road infinitely. So we
trucked about a thousand pounds of gravel into the lab which I would not
recommend for cleanliness because we were cleaning up dust for the next three
years. But when you enter a building here, and then when you exit the
building, you end up exiting here. And then every time you stepped right
back on to that gravel road, you feel the crunch under your feet. You feel
that haptic sensation and so it was a really very, very compelling illusion
or again, that sense of magic for me because now, there's a sense of realism.
The real world is actually kind of playing along with this illusion. Here's
an example of another way in which these impossible space techniques could be
used in a practical setting. This is a technique I call flexible spaces, and
here what we're doing is playing with similar kinds of non-Euclidean
geometry, but we are doing it by creating twisty hallways that curve back in
on themselves. This is an environment where each room is premodeled -- this
is researcher art, so that's why it looks so bad -- but each one of those
hallways is procedurally generated on the fly in Unity; the polygons are just
generated as needed based on where you're standing in the space and where you
need to go. And the really cool thing is that you can basically do this
infinitely. If I need a hallway that gets me to be standing over here, I can
just generate a twisty hallway.
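A minimal sketch of generating such a hallway centerline: connect the exit
pose to the required entrance pose with a curve (a cubic Bezier here, which
is my simplification; the actual system generates the polygons on the fly in
Unity).

```python
import numpy as np

def twisty_hallway(start, start_dir, end, end_dir, samples=40, bulge=3.0):
    """Centerline from one doorway pose to another as a cubic Bezier.

    start/end: (x, z) doorway positions; *_dir: unit facing vectors.
    The control points push the corridor outward so it can curve
    back on itself; walls would be extruded along this line.
    """
    p0, p3 = np.asarray(start, float), np.asarray(end, float)
    p1 = p0 + bulge * np.asarray(start_dir, float)
    p2 = p3 - bulge * np.asarray(end_dir, float)
    t = np.linspace(0.0, 1.0, samples)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

path = twisty_hallway((0, 0), (0, 1), (4, 0), (0, -1))
```

And you do get a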
kind of a sense of like again, that general sense of like something fishy is
going on here. But because we're not employing any of these motion
illusions, there's no real risk of inducing additional simulator sickness.
So I think this is a really cool technique that could be used for
entertainment and for experiences in general where the individual layout of
the environment doesn't so much matter. So educational experiences, museum
exhibits. Things where you're trying to experience content but the exact
spatial layout is irrelevant to the experience. And trying to move this now
really into practice -- I mentioned that ICT has a lot of DoD funding -- what
we want to do is again do the same thing we did with HMDs and make it
possible for people, our funders and also just the general public, to make
use of these techniques in this toolbox. So this is the Redirected Walking
Toolkit, which we built for Unity. We've actually completed it and we're just
getting the website up; it will be released by the IEEE VR conference in
March, open source, for Unity. What we're really doing here is trying to
build all of these more generalizable techniques into a toolkit that's plug
and play, so I can just hand it to a developer and they don't need to know
about all the math and the perception. They just need to tell it how big the
tracking space is and where they want to go in the environment. This was an
example where we're actually planning waypoints, so we're telling it, this is
where we want you to be able to go, so the environment can plan it, and then
it will figure out the math and make that work.
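A hypothetical sketch of what that plug-and-play setup might look like; these
names are illustrative, not the toolkit's actual API.

```python
# Illustrative only -- not the actual Redirected Walking Toolkit API.
class RedirectionManager:
    def __init__(self, tracked_width_m, tracked_depth_m):
        # All the developer declares: the physical tracking bounds.
        self.bounds = (tracked_width_m, tracked_depth_m)
        self.waypoints = []

    def add_waypoint(self, x, z):
        # Virtual-world positions the experience needs to reach; the
        # planner works out the gains behind the scenes.
        self.waypoints.append((x, z))

manager = RedirectionManager(tracked_width_m=5.0, tracked_depth_m=5.0)
manager.add_waypoint(10.0, 0.0)
manager.add_waypoint(10.0, 25.0)
```

And one of the real reasons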
we're doing this now is because we are finally seeing a consumer-level
wide-area tracking system, with the HTC Vive coming out, which can get
tracking in around five by five meters, or four by four meters, something
like that. So now you can start to see, okay, we're not getting up to the
huge spaces, but we're starting to see consumer-level tracking that can
actually allow some movement. And I'm going to give you an example of how we
use redirected walking, using the toolkit, within a Vive setup. This is an
example of an environment that we did for the SIGGRAPH AR/VR contest last
year, which actually won first place at the contest, and this is
in collaboration with our partners at the School of Cinematic Arts. So what
they did -- what you saw there was a turntable. We're working with
stop-motion animators, and there's a rig that spins the object around, with
an image taken for every degree, and so you're able to capture the image at
every angle. And what
we're doing now is doing image-based light field rendering within an HMD. So
we created this experience for the conference but then they said, oh, you've
been invited as a finalist. You have to bring it to the conference. Here's
our demo space and we're like, the environment doesn't -- our environment
doesn't fit within your demo space and our environment, like the key to this
stuff is really being able to move around it because you get all these
specular reflections and subsurface scattering and all of these really fine
visual elements that don't come through in geometric rendering but you really
get from these image-based light field approaches, so that movement, that
physical body movement around it is really, really important. So what we did
was we basically went through and used the redirected walking toolkit, and
this is that same zigzag idea that the redirected walking paper from Chapel
Hill did, but with the actual physical space dimensions you would get with a
Vive. So now, we give them verbal instructions: there's a narration that says
turn to this exhibit, and it explains things linearly because there's a
progression to the exhibits. As long as we're able to direct them to go where
we want in this environment, it works; you can see we put a couch here as a
kind of visual indicator of the scale, that this could actually work in a
living-room-sized space. And then, interestingly
enough, I do have this demo on my laptop. I know that there are some people
with Vive setups in this building, so if anyone who has a Vive setup wants to
experience this, I have some time this afternoon. I'm more than happy to come
by and you can actually see what it feels like. And so that's an
example where we know the path in advance. Another way of being able to deal
with this path prediction problem is letting the user plan the path. This was
one of our kind of early cheats: okay, we can't really let you walk anywhere,
and we're not going to tell you where to go in this kind of free-flowing
environment, so we give them an app and basically say, plan a route, and then
the algorithm will figure out how to get you where you want to go. So this is
a little bit more free. But not
totally free yet. Now we're working on the kind of totally free case. This is
the upcoming work that's just been accepted to IEEE 3DUI and will be
published later this year. What we're doing here is building short-term
prediction graphs based on the geometry of the environment and your
movements. We're basically leveraging a tool that's already available in all
these game engines for doing game AI: the navigation mesh. Navigation meshes
are basically how game AI does its path planning, so we're taking all of that
machinery that's built into these engines and using it to build up prediction
graphs about where the user is theoretically going to want to travel in VR.
This was just the initial technique; our next step is building this into the
predictive algorithms that we have implemented, and then we can measure the
expected performance advantages of doing that.
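A toy sketch of the prediction idea, assuming the navmesh has already been
reduced to a set of walkable nodes: nearby nodes that line up with the user's
heading get higher predicted probability.

```python
import numpy as np

def prediction_graph(user_pos, user_dir, navmesh_nodes, radius=5.0):
    """Score nearby navmesh nodes as likely short-term destinations.

    Nodes within `radius` are weighted by how well they align with
    the user's current heading (user_dir is a unit vector); the
    weights are normalized into a probability distribution."""
    scores = {}
    for node in navmesh_nodes:
        offset = np.asarray(node, float) - np.asarray(user_pos, float)
        dist = np.linalg.norm(offset)
        if 0.0 < dist <= radius:
            alignment = float(np.dot(offset / dist, user_dir))  # -1..1
            scores[node] = max(alignment, 0.0) / (1.0 + dist)
    total = sum(scores.values()) or 1.0
    return {node: s / total for node, s in scores.items()}
```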
>> [Indiscernible] notoriously static. Are they? And are you able to
dynamically change the level using [indiscernible]?
>> Evan Suma: So we actually -- I can't speak to that because we haven't --
we're dealing with the static case first. We're not even dealing with dynamic
yet. I do know that our preferred way to use [indiscernible] is not the Unity
built-in one but a package on the Asset Store that extends it. So I'm not
sure about the dynamic case. So then another technique that we have in the
toolbox here is reorientation techniques. So what happens if these techniques
fail? So let me start this over again. So
what happens is basically -- we try to predict as best we can, but at some
point, you're about to hit a wall. We
have to do some sort of failsafe or some sort of way of intervening and
maybe -- this is one potential trick that may or may not work in a particular
experience but we asked them to take a panoramic photo. So this is an
example where we basically just ask, take a panoramic photo. That's
something that most people with smartphones now are familiar with. And what
this basically does is it gives us an excuse to give them a spinning motion.
But that spinning motion occurs on the spot so we can basically do an
emergency reorientation away from a wall. So again, it's disruptive to the
experience. We don't want to do this too much, but as an emergency, when
everything else fails, it's better than having to take off the HMD or crash
into something. So you can see now how these kinds of techniques work. We try
to apply this continuously and predictively as best we can. But as they're
going through the environment, if eventually they end up about to hit a wall,
that's when you see the space spin around them; that's when we do some kind
of reorientation technique as a failsafe. So you can actually get through a
city-sized space right now with a little bit of interruption.
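A sketch of that failsafe logic, with illustrative constants: when a wall is
imminent, trigger the in-place spin (the panoramic-photo excuse) and apply an
amplified rotation gain until the heading points back into the room.

```python
import numpy as np

WALL_MARGIN = 0.5   # meters from a boundary that triggers the reset
SPIN_GAIN = 2.0     # amplified rotation gain during the distraction

def needs_reset(user_pos, bounds):
    """True when the user is within the safety margin of any wall of
    a rectangular (width x depth) space centered at the origin."""
    w, d = bounds
    x, z = user_pos
    return abs(x) > w / 2 - WALL_MARGIN or abs(z) > d / 2 - WALL_MARGIN

def reorient_step(physical_delta_yaw):
    # While the user spins in place ("take a panoramic photo"), the
    # virtual world rotates faster than they do, so one full virtual
    # turn leaves them physically facing away from the wall.
    return physical_delta_yaw * SPIN_GAIN
```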
The reorientation techniques, though, do provide a great metric for
evaluating the effectiveness of these algorithms. So what we did to move this
forward, as we're developing these algorithms more, is we developed a
simulation framework where we can tweak algorithm parameters, procedurally
generate paths through environments of different types and distances, and
systematically measure those reorientation triggers, those periods of
failure, as a minimization metric to try and make these algorithms better.
I'll give you one quick example because I'm almost out of time. So here's an
example of popular ways of doing steering for any sort of continuous system.
People in VR thought the convention was that steer to center is best: we just
kind of naively try to get you to be in the center of your physical space.
Other people have said, well, maybe it's best not to go to the center; the
algorithm should just have you kind of orbit around the center. So steer to
center versus steer to orbit was an argument; the steer-to-center people won
out, and conventional wisdom said steer to center is always better. But when
we started being able to do the simulation and go up to larger and larger
tracking sizes, what we actually found was that the conventional wisdom was
not true after a certain point: as the tracking size increased, and I'll just
highlight this here, there's an inflection point. So this is the relative
effectiveness, the derived metric based on the probability of getting those
reorientation triggers, those failsafe techniques. There's a range where the
steer-to-center algorithm outperforms, but at some point there's a crossover,
and steer to orbit, with sufficient space, is actually much better; in fact
it hits the theoretical maximum of never having to do a reorientation sooner,
with a smaller tracking area, than steer to center. So this was an
interesting paper we published last year, just one example of how we can use
simulation to better design these techniques.
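A toy version of that kind of comparison, only a sketch of the evaluation
idea and not the published framework: walk a noisy simulated user, steer the
physical path toward either the center or an orbit ring, and count boundary
resets.

```python
import numpy as np

def simulate(strategy, room=10.0, steps=20000, turn_rate=0.005, seed=0):
    """Count boundary resets for a random walker redirected toward
    either the room center or an orbit ring at half the radius."""
    rng = np.random.default_rng(seed)
    pos, heading, resets = np.zeros(2), 0.0, 0
    for _ in range(steps):
        if strategy == "center":
            target = -pos                        # pull toward center
        else:                                    # "orbit"
            r = np.linalg.norm(pos) or 1e-6
            ring = pos / r * (room / 4)          # nearest ring point
            tangent = np.array([-ring[1], ring[0]])
            target = (ring - pos) + tangent      # circle the center
        desired = np.arctan2(target[1], target[0])
        heading += turn_rate * np.sin(desired - heading)  # bounded turn
        heading += rng.normal(0.0, 0.02)                  # user wander
        pos += 0.01 * np.array([np.cos(heading), np.sin(heading)])
        if np.any(np.abs(pos) > room / 2):                # hit boundary
            resets += 1
            pos[:] = 0.0                                  # crude reset
    return resets

for strategy in ("center", "orbit"):
    print(strategy, simulate(strategy))
```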
And in the future, what we're really trying to do -- I think the holy grail
for redirected walking -- is not just one user but multiple users, because
now you don't just have to deal with the physical boundaries.
You have dynamic targets. You want to have people not bump into each other
but if someone wants to handshake or hand an object to another, you might
want to converge spaces so there's a convergence and divergence of individual
spaces which is a problem we've just barely begun to explore. And the
question is, I know five by five meters is pretty good for redirected walking
in some cases, but it's not going to work for multiple people. So how big a
space do we need? That's the kind of question we have to look at in the
simulations: how big, how scalable are these techniques? And I think this is
going to be increasingly interesting once we can go beyond what a Vive system
can do at the consumer level, and this is one of the reasons I'm really --
there are many interesting things about HoloLens, but the tracking on it, the
fact that it's all inside-looking-out and the tracking is on the device, is
something that I would love to see built into a pure VR headset, you know.
HoloLens being a mixed reality device is somewhat different, and these
techniques don't really translate well to that kind of realm, because you can
see the real world. But then you can start to see: if the device can just
track me and I don't need infrastructure, I can just go out to a parking lot,
a football field, and make ad hoc use of big empty spaces, and then you can
start to think
about, okay, I could theoretically see a multiuser system like this working
but you know, the tracking technology just needs to catch up with our dreams
and our goals. So with that, I'm just going to wrap up. Like I said, VR for
me, the real power and the reason I chose this field is because you can
create magical experiences. You can transcend the laws of physics and you
can do things that you can't even dream about doing in the real world, and
we've barely started; it's just the tip of the iceberg in terms of what VR
can do. And the
other thing is the role of researchers. I think it's really interesting
that -- this is from a blog post earlier this year where a random VR hobbyist
was thinking about the Vive, and he's like, what can I start to do in a Vive?
I want to go through a larger space. And he actually started sketching out
things about walking along circular arcs, walking in 90-degree turns. He
hadn't come up with the idea of marrying it to rotations or employing
illusions, but the hobbyists and the general public don't perform literature
reviews, and through brute force they're rediscovering this; with a little
bit more thinking and work on this, they'll come up with redirected walking.
So as researchers, that's kind of
the goal is to create tool kits and inform the general public so that they
don't brute force it and invent the wrong thing. All right. With that, I'd
like to thank -- I should acknowledge that this is the work of a lot of my
students: Ph.D. students, interns, undergrads, engineers; the work of a lot
of different people. With that, I'll take any other questions.
[Applause]
>> Are we doomed to walk in VR at least 25 percent faster than
[indiscernible]?
>> Evan Suma: I think we -- I don't know if I would call it doomed, because
one of the things that I do think about in VR is energy expenditure and
fatigue. I think that it would be great if I want to go through a large space
and I don't have to walk a mile, I only have to walk three-quarters of a
mile. Unless I'm trying to get exercise, at which point you might want to
slow it down and then get a higher distance. But it's not always a bad thing.
>> No, but then you will get the problem of inconsistency between senses,
right?
>> Evan Suma: Yes, yes. Although, like I said, a certain amount of
inconsistency is tolerable. And in fact, not just tolerable but
imperceptible. And that's not an attention thing. When it's a perceptual
effect, it's at so low a level that it's really more of a brain thing; it's
not at the level of your attention. Like, if it's imperceptible, you can't
detect it. It's really hard; you can't even do it, even if you try. But of
course individual variation exists. Yes?
>> I have a question, an observation, about the gravel path you showed. That
adds a very strong constraint to your system that really won't work with
free walking, right? Because once you exited the first building and came
back to the path, if you had gone back, you would have stepped immediately
like outside of the bounds of the room, right?
>> Evan Suma:
Right.
>> So but you could use different cues, like audio cues, right, to simulate
that he's walking on a different type of surface, even if he doesn't
physically feel it? Have you done anything with that?
>> Evan Suma: Yeah. We've done audio. We've played around with audio
sounds like the crunching sound. The other thing I played around with was
actually trying to build shoes which had a layer on the bottom that actually
gives you like different sensations on it or could even -- we tried to cut it
at an angle to kind of twist your foot a little bit in a direction and bias
you towards walking a little to the right or the left. Turns out that works
when you close your eyes, but when you're walking, your vision dominates and
you just try to correct for it anyway. So it didn't work out so great. So we
tried to look at a bunch of different ways of being able to do that. But to
your point, yeah. You know, all these techniques have different limitations
in terms of generalizability, and when they would be useful or not, and
different ways of violating those assumptions; but the point is that each of
those techniques has different assumptions. So you
pick the ones that are best and in combination, you could potentially get
away with quite a lot.
>> So that's true for all the variables, right? I mean, like the level of
fidelity you were mentioning -- in a research-drawn space, if the level of
fidelity had been a lot higher, then in the whole equation of comfort, it
would have probably mitigated against other things that were created
[indiscernible]. Does that
make sense?
>> Evan Suma: Yes. And in fact, I think some of this actually needs to be
empirically measured when we get to the systems that can render at greater
than 90 hertz and have really low latency tracking, because some of the
techniques that were acceptable before, because the manipulations might have
been masked by latency and jitter, might not be anymore. And we need to
reevaluate them under these new circumstances. Yeah.
>> Most of the measurements that you do are kind of verbal scores in some
sense. Like, you know, have you noticed, where was the door? But I noticed
in one of the pictures that you had [indiscernible] hat on wearing the
headset. Have you thought about any more kind of biological sensing or
electrical sensing of -- that what people might not report being noticeable
but actually is noticeable by the brain?
>> Evan Suma: Yeah. I haven't personally. There are people at ICT who have
done that. I think they tend to be more in the medical VR sector. And so I
think that there are a lot of difficulties with doing this with large-scale
walking, because there's all the physical movement. I haven't personally done
anything with brain scans, but over time I've become less enamored of using
verbal reports and subjective measures. Which is why I started to use things
like those distance
estimation studies and started to look at designing experimental tasks where
if I can't go for a psychophysical biological signal, I can hopefully get
some sort of objective behavioral signal and I can measure user behavior
instead of relying on just a self-report which has all those problems.
>> So far you have assumed there's sort of an empty area. Have you looked
at any like having objects that the user would need to avoid, physical
objects and redirecting them around those objects? Because I think living
rooms or houses could be a lot larger if you can force them to kind of go
through from room to room as they walk around.
>> Evan Suma: Yeah. I think we haven't really as a field yet tackled that.
We've just started, with our evaluation framework, to test non-square spaces,
because we started to realize, wait, there's no reason why these have to be
squares. In fact, ours is rectangular. So we're starting to look at different
shapes and how shape affects things -- shape is actually interesting because
you can get long walks in one direction but very short walks in the other. As
for obstacles like that, no, we haven't really done that, with the exception
of Luv Kohli's work, also from UNC Chapel Hill. About 5 or 6 years ago, he
had a paper where he tried to combine passive haptics with redirected
walking. He was using the rotation techniques, so he picked cylinders,
because they're rotation invariant. He could put a cylinder in the space,
redirect someone, and the space could be circularly rearranged around it, but
they could always reach out and touch the cylinder. But navigating around
obstacles, I think, yeah, that's not an area we've worked in. It is an area
for future work. Yeah.
>> I had a clarifying question for myself. I couldn't remember, in the first
part of the talk, as people are walking what they think is straight, do you
sometimes curve them as they're walking straight, or is it mostly that you do
the variation as they turn themselves, like when --
>> Evan Suma: Yeah. They're two distinct techniques. Rotation gains are done
during your rotations.
>> Right.
>> Evan Suma: Curvature gains are done when you're walking straight. So
you're walking straight and there's just a slight continuous rotation that
gets you to bend your path, and both have their own thresholds. But they've
always been measured separately; we are now doing a study where we're
applying them simultaneously, because we think that
there's a combined effect that hasn't been empirically measured yet. But
they are two distinct [indiscernible]. Yeah.
>> Is there anything interesting to say about vehicles? So if you had a
vehicle sitting in your lab and you hopped into it, started it up, drove
someplace and then got out, I mean, that's I guess not that interesting to
you. I'm
wondering if there's something else to be done in that space.
>> Evan Suma:
I haven't thought about --
>> I was thinking about that as one of your examples had a Jeep sitting
around.
>> Evan Suma: Oh, yeah, yeah, yeah. No. That was just a -- yeah, that was
a simulator. I haven't used that -- done anything researchwise with it
because I think once you exit the vehicle simulator, once you enter and exit,
you can be in completely different places visually.
>>
[Indiscernible] interesting.
Might just work.
>> Evan Suma: Yeah. So the perceptual manipulations that we've been asked to
consider for vehicle simulations are more about haptic control surfaces. The
Army, for example, wants easily reconfigurable simulators for prototypes, so
they want to be able to do VR environments that can have these kinds of
dynamically repurposable haptic surfaces. So that's kind of where I see more
perceptual manipulation potentially being employed.
>> Hrvoje Benko:
Well, let's thank Evan.
[Applause]
>> Evan Suma:
Thank you.
>> Hrvoje Benko: And he'll be around.