Large Displays Enhance Spatial Knowledge of a Virtual Environment

Jonathan Z. Bakdash
Department of Psychology
University of Virginia
jzb3e@virginia.edu
Jason S. Augustyn
U.S. Army Natick Soldier Systems
Center
jason.augustyn1@us.army.mil
Dennis R. Proffitt
Department of Psychology
University of Virginia
drp@virginia.edu
Abstract
Previous research has found performance for several
egocentric tasks to be superior on physically large displays
relative to smaller ones, even when visual angle is held
constant. This finding is believed to be due to the more
immersive nature of large displays. In our experiment, we
examined if using a large display to learn a virtual
environment (VE) would improve egocentric knowledge of the
target locations. Participants learned the location of five
targets by freely exploring a desktop large-scale VE of a city
on either a small (25” diagonally) or large (72” diagonally)
screen. Viewing distance was adjusted so that both displays
subtended the same viewing angle. Knowledge of the
environment was then assessed using a head-mounted display
in virtual reality, by asking participants to stand at each target
and point at the other unseen targets. Angular pointing error
was significantly lower when the environment was learned on
a 72” display. Our results suggest that large displays are
superior for learning a virtual environment and the advantages
of learning an environment on a large display may transfer to
navigation in the real world.
CR Categories: H.5.1 [Multimedia Information Systems]:
Artificial, augmented, and virtual realities; H.5.2 [Information
Interfaces and Presentation]: User Interfaces – Screen design,
User-centered design; J.4 [Social and Behavioral Sciences]:
Psychology.
Keywords: display size, navigation, virtual reality,
immersion, presence
1 Introduction
Physically large displays are becoming increasingly popular,
presumably because the immersive nature of large displays
results in a greater feeling of presence or a sense of “being
there”. Previous research has found that scenes from movies were
liked better on a large screen than on a small one [Reeves and
Nass 1996]. People also reported feeling more like they were
“there” when watching action movie clips on a large screen
[Reeves et al. 1993].
Large displays also improve performance on cognitive tasks.
Memory for the content of movie scenes was improved on large
screens [Reeves and Nass 1996]. Additionally, performance on
egocentric (viewer-centered frame of reference) cognitive
tasks is superior on a large display compared to a small one,
even when the visual angle (angle the stimulus subtends on the
retina) between the displays is identical (see Tan [2004]). For
cognitive tasks that were exocentric (object-centered frame of
reference), Tan [2004] found no performance difference
between a small and large display.
Tan suggested that this advantage may stem from large
displays biasing participants toward developing egocentric
strategies, which do not appear to be beneficial for exocentric
tasks.
One of the egocentric tasks used in Tan’s dissertation was path
integration on a large and small display (for a summary see
Tan et al. [2004]). In this experiment participants saw a first
person, egocentric view of a sparse VE and used a joystick to
move along a path. Since the VE contained no landmarks,
participants continuously updated their spatial position relative
to the starting point using velocity and acceleration cues from
optic flow. They then attempted to return to where they believed
the origin was, which was not indicated by any features in the
environment. Distance errors relative to the origin were lower
in the large display condition, but no difference between display
sizes was found for the heading angle to the origin.
One way people are able to effectively reach a particular
location while navigating through an environment is spatial
updating, the process of keeping track of where things are in
the environment after they have gone out of sight. Spatial
updating is inherently egocentric; locations in the environment
are updated relative to the self [Rieser 1989]. One example of
spatial updating is walking into a room and still knowing the
location of the door once it is out of sight. Another is moving
through a city toward a destination, having the route blocked
by construction, and taking a detour while still knowing the
location of the destination.
In the current work, we wanted to determine if the benefits of
learning a VE on a large display would extend to a complex
navigation task and transfer to a fully immersive test
environment by using virtual reality (VR). Knowledge of the
VE was assessed by having participants point at unseen target
locations, measuring their ability to perform spatial updating.
2 Experiment
In our study, participants freely explored a VE on either a 25”
or 72” (measured diagonally) projected display. They were
instructed to find a set of targets scattered throughout the
environment and learn where these targets were in relation to
each other. In order to examine physical display size
independent of field of view, the visual angle of both displays
was fixed at 15.8°. The experiment was divided into a learning
phase (on a large or small display) and a test phase (in VR).
Figure 1: Display size setup. a) 20” wide small display. b) 58” wide large display. c) Bird’s eye view of the small and large
displays. Note that the visual angle was the same for the small and large displays; both subtend a horizontal field of view of 15.8°.
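As a check on the geometry above, the horizontal visual angle of each display can be computed from its image width and viewing distance using the standard formula 2·atan(width / (2·distance)). This is a quick sketch, not the authors' code; the dimensions are taken from the text.

```python
import math

def visual_angle_deg(width_in, distance_ft):
    """Horizontal visual angle (degrees) of a flat screen viewed head-on."""
    distance_in = distance_ft * 12.0
    return math.degrees(2 * math.atan(width_in / (2 * distance_in)))

small = visual_angle_deg(20, 6)      # small display: 20" wide at 6 ft
large = visual_angle_deg(58, 17.33)  # large display: 58" wide at 17.33 ft
print(f"small: {small:.1f} deg, large: {large:.1f} deg")
```

Both values come out at approximately 15.8°, matching the reported field of view.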
2.1 Participants
Thirty-eight (19 male, 19 female) University of Virginia
students, ranging in age from 18 to 23, participated in the
experiment. Participants either received course credit or were
paid. All had normal or corrected-to-normal vision.
2.2 Virtual Environment and Equipment
The virtual city environment was created in Alice 99 and was
approximately 150 meters by 200 meters in size. Participants
traveled along streets which were laid out in an irregular grid.
Five target locations were placed throughout the environment
such that no other targets were visible when standing at any
particular target. The average straight-line distance between
two targets (passing through buildings and any other objects)
was 96 meters. Movement through the VE was controlled
using a Saitek Cyborg EVO joystick; the throttle controlled
walking speed, and heading direction was adjusted with the
stick.
Learning Phase: During the learning phase of the experiment,
the VE was rendered at 640 x 480 and 60 frames per second
on a Dell Dimension 8250 computer equipped with a GeForce
4 Ti 4200 graphics card running Windows XP. An Epson
PowerLite 811p projector and a Sharp Notevision 6 projector
were used to display the environment in the small and large
screen conditions, respectively. A switchbox was used to send
video output to the appropriate projector. The projectors were
qualitatively equated for brightness and contrast.
The images for both projectors were displayed on a DA-LITE
screen. To hold viewing angle constant at 15.8°, the screen
was positioned 6 feet from participants in the small display
condition and 17.33 feet from participants in the large display
condition. The small display image was 20” wide and 15” tall;
the large display image was 58” wide and 43” tall. Figure 1
shows each display condition with its viewing distance and
field of view.
Testing Phase: In the testing phase, the same VE was rendered
with Alice 99, but it was viewed through a Virtual Research V8
head-mounted display (HMD), which provided a 48°
horizontal field of view. A Dell Precision 360 computer with
GeForce 4 MX420 and GeForce 4 MX200 graphics cards
provided stereo images to the HMD, rendered at 640 x 480
and 60 frames per second. An Intersense Model IS-900 motion
tracking system registered head movements to appropriately
update the HMD images. Participants rotated in place and
looked around, pointing at unseen targets by pressing the
bottom button on a tracked wand, shown in Figure 2. Angular
pointing error was measured as the deviation from the center
of a target, ignoring elevation.
Figure 2: On the left is a person in VR wearing a head-mounted display and holding the wand used for pointing. On the
right is the representation of the wand in VR.
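The pointing-error measure described above can be sketched as the absolute horizontal angle between the direction the wand points and the direction to the target center, with elevation ignored. This is a hypothetical illustration, not the authors' actual code; positions are in arbitrary ground-plane coordinates and headings are in degrees.

```python
import math

def angular_pointing_error(observer_xy, target_xy, pointed_heading_deg):
    """Absolute horizontal angular error in degrees, in [0, 180]."""
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    true_heading = math.degrees(math.atan2(dy, dx))
    # Wrap the difference into [0, 180] so error is direction-independent.
    diff = (pointed_heading_deg - true_heading) % 360.0
    return min(diff, 360.0 - diff)

# Example: target 50 m due "east" of the observer, pointing 20 deg off.
err = angular_pointing_error((0.0, 0.0), (50.0, 0.0), 20.0)
print(err)  # 20.0
```

The modulo wrap ensures that, for example, pointing at 350° toward a target at 10° counts as a 20° error rather than 340°.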
2.3 Design
Two conditions were tested in a between-subjects design. Nine
male and seven female participants completed the study in the
25” display condition, and seven male and nine female
participants completed it in the 72” display condition. Six
participants were unable to complete the experiment due to
motion sickness.
2.4 Procedure
Prior to starting the experiment participants completed the
Santa Barbara Sense of Direction Scale (SBSOD) [Hegarty et
al., 2002]. In addition, participants also indicated their level of
experience with first person video games and the average
number of hours per week spent playing first person video
games.
Participants were instructed how to use the joystick. They then
practiced navigating in a VE, similar in appearance to the
actual experiment environment, for about a minute. To ensure
participants were able to use the joystick, they had to walk
around a city block in the practice VE in less than 45 seconds.
Only two participants were unable to circle the block on their
first try. These participants were given additional practice and
were then able to meet the criterion for mastering the joystick.
Learning phase
In the learning phase participants were given 20 minutes to
freely explore the VE, starting at one of the target locations
selected at random. They were told to learn the locations of the
five targets (tank, school, helicopter, gazebo, humvee) relative
to each other. They were also instructed that their knowledge
of the environment would be tested in VR by standing at each
of the five targets and pointing at the other, unseen targets.
The exploration time was sufficient for all participants to visit
every target at least once.
Testing phase
In VR, participants stood at each of the five target locations
and pointed at the other four unseen targets with the wand. The
order of the locations stood at, and of the targets pointed at,
was randomized for each participant.
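The trial structure above (each of the 5 locations paired with each of the other 4 targets, order randomized per participant) can be sketched as follows; this is an illustrative reconstruction, not the authors' code.

```python
import random

TARGETS = ["tank", "school", "helicopter", "gazebo", "humvee"]

def make_trials(rng):
    """Return (location, target) pointing trials in a randomized order."""
    locations = TARGETS[:]
    rng.shuffle(locations)          # randomize the order of locations stood at
    trials = []
    for loc in locations:
        others = [t for t in TARGETS if t != loc]
        rng.shuffle(others)         # randomize the order of targets pointed at
        trials.extend((loc, t) for t in others)
    return trials

trials = make_trials(random.Random(0))
print(len(trials))  # 20 pointing trials (5 locations x 4 targets each)
```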
3 Results
Learning phase
During learning, participants traveled approximately the same
distance through the VE and visited the same total number of
targets regardless of display size, indicating that exposure to
the environment was equivalent under both screen size
conditions. Independent samples t-tests showed that neither the
distance traveled through the VE, small display (M = 2368.62
meters) versus large display (M = 2228.00 meters), nor the
total number of targets visited, small display (M = 21.54
targets) versus large display (M = 20.50 targets), differed
significantly (all ps > .05). Due to a technical problem,
learning data were missing for three participants in the small
display condition and two in the large display condition.
Testing phase
Angular pointing error was assessed using absolute values,
collapsed across the location pointed from and the target
pointed at, for each screen size. An independent samples t-test
showed that absolute pointing error in the small display
condition (M = 27.99°) was significantly greater than in the
large display condition (M = 13.46°), t(30) = 2.29, p = 0.03
(see Figure 3). No relationship was found between angular
pointing error and either SBSOD score or the video game
experience measures.
Figure 3: Absolute value of angular pointing error (degrees) for
the small display and large display conditions.
4 Discussion
This study shows that learning a complex VE on a physically
large display compared to a small one promotes the transfer of
spatial knowledge to a fully immersive test environment. The
superiority of the large display condition is demonstrated by
the finding that it resulted in less than half the angular pointing
error of the small display condition.
Since visual angle was equated, our findings provide further
evidence that it is the physical size of the display that is
important for cognitive task performance. More specifically,
Tan [2004] and Tan et al. [2004] have reported that egocentric
task performance is superior on large displays because they are
more likely to evoke egocentric strategies. The promotion of
egocentric frames of reference by larger displays may be
linked to the increased sense of presence they afford.
In this study, there was an inherent confound between viewing
distance, physical display size, and field of view. Future
research could use virtual displays in VR to disentangle these
variables, similar to the experiment design used by Dixon et al.
[2000].
Participants’ knowledge of the VE was assessed in VR,
suggesting that learning a computer generated version of a real
environment on a large display may transfer to the real world,
leading to improved navigation performance. However,
previous research by Darken and Banker [1998] and Darken
and Goerger [1999] found negligible real world transfer
performance with VEs.
Future research could investigate unexplored areas in display
size research such as using an exocentric navigation task like
map learning and passive learning of a large scale
environment.
Acknowledgments
This research was supported by NBCH Grant 1050023 from
DARPA to the third author.
References
Darken, R.P. & Banker, W.P. 1998. Navigating in Natural
Environments: A Virtual Environment Training Transfer
Study. In Proceedings of VRAIS 1998, 12-19.
Darken, R.P., & Goerger, S.R. 1999. The Transfer of
Strategies from Virtual to Real Environments: An Explanation
for Performance Differences? In Proceedings of Virtual
Worlds and Simulations 1999, pp. 159-164.
Dixon, M.W., Wraga, M., Proffitt, D.R., & Williams, G.C.
2000. Eye height scaling of absolute size in immersive and
nonimmersive displays. Journal of Experimental Psychology:
Human Perception and Performance, 26, 582-593.
Hegarty, M., Richardson, A., Montello, D., Lovelace, K., &
Subbiah, I. 2002. Development of a self-report measure
of environmental spatial ability. Intelligence, 30, 425-447.
Reeves, B., Detenber, B., & Steuer, J. 1993. New televisions:
The effects of big pictures and big sound on viewer responses
to the screen. Paper presented at the Annual Conference of the
International Communication Association.
Reeves, B. & Nass, C. 1996. The Media Equation: How
People Treat Computers, Television, and New Media Like Real
People and Places, 193-201.
Rieser, J.J. 1989. Access to knowledge of spatial structure at
novel points of observation. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 15, 1157-1165.
Tan, D.S. 2004. Exploiting the cognitive and social benefits of
physically large displays. PhD thesis, Carnegie Mellon
University.
Tan, D.S., Gergle, D., Scupelli, P.G., & Pausch, R. 2004.
Physically large displays improve path integration in 3D
virtual navigation tasks. In Proceedings of CHI 2004
Conference on Human Factors in Computing Systems, 439-446.