Communication in Virtual Environments
Trevor J. Dodds, Betty J. Mohler, Heinrich H. Bülthoff
Introduction
Virtual environments (VEs) are of interest (a) as objects of study in themselves; (b) as
applications for the real world, e.g. medical training [1], urban planning [2], and collaboration; and
(c) as tools with which one can research 'real world' experiences. Collaboration in virtual
environments requires both interaction and communication with the other parties involved. Here,
we focus on the communication aspect. When we talk to each other face-to-face, body
gestures naturally accompany our speech [3]. Using state-of-the-art motion capture tracking,
we can map real-time body motion onto a virtual character, creating 'self-animated' avatars.
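As an illustration of this self-animation step, the following Python sketch shows one way tracked joint rotations could be retargeted onto an avatar's skeleton frame by frame. It is a minimal sketch under assumed interfaces: the frame format, the JOINT_MAP, and the avatar/stream objects are hypothetical and do not describe our actual motion capture pipeline.

# Hypothetical sketch: copy tracked joint rotations from a motion-capture
# stream onto the matching bones of a virtual character each frame,
# yielding a 'self-animated' avatar. All names are illustrative.
from dataclasses import dataclass
from typing import Dict, Tuple

Quaternion = Tuple[float, float, float, float]  # (w, x, y, z)

# Map tracked-skeleton joint names to avatar bone names (assumed naming).
JOINT_MAP: Dict[str, str] = {
    "Hips": "pelvis",
    "Spine": "spine_01",
    "LeftArm": "upperarm_l",
    "RightArm": "upperarm_r",
    "Head": "head",
}

@dataclass
class Frame:
    timestamp: float
    rotations: Dict[str, Quaternion]          # joint name -> local rotation
    root_position: Tuple[float, float, float]

def apply_frame(frame: Frame, avatar) -> None:
    """Retarget one motion-capture frame onto the avatar's skeleton."""
    avatar.set_root_position(frame.root_position)
    for mocap_joint, bone in JOINT_MAP.items():
        rotation = frame.rotations.get(mocap_joint)
        if rotation is not None:
            avatar.set_bone_rotation(bone, rotation)

def run(mocap_stream, avatar) -> None:
    """Real-time loop: read each tracked frame and drive the avatar with it."""
    for frame in mocap_stream:  # blocks until the next frame arrives
        apply_frame(frame, avatar)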
Goals
The goal of the present project is to investigate the effect of body gestures on communication in
virtual reality. Further, we design experiments to gain insight into the role of body language
in face-to-face communication in the real world.
Methods
Three studies analysed performance in a communication task. The first two studies used
head-mounted display (HMD) virtual reality; the third used a large-projection virtual
environment. Participants worked in pairs. One participant in each pair, the 'describer', had
to describe the meanings of words to their partner, the 'guesser', who had to infer the word
being described. Our main comparison was between static and self-animated avatars.
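For illustration, the sketch below captures the logic of the word-description task: the describer works through a list of words, and each word ends as either guessed correctly or given up, logged per avatar condition. This is a hypothetical sketch of the procedure, not the software used in the studies; the names and the example word list are invented.

# Hypothetical sketch of the word-description task logic.
import random
from dataclasses import dataclass, field
from typing import List

CONDITIONS = ("static", "self_animated")

@dataclass
class TrialLog:
    condition: str
    correct: List[str] = field(default_factory=list)
    given_up: List[str] = field(default_factory=list)

def run_block(words: List[str], condition: str, get_outcome) -> TrialLog:
    """One block: the describer describes each word; each outcome is 'correct' or 'given_up'."""
    log = TrialLog(condition=condition)
    for word in words:
        outcome = get_outcome(word)  # stand-in for the live guessing interaction
        if outcome == "correct":
            log.correct.append(word)
        else:
            log.given_up.append(word)
    return log

if __name__ == "__main__":
    words = ["lighthouse", "harvest", "echo"]
    random.shuffle(words)
    demo = run_block(words, "self_animated",
                     lambda w: random.choice(["correct", "given_up"]))
    print(f"{demo.condition}: {len(demo.correct)} correct, {len(demo.given_up)} given up")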
Initial results
In the first study, we found that participants performed better in the communication task with
bidirectional nonverbal communication in a third-person perspective, i.e. when participants were
aware of their own avatar. Describers also gave up on more words when their
partner's avatar was static, i.e. when no nonverbal feedback was available. In the second study, we
further investigated the importance of nonverbal feedback and found that participants
performed worse when the guessing avatar was animated by a plausible but unintelligent
pre-recorded animation [4]. In both studies, participants moved more during the same task in a
real-world face-to-face setting. In the third study, we replicated our previous results in a
first-person perspective in a large-projection virtual environment. In this new scenario, participants
moved as much as we had previously recorded in the real world.
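One simple way to quantify how much participants moved from motion capture data is the summed frame-to-frame path length of the tracked joints over a trial. The sketch below illustrates that measure; it is an assumed, illustrative analysis, not necessarily the one used in the studies.

# Hypothetical movement measure: total translational path length of
# tracked joints, summed over one trial (e.g. in metres).
import math
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

def path_length(positions: List[Vec3]) -> float:
    """Sum of frame-to-frame Euclidean distances for one joint."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def total_body_movement(joint_tracks: Dict[str, List[Vec3]]) -> float:
    """Aggregate movement across all tracked joints for one trial."""
    return sum(path_length(track) for track in joint_tracks.values())

# Example: two joints tracked over three frames (coordinates in metres).
tracks = {
    "left_hand": [(0.0, 1.0, 0.3), (0.1, 1.0, 0.3), (0.2, 1.1, 0.3)],
    "head": [(0.0, 1.7, 0.0), (0.0, 1.7, 0.0), (0.01, 1.7, 0.0)],
}
print(f"Total movement: {total_body_movement(tracks):.3f} m")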
Initial conclusion
Taken together, the results show that nonverbal communication is beneficial to a word
description task in virtual environments; that awareness of our own bodies is important for this
benefit to take place; and that real nonverbal feedback cannot be substituted by an
unintelligent animation. The latter finding is relevant to the generation of avatars that attempt
to provide nonverbal feedback automatically. In addition, providing users with a scenario that
enables them to move at similar levels as they do when talking face-to-face is important for
making meaningful comparisons to interpersonal communication in the real world.
References
1. Alexandrova IV, Rall M, Breidt M, Tullius G, Kloos C, Bülthoff HH and Mohler BJ (in press)
Enhancing Medical Communication Training Using Motion Capture, Perspective
Taking and Virtual Reality. 19th Medicine Meets Virtual Reality Conference (MMVR
2012).
2. Dodds TJ and Ruddle RA (2009) Using mobile group dynamics and virtual time to
improve teamwork in large-scale collaborative virtual environments. Computers &
Graphics 33(2): 130-138.
3. McNeill D (2007) Gesture & Thought. The University of Chicago Press, Chicago and
London.
4. Dodds TJ, Mohler BJ and Bülthoff HH (2011) Talk to the virtual hands: Self-animated
avatars improve communication in head-mounted display virtual environments.
PLoS ONE 6(10): e25759. doi:10.1371/journal.pone.0025759.
5. Dodds TJ, Mohler BJ, de la Rosa S, Streuber S and Bülthoff HH (2011) Embodied
Interaction in Immersive Virtual Environments with Real Time Self-animated Avatars.
Workshop Embodied Interaction: Theory and Practice in HCI (CHI 2011), ACM Press,
New York, NY, USA, 132-135.
6. Dodds TJ, Mohler BJ and Bülthoff HH (2010) A Communication Task in HMD Virtual
Environments: Speaker and Listener Movement Improves Communication. 23rd
Annual Conference on Computer Animation and Social Agents (CASA 2010).
Fig. 1. Two participants in a shared virtual environment, with static avatars (left) and self-animated avatars
(right).
Fig. 2. Two participants in a shared virtual environment with self-animated avatars. The avatars and
environments were rendered from the perspective of the participants; therefore, in the images, particularly in the
multi-projection display, the projected image appears distorted, although it would not have appeared distorted to the participants themselves.