CAOS Presentation: SIGGRAPH 2001

ACM STUDENT CHAPTER MEETING!
RECENT ADVANCES IN VIRTUAL REALITY
Bill White will present a survey of several VR research projects that
were demonstrated at SIGGRAPH 2001 in Los Angeles this summer.
THURSDAY – SEPTEMBER 20, 2001
12:30 – 2:00 PM
ENGINEERING BUILDING 1033
FREE PIZZA & SODA - REAL, NOT VIRTUAL!
The Woods Hole Oceanographic Institution commissioned the
development of a Web-based simulator of the ALVIN Deep Submersible
Vehicle, which is used for scientific research at extreme ocean depths.
The simulator is used as a training tool for scientists to plan and
rehearse their dives, in an effort to budget energy consumption (e.g.,
outside lights, propulsion tanks, manipulator usage) and extend the
length of dives.
TECHNICAL SKETCH
ALVIN ON THE WEB
JAN JUNGCLAUS,
FRAUNHOFER CENTER FOR RESEARCH IN COMPUTER GRAPHICS
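As a rough illustration of the kind of energy budgeting the ALVIN simulator supports, here is a minimal Python sketch; the activity names, power figures, and battery capacity are assumptions for illustration, not ALVIN's actual parameters.

# Hypothetical dive energy-budget sketch (illustrative figures only).
# Each planned activity is (power draw in kW, planned hours of use).
PLANNED_ACTIVITIES = {
    "outside_lights": (1.2, 4.0),
    "propulsion":     (5.0, 2.5),
    "manipulator":    (0.8, 1.0),
    "life_support":   (0.5, 6.0),   # assumed to run for the whole dive
}

BATTERY_CAPACITY_KWH = 40.0  # assumed usable capacity

def dive_energy_budget(activities, capacity_kwh):
    """Return total planned energy use (kWh) and the remaining margin."""
    used = sum(power_kw * hours for power_kw, hours in activities.values())
    return used, capacity_kwh - used

used, margin = dive_energy_budget(PLANNED_ACTIVITIES, BATTERY_CAPACITY_KWH)
print(f"planned use: {used:.1f} kWh, margin: {margin:.1f} kWh")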
Using a data-glove capable of measuring 3-D position as well as applied
force, a doctor performs a palpation exam on real humans.
The resulting measurements are then used to produce a haptic
simulation of a virtual human abdomen, with realistic force feedback of
soft tissue malleability.
TECHNICAL SKETCH
VIRTUAL HUMAN ABDOMEN
KEVIN CHUGH,
VR LAB – STATE UNIVERSITY OF NEW YORK AT BUFFALO
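The measured force data drive a haptic model of soft tissue; below is a minimal sketch of one common approach (penalty-based spring forces), assumed here for illustration rather than taken from the authors' method. The stiffness value and planar surface are illustrative assumptions.

import numpy as np

# Hypothetical penalty-based haptic rendering sketch: when the tracked finger
# penetrates the virtual abdominal surface (modeled as the plane z = 0), push
# back with a spring force proportional to penetration depth.
TISSUE_STIFFNESS = 150.0   # N/m, illustrative value only
SURFACE_HEIGHT = 0.0

def feedback_force(finger_pos):
    """Return the force vector to send to the haptic device."""
    penetration = SURFACE_HEIGHT - finger_pos[2]
    if penetration <= 0.0:
        return np.zeros(3)                              # no contact, no force
    return np.array([0.0, 0.0, TISSUE_STIFFNESS * penetration])

print(feedback_force(np.array([0.0, 0.0, -0.01])))      # 1 cm penetration -> 1.5 N upward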
Mixed-reality systems using see-through optical displays have
traditionally been unable to deal with mutual occlusion.
Instead, virtual objects have been (a) transparent, (b) opaque but
occluding real objects, or (c) opaque but occluded by real objects.
ELMO (an Enhanced optical see-through display using an LCD panel for
Mutual Occlusion) uses a 5-camera system to capture depth maps of the
real scene, applying this information to produce a Z-buffer that yields
correct mutual occlusion.
TECHNICAL SKETCH
ELMO
KIYOSHI KIYOKAWA,
COMMUNICATIONS RESEARCH LABORATORY
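A minimal sketch of the per-pixel decision implied by ELMO's depth-map approach is shown below; the array-based depth maps and LCD mask here are assumed inputs for illustration, not ELMO's actual data structures.

import numpy as np

def resolve_mutual_occlusion(real_depth, virtual_depth, virtual_rgb):
    """Per pixel: if the virtual object is nearer than the real scene, opacify
    the LCD mask and show the virtual pixel; otherwise leave the LCD clear so
    the real object shows through and occludes the virtual one."""
    virtual_in_front = virtual_depth < real_depth          # Z-buffer comparison
    lcd_mask = virtual_in_front.astype(np.uint8)           # 1 = block real light
    display = np.where(virtual_in_front[..., None], virtual_rgb, 0)
    return lcd_mask, display

# Toy 2x2 example: the virtual object is nearer only in the left column.
real_depth = np.array([[2.0, 1.0], [2.0, 1.0]])
virtual_depth = np.full((2, 2), 1.5)
virtual_rgb = np.full((2, 2, 3), 255, dtype=np.uint8)
mask, image = resolve_mutual_occlusion(real_depth, virtual_depth, virtual_rgb)
print(mask)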
Using special props, the user provides depth, lighting, and background
data to an enhanced reality system.
The user can then modify the video image to include certain special
effects, such as rendering synthetic objects.
TECHNICAL SKETCH
ENHANCED REALITY
RICHARD MARKS,
SONY COMPUTER ENTERTAINMENT
Head-mounted projective displays, containing two miniature projective
lenses and a non-distorting retro-reflective sheeting material, can be
applied to interactive augmented environments, without the occlusion
and distortion problems associated with most head-mounted systems.
This system has been applied to a computer-generated game of GO
between remote opponents.
TECHNICAL SKETCH
HEAD-MOUNTED PROJECTIVE DISPLAY
HONG HUA,
UNIVERSITY OF ILLINOIS, URBANA-CHAMPAIGN
CAVE-based display systems typically differ geometrically from the
environments that they are displaying, forcing the user to resort to
walking in place or to “virtual flying” in order to traverse the scene.
In addition, the difference between the virtual environment’s geometry
and the physical space of the CAVE system causes system latency
errors, as well as projector and tracker calibration difficulties.
The diorama approach addresses these problems (and, to some extent,
the problem of multiple untracked users viewing one scene) by using
physical display surfaces closely matching the actual scene geometry.
TECHNICAL SKETCH
LIFE-SIZED PROJECTOR-BASED DIORAMAS
KOK-LIM LOW,
UNIVERSITY OF NORTH CAROLINA AT CHAPEL HILL
A series of compact photosensitive arrays is arranged around a
display device, yielding a dense spatial sampling of the illumination
field near the display.
This illumination information is then used to modify a synthetic or
scanned 3-D image, producing shading and shadows that are
consistent with the location of the light source.
TECHNICAL SKETCH
LIGHTING-SENSITIVE DISPLAYS
SHREE K. NAYAR,
COLUMBIA UNIVERSITY
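As a rough sketch of how such sampled illumination could drive consistent shading on a lighting-sensitive display, the Python below estimates a dominant light direction from a few photosensor readings and applies a Lambertian term; the sensor layout, weighting scheme, and shading model are assumptions, not the authors' method.

import numpy as np

# Hypothetical sketch: each sensor faces a known direction; its reading grows
# with how directly light hits it, so a reading-weighted average of sensor
# directions gives a crude estimate of the dominant light direction.
SENSOR_DIRS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1]], float)

def estimate_light_direction(readings):
    d = (SENSOR_DIRS * readings[:, None]).sum(axis=0)
    return d / np.linalg.norm(d)

def lambert_shade(normal, light_dir, albedo=0.8):
    """Shade a surface point so its brightness is consistent with the estimated light."""
    return albedo * max(0.0, float(np.dot(normal, light_dir)))

readings = np.array([0.1, 0.0, 0.9, 0.0, 0.4])     # light arriving mostly from +y
L = estimate_light_direction(readings)
print(L, lambert_shade(np.array([0.0, 1.0, 0.0]), L))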
This tool was developed to facilitate editing a virtual environment
without having to travel from one’s current position within it.
The user invokes a preview window of a remote location, then uses a
stylus to manipulate the remote site (if the stylus is positioned within
the window) or the local site (if the stylus is placed outside the
window).
TECHNICAL SKETCH
THROUGH-THE-LENS REMOTE OBJECT MANIPULATION
STANISLAV L. STOEV,
UNIVERSITÄT TÜBINGEN
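A minimal sketch of the through-the-lens routing rule described above: stylus input is sent to the remote site when the stylus lies inside the preview window, and to the local site otherwise. The Scene class, window rectangle, and apply_manipulation method are hypothetical stand-ins for illustration.

class Scene:
    def __init__(self, name):
        self.name = name
    def apply_manipulation(self, pos):
        print(f"manipulating the {self.name} site at {pos}")

def route_stylus_event(stylus_pos, window_rect, remote_site, local_site):
    """Send the stylus event to the remote or local site based on window containment."""
    x, y = stylus_pos
    left, bottom, right, top = window_rect
    inside = left <= x <= right and bottom <= y <= top
    target = remote_site if inside else local_site
    target.apply_manipulation(stylus_pos)
    return target

remote, local = Scene("remote"), Scene("local")
route_stylus_event((0.4, 0.6), (0.0, 0.0, 1.0, 1.0), remote, local)   # -> remote site
route_stylus_event((1.5, 0.6), (0.0, 0.0, 1.0, 1.0), remote, local)   # -> local site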
PlaceWorld was developed as an interface by which users could coexist
and interact in multiple virtual environments.
The test cases included a radiosity model, a textured 3-city tour, an
artistic interpretation of VR history, and a CAD model of an oil rig.
TECHNICAL SKETCH
PLACEWORLD
JON COOK,
UNIVERSITY OF MANCHESTER
The extent to which mental processes are affected by the use of an
artificial immersive environment was tested by studying memory recall
and awareness states in a VE replica of an actual room.
As expected, the use of a head-mounted display resulted in more
accurate memory responses than the use of a desktop monitor.
However, a surprising result was that the use of a realistic
head-tracking system proved less successful for navigation than the
use of a mouse.
TECHNICAL SKETCH
SIMULATION FIDELITY METRICS
KATERINA MANIA,
UNIVERSITY OF SUSSEX
The degree to which users in separate virtual environments can
collaborate across a network was tested for two setups: a
homogeneous system of two CAVE platforms, and a heterogeneous
system consisting of one CAVE and one desktop machine.
Using a 3-D cube puzzle application, the CAVE-to-CAVE (C2C) condition
proved analogous to a real-space condition, while the CAVE-to-desktop
(C2D) condition proved much less effective, with the non-immersed user
rated as less contributing or even uncooperative.
TECHNICAL SKETCH
SOLVING A 3D CUBE PUZZLE IN A CVE
ANTHONY STEED,
UNIVERSITY COLLEGE LONDON
Producing realistic avatars to represent human users in collaborative
virtual environments has been problematic, especially in inherently
dark CAVE systems.
By taking a 3-D laser scan of the user’s head and applying it as a
texture map to the avatar’s head, a good static model is
produced, with positioning and orientation determined by the
CAVE’s head-tracking system.
TECHNICAL SKETCH
VIEW-DEPENDENT TEXTURE MAPPING OF VIDEO
VIVEK RAJAN,
ELECTRONIC VISUALIZATION LABORATORY (UI-CHICAGO)
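A minimal sketch of the final positioning step for such an avatar head, placing the scanned, texture-mapped model at the pose reported by the CAVE head tracker; the rotation/translation representation and toy vertices are assumptions for illustration.

import numpy as np

def place_avatar_head(head_vertices, tracker_rotation, tracker_position):
    """Transform model-space head vertices into CAVE/world space."""
    return head_vertices @ tracker_rotation.T + tracker_position

vertices = np.array([[0.00, 0.10, 0.00], [0.05, 0.15, 0.02]])   # toy model-space points
R = np.eye(3)                                                   # tracker reports no rotation
t = np.array([1.0, 1.7, 0.5])                                   # tracked head position in the CAVE
print(place_avatar_head(vertices, R, t))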
Using a half-silvered mirror sheet formed into a truncated cone, sitting
on top of a graphics display table, real objects inside a showcase are
merged with virtual objects projected onto the screen.
Using active shutter glasses, infrared emitters, and an electromagnetic
tracking device, stereo separation and graphics synchronization are
implemented to provide a seamless 360° view of a virtual artifact.
TECHNICAL SKETCH
VIRTUAL SHOWCASES
OLIVER BIMBER,
FRAUNHOFER INSTITUTE FOR COMPUTER GRAPHICS
Virtual fiction: instead of having the audience identify with the main
protagonist, as in a novel or film, the user is the main protagonist.
VR as a public display medium: Does the “wow” factor still play a
stronger role in attracting an audience than the work itself?
Configuring the CAVE: A life-sized wooden puppet is used as an
interactive device within a CAVE environment.
PANEL DISCUSSION
VR ART IN MUSEUMS & GALLERIES
ANSTEY, COX, HÖRTNER, SANDIN, SERMON, SHAW
Virtual experiences of the entire
packing/airport/takeoff/flight/landing scenario are used to overcome
a patient’s fear of flying.
VR provides the ability to conduct controlled mental-health studies in
dynamic 3-D stimulus environments, as well as the capacity to record
all behavioral responses.
SnowWorld: Immersive VR is used to help reduce pain during the wound
treatment of burn patients.
PANEL DISCUSSION
VR MEETS MENTAL HEALTH
HODGES, HOFFMAN, RIZZO, SCHULTHEIS, STRICKLAND,
WATSON, WIEDERHOLD, WIEDERHOLD
Sharing a stereoscopic display is problematic, since the image must be
displayed from a single viewer’s perspective, resulting in a distorted
view for all other viewers.
IllusionHole uses a display mask with a hole in the center to display
multiple viewpoints simultaneously, positioned so that each viewer can
see only the image rendered from his or her own viewpoint.
EMERGING TECHNOLOGIES
ILLUSIONHOLE
YOSHIFUMI KITAMURA,
HUMAN INTERFACE ENGINEERING LABORATORY – OSAKA UNIVERSITY
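The geometry behind IllusionHole's positioning can be sketched as follows: the patch of display a viewer sees through the hole is the projection of the hole's rim from that viewer's eye onto the display plane, so patches for viewers on opposite sides of the table do not overlap. The mask height, hole radius, and eye positions below are illustrative assumptions.

import numpy as np

MASK_HEIGHT = 0.3                      # metres above the display plane z = 0 (assumed)
HOLE_CENTER_XY = np.array([0.0, 0.0])  # hole centered over the display (assumed)
HOLE_RADIUS = 0.1

def visible_patch(eye):
    """Return (center_xy, radius) of the display region this viewer sees through the hole."""
    scale = eye[2] / (eye[2] - MASK_HEIGHT)     # eye-to-display vs. eye-to-mask distance
    center = eye[:2] + (HOLE_CENTER_XY - eye[:2]) * scale
    return center, HOLE_RADIUS * scale

# Two viewers on opposite sides of the table see disjoint patches, so each patch
# can carry the image rendered for that viewer's own tracked viewpoint.
print(visible_patch(np.array([ 0.6, 0.0, 0.6])))
print(visible_patch(np.array([-0.6, 0.0, 0.6])))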
The Just-Follow-Me system is used to train users to learn certain
limb-motion profiles (e.g., dance moves or golf swings).
A user wears five highly reflective markers (on the ankles, wrists, and
belly), and four cameras track the user’s motions.
EMERGING TECHNOLOGIES
JUST FOLLOW ME VR-BASED MOTION TRAINING SYSTEM
UNGYEON YANG,
POHANG UNIVERSITY OF SCIENCE AND TECHNOLOGY
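One way Just-Follow-Me's tracked markers could be turned into training feedback is to compare the trainee's motion against a reference motion frame by frame, as in the sketch below; the scoring metric and toy data are assumptions, not the system's actual feedback mechanism.

import numpy as np

# Hypothetical sketch: per-frame deviation between the trainee's five tracked
# markers (ankles, wrists, belly) and a reference "teacher" motion.
def motion_deviation(user_frames, teacher_frames):
    """Mean distance between corresponding markers in each frame (lower is better)."""
    per_marker = np.linalg.norm(user_frames - teacher_frames, axis=-1)   # (frames, 5)
    return per_marker.mean(axis=1)                                       # (frames,)

# Toy data: 3 frames x 5 markers x 3 coordinates.
teacher = np.zeros((3, 5, 3))
trainee = teacher + np.random.default_rng(0).normal(scale=0.02, size=teacher.shape)
print(motion_deviation(trainee, teacher))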
Visitors in this virtual environment are able to interact with common
microscopic structures that normally can’t be seen with the naked eye.
A high-definition multimedia space is created, suitable for academic
research as well as basic educational use.
EMERGING TECHNOLOGIES
MICRO ARCHIVING
TATSUYA SAITO,
KEIO UNIVERSITY
A mobile user’s position and orientation are tracked via a backpack
emitter and mapped to a campus model.
Routing and landmark images are then overlaid onto the mobile user’s
see-through head-worn display.
EMERGING TECHNOLOGIES
MOBILE AUGMENTED REALITY SYSTEMS
STEVEN FEINER,
COLUMBIA UNIVERSITY
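A minimal sketch of the overlay step, projecting a world-space landmark from the campus model into the head-worn display using the tracked pose and a pinhole model; the focal length, display resolution, and pose values are illustrative assumptions, not the actual system parameters.

import numpy as np

FOCAL_PX = 800.0                        # assumed focal length in pixels
DISPLAY_CENTER = np.array([320.0, 240.0])

def overlay_position(landmark_world, head_rotation, head_position):
    """Return 2-D display coordinates for the landmark, or None if it is behind the user."""
    p_head = head_rotation.T @ (landmark_world - head_position)   # world -> head frame
    if p_head[2] <= 0.0:
        return None
    return DISPLAY_CENTER + FOCAL_PX * p_head[:2] / p_head[2]

R = np.eye(3)                           # tracked head orientation (identity for the example)
t = np.array([0.0, 1.7, 0.0])           # tracked position on the campus model
print(overlay_position(np.array([2.0, 2.0, 10.0]), R, t))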
• The drive electronics receive and process an incoming video signal
and control the image display.
• The modulated light sources are multiplexed red, green, and blue
laser pulse streams.
• Two scanning mirrors are used to sweep horizontally and vertically
across the image.
• The viewer optics relay the scanned raster image to the glass or
plastic oculars worn by the user.
An object’s inverted image is projected directly onto the retina, the
rod and cone cells of which are responsible for transmitting signals to
the brain through the optic nerve.
VIRTUAL RETINAL DISPLAY TECHNOLOGY
(NOT PART OF SIGGRAPH 2001)
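The bullets above describe a per-pixel raster pipeline; a loose Python sketch of that loop is given below. The resolution, callback names, and normalized mirror coordinates are assumptions for illustration, not the actual drive-electronics interface.

WIDTH, HEIGHT = 640, 480    # assumed raster resolution

def scan_frame(frame, set_mirrors, pulse_lasers):
    """frame[y][x] is an (r, g, b) pixel; the callbacks stand in for the hardware."""
    for y in range(HEIGHT):                    # vertical scanner steps once per line
        for x in range(WIDTH):                 # horizontal scanner sweeps across the line
            set_mirrors(x / WIDTH, y / HEIGHT)
            r, g, b = frame[y][x]
            pulse_lasers(r, g, b)              # modulate the red, green, blue pulse streams

# Dry run with stub hardware callbacks and an all-black frame.
scan_frame([[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)],
           set_mirrors=lambda u, v: None,
           pulse_lasers=lambda r, g, b: None)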