Chap1-Literature Review - Faculty of Mechanical Engineering

Overview of Virtual Reality in Science and Engineering
M Kasim A Jalil
Department of Mechanical & Aerospace Engineering
University at Buffalo
Virtual Reality (VR) involves the development of a computer-generated virtual environment
intended to simulate the real world. It is an emerging computer visualization technology
that allows users to experience a strong sense of reality in a computer-generated
environment. Engineers have begun to realize the usefulness of VR as an innovative tool
to visualize, manipulate, and interact with complex three-dimensional (3-D) graphical
data that are difficult or even impossible to adequately understand in traditional
two-dimensional (2-D) drawings or even 3-D solid models. This chapter highlights the recent
developments and applications of VR in engineering and the sciences.
2.1 Definition of Virtual Reality (VR)
The term Virtual Reality (VR) is used by many different people with as many different
meanings. There are some to whom VR is a specific collection of technologies (i.e. Head
Mounted Display, Glove Input Device and Audio Device). Others stretch the term to
include movies, games, entertainment and imagination. Virtual Reality is a way for
humans to visualize, manipulate and interact with extremely complex data in a variety of
immersive environments. A computer is used to generate visual, auditory or other sensual
outputs to the user. This data may encompass a CAD model, a scientific simulation, or a
view into a database. The user can interact with the virtual world and directly manipulate
objects within it. Some worlds are animated by other processes such as physical
simulations or simple animation scripts. Interaction in an immersive environment is
perhaps the most intriguing part of virtual reality. In conventional human-computer
interaction, humans remain "separated" from the computer environment. In VR, humans
are totally immersed in the visualization-based world. They have the ability to
manipulate and interact with the objects analyzed just as they do in the real world. Virtual
Reality is often referred to by other terms, such as Augmented Reality, Synthetic
Environments, Cyberspace, Artificial Reality, Simulator Technology and Immersive
Environments. All of these terms refer to closely related concepts, although distinctions
exist (see Section 2.4.4). VR remains the term most used by the media.
2.2 History of VR
In order to gain an understanding of where today’s technology is in the field of high-end
visualization, it is helpful to look at the history of Virtual Reality in both fiction and
reality. Surprisingly, VR is closely linked to the development of calculating machines as
7
well as the development of mechanical devices (such as automata). The concept of VR
can be traced back to the automata of the ancient Greeks [1]. Archytas of Tarentum (circa
400-350 BC) was reported to have developed a pigeon whose movements he controlled
remotely using a jet of steam or compressed air. In China, at about the same time,
inventors had created an entire mechanical orchestra that could be controlled by operators
sitting yards from the instruments [2]. Calculating machines such as Charles Babbage’s
Analytical Engine were attempts to simulate reality in numeric form and then manipulate
that reality to learn the results of different forces [2]. During World War II, the first
computer was developed to decipher intelligence as well as to assist in missile research.
Rocket trajectory, airflow patterns, and other characteristics of rocket engines were
simulated on computers before prototypes were actually developed.
As with most technologies, fiction preceded fact in VR. In his 1932 novel Brave New
World [3], Aldous Huxley described “feelies”, movies that allowed viewers to feel
the action taking place. Isaac Asimov [4] explored the subject of virtual environments in
his Robot series. The books in that series featured positronic brains that operated in virtual
worlds. Arthur C. Clarke [5], in many of his books, talked about a “cyberspace” created
by orbiting satellites. The first fictional description of a true VR concept may have come
from William Gibson, an American who moved to Canada during the late 1960s. VR (or
"the matrix" or "cyberspace") plays an important role in Gibson's trio of 1980s novels,
Neuromancer [6], Count Zero [7], and Mona Lisa Overdrive [8].
Possibly the most well-known fictional VR system today is the Holodeck from the TV
series Star Trek: The Next Generation [9]. In this show, the Holodeck is controlled by a
computer that translates voiced commands into various scenarios. These scenarios can be
peopled with lifelike characters that seem to have volition. In fact, a computer bug
occasionally causes the characters to go awry, threatening the Holodeck user. The
Holodeck requires no special gear such as goggles, earphones, or tracking devices
connected to the user's body. Rather, all the mechanisms are hidden within the room,
providing what may be called "unencumbered VR" - a system not yet available in reality.
The roots of non-fictional VR in a form that might be recognized as VR today may be
traced back to the early 1940s. An entrepreneur by the name of Edwin Link [10] joined forces
with Admiral Luis de Florez to develop flight simulators in order to reduce pilot training
time and costs. The early simulators were complex mechanical contraptions, and the
illusion of flight was relatively poor in the early models. However, the increasing power of
computers and image technology has now made these simulators very realistic. Today,
the mocked-up cockpit turns and rolls on a moving platform almost exactly simulating
what would occur in an actual plane. The original simulators required that the user sit in
front of a computer or TV screen, which normally represented either a window or a set of
gauges [11]. The room was built to look like the equipment or vehicle the user was being
trained on - a cockpit, bridge or power plant control room.
When a VR system requires that the user view the virtual environment through a screen,
it is called Desktop VR or a Window on a World (WoW). Its origin can be traced back to
1965, when Ivan Sutherland published a paper called The Ultimate Display [12,13] in
which he described the computer display screen as "…a window through which one
beholds a virtual world". He challenged scientists to create images that would justify this
window analogy for the computer screen. While WoW systems may represent an earlier
form of VR, they are still considered an important part of the VR family.
In newer VR systems, more of the environment exists as a function of software. This
environment is displayed via goggles and conveyed through force-feedback joysticks or
other sensing devices. An advantage of these systems is that, without the requirement to
build large rooms or mock cockpits, they are less expensive to build and maintain.
Another level of representation is the video mapping approach. This merges an image of
the user's silhouette with an on-screen two-dimensional computer graphic called parallax.
In order to accomplish this, the user has to wear stereographic shutter glasses that provide
input to the computer as to his or her physical orientation. Artificial Reality and Artificial
Reality II, published by Myron Krueger [14], described such systems. A
few TV game shows have used variations of image mapping techniques. For example,
Nick Arcade (on cable channel Nickelodeon) places young contestants into video games.
Immersive VR systems, when and if they can be perfected, represent the final step of
the VR technology ladder as we know it today. In theory, these systems should, from the
user's perspective, replicate reality exactly. In other words, the user should not be able to
discern whether the world he or she is interacting with is real or virtual.
2.3 Why Virtual Reality?
There is a growing body of research that can now provide a strong rationale for VR as the
next Human-Computer Interface. As an interface metaphor, VR clearly has tremendous
potential. This has already been demonstrated in industry, commerce, and the leisure
communities. A continuing issue, however, is how VR will gain acceptance. To answer
this, one must consider the advantages that VR can offer, specifically in 3D perception
and communication. Consider the following points.

3D Perception: The shape of objects and associated interrelationships remain
ambiguous without true three-dimensional representation. The perspective
projection onto a flat surface on a normal computer screen can be unclear. VR
removes this ambiguity, and therefore addresses a fundamental objective in
design processes. Of particular importance is the sense of scale that can only be
conveyed by immersing the analyst or designer in the "design" itself.
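The depth ambiguity noted above can be made concrete with a minimal sketch (illustrative only, not from any cited system): under perspective projection, two different 3D points lying on the same viewing ray land on the same screen location, so the flat image alone cannot convey depth.

```python
# Perspective projection: a 3D point (x, y, z) maps to the 2D screen
# point (f*x/z, f*y/z) for focal length f.  Distinct 3D points lying on
# the same ray through the eye land on the same pixel, so depth cannot
# be recovered from the flat image alone.

def project(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)

near = (1.0, 2.0, 4.0)   # a point 4 units from the viewer
far = (2.0, 4.0, 8.0)    # a different point, twice as far away
assert project(near) == project(far)   # indistinguishable on a 2D screen
```

A stereoscopic or immersive display resolves the ambiguity by supplying a second viewpoint, which is precisely the sense of scale and depth discussed above.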

Communication: VR promises to completely revolutionize the use of computers
for cooperative work interaction. Natural human interaction is not easily
achievable in two dimensions. The telephone or videophone is effective but
limited. When participants share a common location, they have the freedom to
more easily and naturally communicate ideas (i.e. engineers from Japan and
Germany simultaneously discussing a model of a car in the design process). When
multiple participants are involved then the VR environment is said to be a
Collaborative Virtual Environment (CVE). This concept will be explained in
greater detail later in this dissertation.
2.4 Types of VR Systems
Not all Virtual Reality systems require the gloves and goggles seen in most
technological amusement centers and scientific magazines. A major distinction between
VR systems pertains to the mode with which they are interfaced to the users. The
following sections describe some of the common modes used in VR systems, including
Video Mapping, Immersive Systems and Telepresence.
2.4.1 Video Mapping
This type of VR is best demonstrated in the highly recognized computer game DOOM.
The game requires a player to control a battle-hardened warrior who must fight through a
virtual world of hell. In this hell, there are monsters that must be killed, lest they kill the
player. The interface of the user to the virtual world of monsters is in the form of a two-dimensional display (i.e. a monitor), which places the user into the warrior’s body. The
game provides stereo sound that gives the player an idea of the distance and direction of
any nearby roaring monsters. By using the keyboard, the player can make the warrior run,
turn, shoot or pick up objects. Everything shown on the screen seems real (i.e. moving
about in the corridors, monsters die in response to the user’s actions, etc.) but it is all
based on a set of very complex data stored in the computer. It is these visual, auditory,
and force feedback cues, as described above, together with the independent monsters
implementing different actions intelligently, that puts the game in the VR category. A
variation of the WoW approach merges a video input of the user's silhouette with a 2D computer graphic.
2.4.2 Immersive System
The most advanced VR systems completely immerse the user's personal viewpoint inside
the virtual world. These "immersive" VR systems are often equipped with a Head
Mounted Display (HMD), a BOOM, or other types of VR peripherals. An HMD is a
helmet or a facemask that holds the visual and auditory displays. The helmet may be free
ranging, tethered, or it might be attached to some sort of a boom armature. A nice
variation of the immersive systems uses multiple large projection displays to create a 'cave'
or room in which the viewers stand. An early implementation was called "The Closet
Cathedral", named for the impression of an immense environment within a small physical space.
The CAVE™ [15] system is one of the most famous recent VR spaces, developed at the
University of Illinois at Chicago.
2.4.3 Telepresence
Telepresence is a variation on visualizing complete computer-generated worlds. It is a
technology that links remote sensors in the real world with the senses of a human
operator. This technology has been used in medicine (called
telemedicine), robotics (called telerobotics), firefighting, underwater exploration and
space exploration, among other fields. One of the major uses of the technology is in
medicine. Surgeons use very small instruments on cables to do surgery without making
large incisions in their patients. The instruments have a small video camera at one end.
This technology potentially enables future surgery to be performed remotely. Substantial
research is ongoing in this area. Robots equipped with telepresence systems have
already been used in deep sea and volcanic exploration. NASA is currently researching
the use of telerobotics for space exploration. Therefore, while telepresence does not
create a virtual world for the operator, it does give the user enough visual and audio
information to make him feel as though he were virtually present.
2.4.4 Mixed Reality - Virtual Reality (VR) versus Augmented Reality (AR)
Augmented Reality (AR) is a variation of Virtual Environments (VE), or Virtual Reality
(VR). As previously described, VR technologies completely immerse a user inside a
synthetic environment. While immersed, the user cannot see the real world around him.
In contrast, AR allows the user to see the real world. Therefore, AR supplements reality
rather than completely replaces it. AR can be thought of as the "middle ground" between
VR, which is completely synthetic, and telepresence, which is completely real [16].
Augmented Reality (AR) is a growing area in virtual reality research. The world
environment around us provides a wealth of information that is difficult to duplicate in a
computer. An augmented reality system generates a composite view for the user. It is a
combination of the real scene viewed by the user and a virtual scene generated by the
computer that augments the scene with additional information. The ultimate goal is to
create a system such that the user cannot tell the difference between the real world and the
virtual augmentation of it. For example, in a medical application, it might depict the
merging and correct registration of data from a pre-operative imaging study onto a
patient's head. Providing this view to a surgeon in the operating theater would enhance
his performance and possibly eliminate the need for any other calibration fixtures during
the procedure. A typical application of AR is telesurgery. Here, the computer-generated
inputs are merged with telepresence inputs and the user’s view of the real world. For
example, a surgeon's view of a brain surgery might be overlaid with images from earlier
CAT scans, coupled with real-time ultrasound. In this chapter, the term VR is used to
refer to both AR and VR concepts.
2.5 Scientific Applications of VR
Until the late 1980s, research in VR was limited to academia, the government, the
military, and some large companies, due to the high cost of computer equipment.
Autodesk™ presented an inexpensive PC-based system in 1989 that reduced the price of
VR hardware significantly. However, its graphics speed and rendering quality
were limited, since it served as a starter system for people who wanted to explore VR
technology [17]. In the late 1980s and the early 1990s, many commercial companies
emerged and contributed tremendously to the development of VR technology (e.g.
Virtual Research, Ascension, Fakespace, StereoGraphics and Sense8). Many major
industrial companies such as Boeing [18], Caterpillar [19], Chrysler, Ford [20], General
Motors [22], John Deere [21] have made a substantial investment in virtual reality
techniques in both research and development. Many academic and government research
centers around the world, especially in Europe and Asia, have also become serious about
VR developments and applications.
2.5.1 Scientific Data Visualization
One of the most astounding applications of VR is in the area of engineering testing. In
this field, the actual testing environment is simulated in a virtual world to enable
engineers to interactively visualize, perceive and explore the complicated structure of
engineering objects. Applications have ranged from casting processes to wind tunnel
testing and structural analysis. VR technology has been extensively applied in scientific
data visualization. The development of superior computer graphics capabilities has
enabled scientists to develop better visualization models, interact visually, and improve
their perception of new discovery processes. The Virtual Wind Tunnel [22] project was
developed at NASA Ames for exploring numerically-generated, three-dimensional,
unsteady flow fields. A boom-mounted six-degree-of-freedom head-position-sensitive
stereo CRT system is used for viewing. A hand-position-sensitive glove controller is used
for injecting various tracers (i.e. "smoke") into the virtual flow field. A multiprocessor
graphics workstation is used for computation and rendering.
Another application of VR in data visualization is called Computational Steering (CS)
[23]. CS refers to the interactive control of a running simulation during the execution
process. Scientists can control a set of parameters of the program and immediately react
to the current results without having to wait until the end of the execution process.
Several computational steering systems have been developed. The CAVEStudy [24] is a
computational steering system that allows scientists to steer a program from a virtual
reality system without requiring any modification to the simulation program. It enables an
interactive and immersive analysis of simulations running on remote computers. Another
system is SCIRun [25] developed by Christopher R. Johnson at the University of Utah. It is a
problem-solving environment for computational science. It provides an integrated
framework to construct, steer, and study large applications. Other computational steering
systems include VASE [26], Magellan [27], CUMULVS [28], VIPER [29], and CSE [30].
All of these systems provide varying levels of a user-controlled visualization interface to
enable the ‘steering’ of the analysis. A variation on Computational Steering is the Visual
Design Steering (VDS) [31] paradigm, which is less stringent in its application criteria
than Computational Steering (CS). VDS is more applicable to design while CS is more
applicable to analysis.
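The essential idea of computational steering, inspecting intermediate results and adjusting a parameter without restarting the run, can be illustrated with a toy sketch. The simulation model and parameter names below are invented for illustration and do not correspond to any of the systems cited above.

```python
# A toy steerable simulation: exponential cooling whose rate parameter
# can be changed between time steps.  The steering hook stands in for a
# scientist inspecting intermediate results and reacting immediately,
# rather than waiting until the end of the execution process.

def run_steered(steps, initial_temp, steer):
    """steer(step, temp) returns a new cooling rate, or None to keep it."""
    temp, rate = initial_temp, 0.10
    history = []
    for step in range(steps):
        new_rate = steer(step, temp)   # inspect current state, maybe react
        if new_rate is not None:
            rate = new_rate            # parameter adjusted mid-run
        temp -= rate * temp            # advance the simulation one step
        history.append(temp)
    return history

# Example steering policy: halve the cooling rate once temperature < 50.
history = run_steered(20, 100.0, lambda step, temp: 0.05 if temp < 50.0 else None)
```

Real steering systems such as those cited above add distribution, visualization, and instrumentation around this same loop; the sketch shows only the control pattern itself.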
2.5.2 Virtual Prototyping and Modeling
Imagine that a group of designers are working on the model of a complex device for their
clients. The designers and clients desire a joint detailed design review, even though they
are physically separated. If each of them had a conference room that was equipped with a
virtual reality display, this could be accomplished. The physical prototype that the
designers have mocked up could be imaged and displayed in the client's conference room
in 3D. The clients could walk around the display looking at different aspects of it. Both
groups could interact with the design and each other simultaneously. It is this type of
distributed collaborative interaction that is the goal of this research, with a particular
emphasis on scalability of computer platforms.
An increasing number of companies use VR technology to give customers a
better understanding of their future products. One example is Lavalley
Lumber [32], which supplied its customers with VR images of kitchen designs. Electrolux
has developed a VR marketing tool for its kitchen appliance range. Matsushita developed
a multi-user virtual home to market and sell its range of domestic electrical goods
[33,34]. Matsushita demonstrated an advanced interactive design experience with a
virtual reality architectural walk-through in a complete two-story Japanese house. This
includes fully textured, detailed bedrooms, kitchen, bathrooms, and interconnecting stairs.
With a head-mounted display and a 3D mouse, users can walk around the rooms and up
the stairs to different stories of the house. Airbus, a European aircraft manufacturer, is
using a virtual prototype of its new airplane interior to market future products. More
companies are investing in VR technology every day.
2.5.3 Robotics and Manufacturing
A telerobot is a robotic system controlled by a human operator at a remote control station.
With the development of VR technology in robotics, the control and manipulation of
robotic structures have become simple and easy (compared to the traditional method
requiring tedious programming). The operator can immerse himself in the robot’s task by
using VR peripherals (such as a glove and HMD). The manipulation of the robotic arm
can be done directly using VR devices. This technology has been widely used in
telesurgery [35] (remote surgery using VR technology), space exploration (Mars
missions) and underwater exploration. In the domain of robotics and telerobotics, a virtual
display can assist the user of the system [36,37]. A telerobotic operator uses a visual
image of the remote workspace to guide the robot. Annotation of the view is also useful
just as it is when the scene is in front of the operator. Since the view of the remote scene
is often monoscopic, augmentation with wireframe drawings of structures in the view can
facilitate visualization of the remote 3D geometry. Prior to an operator attempting a
motion, it can be practiced on a virtual robot that is visualized as an augmentation to the
real scene. The operator could decide to proceed with the motion after seeing the results
or could decide to modify the motion. Once the robot motion is determined, it could then
be executed directly. In a telerobotics application, this would eliminate any oscillations
caused by long delays to the remote site [38].
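The rehearse-then-execute pattern described above can be sketched as follows. The class and function names are hypothetical, not from the cited systems: a candidate motion is simulated on a local virtual model, and only an approved motion is transmitted, as a single batch, to the remote robot.

```python
# Rehearse-then-commit: a candidate motion is first applied to a local
# virtual model of the robot; only when the operator approves the
# predicted pose is the whole motion sent to the remote site in one
# batch, avoiding oscillations from per-step round-trip delay.

class VirtualRobot:
    def __init__(self, joints):
        self.joints = list(joints)

    def predict(self, motion):
        """Simulate the motion locally and return the final joint pose."""
        joints = list(self.joints)
        for delta in motion:
            joints = [j + d for j, d in zip(joints, delta)]
        return joints

def execute_if_approved(virtual, motion, approve, send):
    predicted = virtual.predict(motion)   # overlay shown to the operator
    if approve(predicted):                # operator inspects, then decides
        send(motion)                      # one batched command to the robot
        virtual.joints = predicted        # virtual model tracks the commit
    return virtual.joints

# Example: the operator approves everything; the motion is sent once.
sent = []
robot = VirtualRobot([0.0, 0.0])
execute_if_approved(robot, [(1.0, 0.0), (0.0, 2.0)], lambda p: True, sent.append)
```

The design choice being illustrated is that the slow network link carries one approved command rather than a stream of incremental corrections, which is what removes the delay-induced oscillation.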
Another application can be seen in the maintenance field. When a maintenance technician
must learn how to use a new or unfamiliar piece of equipment, he could put on a virtual
reality display. In this display, the image of the equipment would be presented virtually
with annotations and information pertinent to the repair. For example, the location of
fasteners and attachment hardware that must be removed might be highlighted. Then, the
inside view of the machine would highlight the boards that need to be replaced
[39,40,41]. Training in the maintenance field is an obvious and important application of
VR.
Consider some additional maintenance applications. The military has developed a wireless
vest worn by personnel that is attached to an optical see-through display [42].
wireless connection allows the soldier to access repair manuals and images of equipment.
Future versions are expected to register those images on a live scene and provide
animation to show any procedures that must be performed. Boeing researchers are
developing a virtual reality display to replace the large work frames used for making
wiring harnesses for their aircraft [43,44]. Using this experimental system, technicians are
guided by the virtual display that shows the routing of the cables on a generic frame used
for all harnesses. The virtual display allows a single fixture to be used for making
multiple harnesses.
VR technology has also been applied in manufacturing planning, such as layout planning.
This technique has potential to replace the traditional and time-consuming approach of
physical modeling. Through the use of a VR model, planners and engineers can walk
through a virtual factory, move virtual machines to any desired location, and simulate the
intended manufacturing process. An example of this type of research is carried out at the
VR Lab at the University at Buffalo [45,46]. The use of VR has also been extended to
process simulation such as virtual tool path simulation and virtual milling and machining.
The main advantage of using VR in manufacturing is reduced cost and time savings.
2.5.4 Tele-Immersion
Tele-Immersion enables users at geographically distributed sites to collaborate in real
time in a shared simulated environment as if they were in the same physical room. It is
the combination of networking and media technologies to enhance collaborative
environments. In tele-immersive environments, computers recognize the presence and
movements of individuals and objects, track those individuals and images, and then
permit them to be projected in realistic, multiple, geographically distributed immersive
environments on stereo-immersive surfaces.
Tele-immersive environments facilitate not only interaction between users themselves but
also between users and computer-generated models. This requires expanding the
boundaries of computer vision, tracking, display, and rendering technologies to achieve a
compelling experience. This lays the groundwork for a higher degree of integration of
these technologies into the entire system [47]. This type of communication is necessary when
complex 3D numerical or graphical models need to be assessed and analyzed between
different remote stations. The technological challenges include incorporation of
measured, on-site data into the computational model, real-time transmission of the
tremendous amount of scientific data from the computational model to the virtual
environments, and management of the collaborative interaction between two stations.
A great deal of research work in this area has been done at Argonne National Laboratory
involving CAVE™-to-CAVE™ communication [48]. Research on the application of
telepresence in games and entertainment was also conducted at the University of Geneva
(headed by Dr. Thalmann). Projects include Virtual Tennis [49], Virtual Chess, and the
Real-Virtual Human [50]. The National Tele-immersion Initiative (NTII) [51] is a project
carried out in collaboration with some of the major universities in the United States
(Brown University, University of North Carolina at Chapel Hill and University of
Pennsylvania in Philadelphia) to build a national tele-immersive research infrastructure
and to actively participate in the creation of the key technologies in tele-immersion.
2.5.5 Training and Education
The capability of simulating an actual working environment using VR has yielded another
advantage for training and education. The development of the Visible Human, which
displays 3D anatomical details of a male and female human body, together with surgery
simulators, has contributed significantly to medical training [52]. Other examples
include the virtual tank [53], the virtual submarine (VESUB) [54], the virtual workbench
[55] and virtual classroom [56].
The military has been using simulation for pilots for years. Displays in the cockpits
present information to the pilot on the windshield or on the visor of his flight helmet.
This is a form of virtual reality display. SIMNET [57], a distributed war games
simulation system, also embraces virtual reality technology. By equipping military
personnel with helmet mounted visor displays or a special purpose rangefinder, the
activities of other units participating in the exercise can be imaged. While looking at the
horizon, for example, the display-equipped soldier could see a helicopter rising above the
tree line [58]. This helicopter might actually be controlled by another participant who is
involved in the simulation. In wartime, the display of a real battlefield scene could be
virtual with annotation information or highlighting to emphasize hidden enemy units.
2.5.6 Medicine and Therapy
Because imaging technology is so pervasive throughout the medical field, it is not
surprising that this domain is viewed as one of the more important applications for virtual
reality systems. Most medical applications deal with image-guided surgery. Pre-operative
imaging studies (e.g. CT or MRI scans) of the patient provide the surgeon with the
necessary views of the patient's internal anatomy. From these images, the surgery can be
planned. Visualization of the path through the anatomy to the affected area where, for
example, a tumor must be removed, is done by first creating a 3D model from the
multiple views and slices in a preoperative study. Also, VR can be applied so that the
surgical team can see the CT or MRI data correctly registered on the patient in the
operating theater while the procedure is progressing. Being able to accurately register the
images at this point enhances the performance of the surgical team and eliminates the
need for the painful and cumbersome stereotactic frames [59]. Just this year, the most
complex and longest recorded surgery ever performed (96 hours), to separate twins
conjoined at the head, was preceded by months of training and planning using VR
technology. The surgeons found VR an invaluable tool to achieve success in this
unprecedented surgery.
Medicine has definitely become a computer-integrated, high technology industry. VR
and telepresence may have much to offer with their human-computer interfaces, 3D
visualization, and modeling tools. In information visualization, medical professionals
have access to a volume of information and data formats including MRI (magnetic
resonance imaging), CAT (computerized axial tomography), ultrasound, and X-rays.
VR’s graphics and output peripherals allow users to view large amounts of information
by navigating through 3D models. Telepresence techniques can allow surgeons to conduct
robotic surgery from anywhere in the world thereby offering increased accessibility to
specialists. In the area of training, simulation of various surgeries has been done using
VR technology. Recent research developments include the use of haptic systems in
surgery simulators [60,61]. VR has also been used in therapy to treat patients suffering
from psychological disturbances such as schizophrenia and traumatic stress. Psychological
phobias, such as fear of heights (acrophobia), have been treated in virtual environments
[62].
Chugh et al. [63] at the University at Buffalo developed an object-oriented approach to
physically-based human tissue modeling for virtual reality applications. In this research
work, an approach termed the Atomic Unit Method was developed to produce a real-time,
physically accurate, volumetric virtual reality simulation of human tissues using
haptic (force feedback) devices.
2.5.7 Entertainment
The entertainment sector is one of the major users of VR technology today. A substantial
amount of research has been directed towards human modeling for animation purposes.
The aim of the research is to make human modeling and deformation capabilities
available to the general engineering and entertainment community without the need for
physical prototypes or scanning devices. Another area of research pertains to the
development of a new-generation interactive entertainment simulator that will provide
immersive virtual experiences to users.
A simple form of virtual reality has been used in the entertainment and news business for
quite some time. Whenever one watches the evening weather report, the meteorologist is
shown standing in front of changing weather maps. In the studio, however, he is actually
standing in front of a blue or green screen. This real image is supplemented with
computer-generated maps using a technique called chroma keying. It is also possible to
create a virtual studio environment so that the actors can appear to be positioned in a
studio with computer-generated decoration [64]. Movie special effects make use of digital
compositing to create illusions in a similar manner [65]. Strictly speaking, with current
technology, this may not be considered virtual reality because it is not generated in
real-time. Most special effects are created off-line, frame-by-frame, with a substantial amount
of user interaction and computer graphics system rendering. However, some work is
progressing in computer analysis of the live action images to determine the camera
parameters and to then use this to drive the generation of the virtual graphics objects to be
merged [66].
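The per-pixel logic of chroma keying can be sketched in a few lines: any pixel close enough to the key color is replaced with the corresponding background pixel. This is a minimal illustration assuming RGB images and a simple Euclidean distance test; real broadcast keyers work in other color spaces and produce soft mattes rather than a hard mask.

```python
import numpy as np

def chroma_key(foreground, background, key_color=(0, 255, 0), threshold=100.0):
    """Replace pixels near the key color with the background image.

    foreground, background: H x W x 3 uint8 arrays of the same shape.
    key_color: RGB triple of the screen color (pure green here).
    threshold: Euclidean RGB distance below which a pixel is keyed out.
    """
    fg = foreground.astype(np.float32)
    key = np.array(key_color, dtype=np.float32)
    # Distance of every pixel from the key color.
    dist = np.linalg.norm(fg - key, axis=-1)
    mask = dist < threshold          # True where the screen color shows
    out = foreground.copy()
    out[mask] = background[mask]     # composite: background replaces keyed pixels
    return out
```

Broadcast systems perform this test in dedicated hardware at video rates, which is what makes the live weather-map composite possible.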
Princeton Electronic Billboard has developed a virtual reality system that allows
broadcasters to insert advertisements into specific areas of the broadcast [67]. For
example, while broadcasting a baseball game, this system would be able to place an
advertisement in the image so that it appears on the outfield wall of the stadium. The
electronic billboard requires calibration to the stadium by taking images from typical
camera angles and zoom settings in order to build a map of the stadium including the
locations in the images where advertisements will be inserted. By using pre-specified
reference points in the stadium, the system automatically determines the camera angle
being used. By referring to the pre-defined stadium map, the advertisement can be
inserted into the correct place.
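Conceptually, once the camera angle has been matched against the pre-built stadium map, placing the advertisement reduces to mapping its corner points through a planar homography into the image. The sketch below illustrates only that final mapping step; the function name and the assumption that calibration has already produced the 3x3 matrix `H` are illustrative, not details of the Princeton system.

```python
import numpy as np

def project_ad_corners(H, ad_corners):
    """Map advertisement corner points into the broadcast image.

    H: 3x3 homography from stadium-map coordinates to image coordinates,
       assumed to have been recovered during calibration for the current
       camera angle and zoom setting.
    ad_corners: N x 2 array of corner positions on the stadium map.
    Returns the N x 2 pixel positions where the ad should be drawn.
    """
    pts = np.hstack([ad_corners, np.ones((len(ad_corners), 1))])  # homogeneous
    img = pts @ H.T
    return img[:, :2] / img[:, 2:3]   # perspective divide
```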
2.5.8 Other Applications
Virtual reality offers the potential to enhance architecture by combining three-dimensional
design, HMDs, sound, and movement to simulate a “walk-through” of a
virtual space before expensive construction of the structure begins. Although
architects are generally good at visualizing structures from blueprints and floor plans,
their clients often are not. Walking through virtual environments provides an opportunity
to test the design of buildings, interiors and landscaping, and to resolve
misunderstandings and explore options.
The use of VR has also been extended to business areas. VR technology has been used
for weather model visualization and for navigating scientific data. In arts and history, VR
can be used either as a novel medium to create interactive art forms, or as an instrument
that takes the user on a guided tour of existing conventional art or historical sites. Several
projects have been implemented on creating artistic models using mathematical
techniques (fractals), virtual instruments, virtual theater, recreating historical sites (e.g.
Xian Terracotta) [68], and developing virtual museums. VR technology also plays a major
role in early product development: new design models, such as a car interior, can be
simulated and tested in terms of ergonomics. The Virtual Gorilla project at Georgia Tech
and various virtual museum projects around the world represent some of the more useful
applications of VR technology in the field of education.
2.6 Collaborative Virtual Environments (CVE)
Today, there are several high-technology solutions to support cooperative interaction in
the workplace. Collaborative and immersive environments extend the concept of
Virtual Reality further by integrating networking concepts where users in different
geographic locations can communicate in a shared VR environment. Communication
technologies are used to overcome the geographical separation of collaborators and to
achieve the expected level of cooperation using teleconference, videoconference,
electronic mail, and networked document management systems. This follows a new
paradigm for scientific discovery wherein modern research is conducted by multiple
scientists around the world. Often, a team of distributed collaborators works in the same
subject area, sharing and discussing partial results. So, in areas where results are
presented as images, visualization can be executed in different places, at different
moments, and, of course, by more than one person. Most current visualization systems,
however, treat visualization as an individual activity [69].
Certainly, scientists have a great number of powerful systems such as IRIS Explorer [70],
AVS (Application Visualization System) [71], Khoros [72] and IBM Data Explorer [73].
However, even when using such systems, scientists still need to be physically
together to verify the results. Usually, these systems follow the dataflow model to
implement a visualization pipeline, which has been detailed by Haber and McNabb [74].
Various Collaborative Virtual Environments (CVE) have been developed in the areas of
distance learning, games, and engineering.
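The Haber and McNabb dataflow model organizes visualization as a chain of stages: raw data is filtered into derived data, mapped to an abstract visualization object, and finally rendered to a displayable image. The toy pipeline below mirrors that structure; the stage contents are invented for illustration, and only the three-stage decomposition comes from the model.

```python
def data_enrichment(raw):
    """Filter/derive stage: e.g. discard samples below a noise floor."""
    return [x for x in raw if x > 0.0]

def visualization_mapping(derived):
    """Map data values to abstract visual attributes (here: bar heights)."""
    peak = max(derived)
    return [x / peak for x in derived]       # normalized heights in [0, 1]

def rendering(avo, width=10):
    """Turn the abstract visualization object into displayable output."""
    return ["#" * int(round(h * width)) for h in avo]

def pipeline(raw):
    # Each stage consumes the previous stage's output, as in a dataflow network.
    return rendering(visualization_mapping(data_enrichment(raw)))
```

In systems like AVS or IRIS Explorer, each stage is a module whose output port feeds the next module's input, so stages can be distributed across machines, which is what makes the model attractive for collaborative visualization.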
There are two distinct types of collaborative virtual environments: window-based
collaborative environments and totally immersive collaborative environments. Window-based
collaborative environments refer to an X Windows type of system where users at different
locations are represented by a video screen with voice capability. Another window may
be used to represent a design model or the subject of discussion. There is some similarity
here to video-conferencing done in an X Windows environment. In a totally immersive
collaborative environment, users at different geographic locations are immersed in a
single shared environment with each represented by an avatar. A system called IRI
(Interactive Remote Instruction) [75], developed at Old Dominion University, is a
geographically dispersed virtual classroom, created by integrating high-speed computer
networks and interactive multi-media workstations. Each student participates using a
personal workstation, which can be used to view multi-media notebooks, and to interact
via video/audio and shared computer tools. Each workstation is equipped with a speaker,
microphone and video camera.
A collaborative and completely immersive tennis game [76] has been developed by a
group of researchers at MIRALab (University of Geneva). In this environment, the
interactive players were separated by 60 km and were merged into the
virtual environment using head-mounted displays, magnetic Flock of Birds sensors, and data
gloves. The Virtual Life NETwork (VLNET) [77], a general-purpose client/server
network system, is used for managing and controlling the shared networked virtual
environment via an Asynchronous Transfer Mode (ATM) network [78], using realistic
virtual humans (avatars) for user representation. These avatars support body deformation
during motion. They can also represent autonomous virtual actors, such as the synthetic
referee that forms part of the tennis game simulation. A special tennis ball driver
animates the virtual ball by detecting and handling collisions between the tennis ball, the
virtual rackets, the court, and the net.
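The internals of VLNET's tennis ball driver are not described here, but the core of any such driver is a loop that integrates the ball's motion and reflects its velocity on contact. Below is a minimal sketch of a single step against the court plane; all names, the ignored ball radius, and the restitution value are illustrative assumptions, not the actual VLNET implementation.

```python
import numpy as np

def step_ball(pos, vel, dt, restitution=0.8, court_y=0.0):
    """Advance the ball one time step and handle a court-plane collision.

    pos, vel: 3-vectors (x, y, z); y is height above the court plane.
    A collision is detected when the ball's center would pass below the
    court; the normal velocity component is reflected and damped by the
    restitution coefficient.
    """
    pos = pos + vel * dt                # simple Euler integration step
    if pos[1] < court_y:                # penetration test against the court
        pos = pos.copy()
        pos[1] = court_y                # push the ball back onto the surface
        vel = vel.copy()
        vel[1] = -vel[1] * restitution  # reflect and damp the bounce
    return pos, vel
```

A full driver would run this test against the rackets and the net as well, and broadcast the resulting ball state to all participants each frame.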
The tele-immersion group at the Electronic Visualization Lab (EVL) [79] at University of
Illinois at Chicago has been involved in the development of a virtual environment to
enable multiple globally situated participants to collaborate over high-speed, high-bandwidth
networks connected to heterogeneous supercomputing resources and large data
stores. Among the projects developed at EVL are the CAVE Research Network
(CAVERN) [80], the Collaborative Image Based Rendering (CIBR), the Laboratory for
Analyzing and Reconstructing Artifacts (LARA), the Tele-Immersive Data Explorer
(TIDE), Tandem [81], the Collaborative Architectural Layout Via Immersive Navigation
(CALVIN) [82], the Narrative Immersive Constructionist/Collaborative Educational
Environments (NICE) [83], the Round Earth Project [84], and the Computer
Augmentation for Smart Architectonics (CASA) [85] which is a networked collaborative
environment designed to allow the prototyping of smart homes and environments in VR.
All of these projects require supercomputers and high-end visualization facilities.
The CAVE Research Network (CAVERN) [86], for example, is an alliance of industrial
and research institutions equipped with CAVE-based virtual reality hardware and
high-performance computing resources, interconnected by high-speed networks, to support
collaboration in design, education, engineering, and scientific visualization.
CAVERNsoft [87] is the collaborative software backbone for CAVERN. CAVERNsoft
uses distributed data stores to manage the wide range of data volumes (from a few bytes
to several terabytes) that are typically needed for sustaining collaborative virtual
environments. Multiple networking interfaces support customizable, latency, data
consistency, and scalability that are needed to support a broad spectrum of networking
requirements. These diverse database and networking requirements have not been
exhibited by previous desktop multimedia systems but are common in real-time
immersive virtual reality applications.
The Collaborative Image Based Rendering (CIBR) Viewer [79] is a CAVERNsoft-based
tool for viewing animated sequences of image-based renderings from volume data. It was
designed to allow DOE scientists to view volume renderings composed of 2D image
slices, and it supports collaboration on a variety of visualization platforms, from desktop
workstations to CAVE™s.
LARA (Laboratory for Analyzing and Reconstructing Artifacts) [79] is an application for
developing collaborative walkthroughs of large virtual environments. LARA was
designed specifically to facilitate EVL's development of applications related to virtual
restorations and recreations of cultural and historic sites. LARA allows users to traverse
massive landscapes whose geometric parts may be located on distant distributed servers.
LARA also allows the creation of annotations that allow past visitors to leave fully
animated messages for new visitors. This same technology will allow world builders to
create interactive tours of the environments.
The Tele-Immersive Data Explorer (TIDE) [88] is a CAVERNsoft-based collaborative,
immersive environment for querying and visualizing data from massive and distributed
data-stores. TIDE is designed as a re-usable framework to facilitate the construction of
other domain-specific data exploration applications challenged with the problem of
having to visualize massive datasets. The TIDE framework allows groups of scientists
at geographically disparate locations to collectively participate in a data analysis
session in a shared virtual environment.
Tandem [89] is a Distributed Interaction Framework for Collaborative VR applications. It
makes use of the CAVE library (CAVElib), which is a set of libraries designed as a base
for developing virtual reality applications for spatially immersive displays, for VR
projection display support, and CAVERNsoft for its networking. This framework allows
VR developers to spend more time developing the application content and less time
implementing generic VR requirements.
As a sequel to LIMBO [90] (a simple collaborative program that allows multiple
participants, represented as avatars, to load and manipulate models in a persistent virtual
environment), Tandem provides a more flexible architecture for building rich
tele-immersive environments. Tandem is the architecture on which TIDE and other
CAVERN applications are based.
The Collaborative Architectural Layout Via Immersive Navigation (CALVIN) [91]
system is an immersive multimedia approach to applying virtual reality in architectural
design and collaborative visualization emphasizing heterogeneous perspectives. These
perspectives, including multiple mental models as well as multiple visual viewpoints,
allow virtual reality to be applied in the earlier, more creative, phases of design, rather
than just as a walk-through of the finished space. CALVIN's interface employs visual,
gestural, and vocal input to give the user greater control over the virtual environment. A
prototype of CALVIN has been created and used in the CAVE™ virtual reality
theater.
Narrative Immersive Constructionist/Collaborative Educational Environments (NICE)
[92] borrows and improves on the techniques developed in CALVIN. NICE is a project
that applies virtual reality to the creation of a family of educational environments for
young users. The approach is based on constructionism, where real and synthetic users,
motivated by an underlying narrative, build persisting virtual worlds through
collaboration. This approach is grounded on well-established paradigms in contemporary
learning and integrates ideas from such diverse fields as virtual reality, human-computer
interaction, storytelling, and artificial intelligence. The goal is to build an experiential
learning environment that will engage children in authentic activity. The system explores
these ideas within the CAVE™ virtual reality theater.
As a sequel to NICE, The Round Earth Project [93] investigates how virtual reality
technology can be used to help teach concepts that are counter-intuitive to a learner's
currently held mental model. Virtual reality can be used to provide an alternative
cognitive starting point that does not carry the baggage of past experiences. In particular,
two strategies are compared for using virtual reality to teach children that the Earth is
round when their everyday experience tells them that it is flat. One strategy starts the
children off on the Earth and attempts to transform their current mental model of the
Earth into the spherical model. The second strategy starts the children off on a small
asteroid where they can learn about the sphericity of the asteroid independent of their
Earth-bound experiences. Bridging activities then relate their asteroid experiences back to
the Earth. In each of the strategies, two children participate at the same time. One child
participates from a CAVE™ while the other participates from an Immersadesk™. The
child in the CAVE™ travels around the Earth or the asteroid to retrieve items to complete
a task, but cannot find these items without assistance. The child at the Immersadesk™,
with a view of the world as a sphere, provides this assistance. The children must reconcile
their different views to accomplish their task.
Computer Augmentation for Smart Architectonics (CASA) [94] is a collaborative VR
application to demonstrate the feasibility of designing "smart environments" in VR
depicting a house of the future. CASA is the predecessor to CALVIN. The environment
created involves remote CAVE™-to-CAVE™ collaboration via the Information Wide
Area Year (I-WAY) [95], which is an experimental high-performance network linking
dozens of the country's fastest computers and advanced visualization environments. This
network is based on Asynchronous Transfer Mode (ATM) technology, an emerging
standard for advanced telecommunications networks.
Collaborative Virtual Environments (COVEN) [96] is a European project that seeks to
develop a comprehensive approach to the issues in the development of collaborative
virtual environments (CVEs) technology. The overall objective of the COVEN project is
to comprehensively explore the issues in the design, implementation, and usage of
multi-participant shared virtual environments at scientific, methodological, and technical levels.
COVEN brings together twelve academic and industrial partners with a wide range of
expertise in Computer-Supported Co-operative Work (CSCW) [97], networked VR,
computer graphics, human factors, human-computer interaction, and telecommunications
infrastructures.
Another two European based projects are the Distributed Interactive Virtual Environment
(DIVE) [98] and the Model, Architecture and System for Spatial Interaction in Virtual
Environments (MASSIVE) [99]. DIVE is an internet-based multi-user VR system where
participants navigate in 3D space and see, meet and interact with other users and
applications. The DIVE software is a research prototype covered by licenses. Binaries for
non-commercial use, however, are freely available for a number of platforms. The first
DIVE version appeared in 1991. DIVE supports the development of virtual environments,
user interfaces and applications based on shared 3D synthetic environments. DIVE is
especially tuned to multi-user applications, where several participants interact
over a network. DIVE applications and activities include virtual battlefields, spatial
models of interaction, virtual agents, real-world robot control and multi-modal
interaction.
MASSIVE was developed as part of the on-going research into collaborative virtual
environments. This system allows multiple users to communicate using arbitrary
combinations of audio, graphics, and text media over local and wide area networks.
Communication is controlled by a so-called spatial model of interaction so that one user's
perception of another user is sensitive to their relative positions and orientations.
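The effect the spatial model produces can be illustrated with a toy awareness function in which one user's perception of another falls off with distance and vanishes outside the field of view. This is a deliberately simplified stand-in for the spatial model's aura/focus/nimbus machinery; the function, its parameters, and the linear fall-off are assumptions for illustration only.

```python
import numpy as np

def awareness(observer_pos, observer_dir, other_pos, max_range=20.0, fov_deg=120.0):
    """Toy spatial-model awareness: how strongly one user perceives another.

    observer_dir is a unit gaze vector. Awareness falls off linearly with
    distance and drops to zero outside the observer's field of view or
    beyond max_range.
    """
    offset = other_pos - observer_pos
    dist = np.linalg.norm(offset)
    if dist == 0.0:
        return 1.0
    if dist > max_range:
        return 0.0
    # Angle between the observer's gaze and the direction to the other user.
    cos_angle = np.dot(observer_dir, offset / dist)
    if cos_angle < np.cos(np.radians(fov_deg / 2.0)):
        return 0.0                        # outside the focus region
    return float(1.0 - dist / max_range)  # linear distance fall-off
```

A system like MASSIVE evaluates a quantity of this general kind per medium (audio, graphics, text), so a user who turns away or moves off may fade from one medium before another.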
Virtue [100] is a collaborative virtual environment project developed by the Pablo
Research Group at the University of Illinois. The objective of the project
is the development of virtual environments that allow software developers to directly
manipulate software components and their behavior while immersed in scalable,
hierarchical representations of software structure and real-time performance data.
The goal of Virtue is to eliminate the barrier that separates the real world of users from
the abstract world of software and its dynamics. In turn, this makes large-scale, complex
software and its behavior concrete entities that can be understood and manipulated in the
same ways as the physical environment.
2.7 Research Focus and Direction
The aim of this research is to develop a collaborative virtual environment for engineering
design communication. Various applications of CVE have been discussed in the
preceding sections. These applications are mainly used for educational, entertainment, and
general scientific visualization purposes. In addition, they rely on state-of-the-art,
high-end VR systems or very expensive peripherals that are not available to typical
designers (e.g. a CAVE™ and high-speed computers). In contrast, the research presented in
this dissertation extends the use of CVE in engineering design where geographically
distributed designers can communicate and interact in a single immersive virtual
environment. The scalable, heterogeneous approach developed here enables designers to
discuss engineering problems to enable more effective decision-making. The proposed
environment is intended to be multi-platform, user-friendly, and usable with
minimal hardware requirements. This means that users can run the application on
UNIX, IRIX, and PC machines. Another aspect of this research is to show that Virtual
Reality (VR) is not necessarily limited to high-end peripherals. The use of low-end
peripherals, such as stereographic shutter glasses, is sufficient to transform a 2D monitor
screen into a 3D VR world. However, the use of more expensive peripherals, such as a
CAVE™, an Immersadesk™, or a Head Mounted Display (HMD), will certainly enhance
the sense of reality in the virtual environment.
A multiple platform collaborative distributed virtual environment for engineering design
is developed in this research. The environment integrates virtual reality technology,
design sensitivity analysis, and distributed environments to facilitate more efficient
communication among geographically dispersed designers.
REFERENCES
[1] Schroeder, R., "Virtual Reality in the Real World: History, Applications and Projections", Futures, November 1993, p. 963.
[2] Larry, S., Virtual Reality Now: A Look at Today's Virtual Reality, MIS Press, New York, 1994.
[3] Pearce, D. (1998), Brave New World? A Defence of Paradise-Engineering, http://www.huxley.net.
[4] Seiler, E. (2001), Isaac Asimov Home Page, http://www.clark.net/pub/edseiler/WWW/asimov_home_page.html.
[5] Bianchi, R. A. da C. (1995), Arthur C. Clarke Unauthorized Homepage, http://www.lsi.usp.br/~rbianchi/clarke/.
[6] Gibson, W., Neuromancer, Ace Books, New York, 1984.
[7] Gibson, W., Count Zero, Arbor House, New York, 1986.
[8] Gibson, W., Mona Lisa Overdrive, Bantam Books, Toronto, 1988.
[9] Tong, A. (1994), Star Trek: The Next Generation, http://www.ugcs.caltech.edu/st-tng/.
[10] Rheingold, H., Virtual Reality, Simon & Schuster, New York, 1991.
[11] Arthur, K. W., et al., "Evaluating 3D Task Performance for Fish Tank Virtual Worlds", ACM Transactions on Information Systems, July 1993, p. 239.
[12] Sutherland, I. E., "The Ultimate Display", Proceedings of the 1965 IFIP Congress, Vol. 2, 1965, pp. 506-508.
[13] Evans & Sutherland (2001), Evans & Sutherland Inc., http://www.es.com.
[14] Krueger, M., "VIDEOPLACE: A Report from the Artificial Reality Laboratory", Leonardo, Vol. 18, No. 3, 1985, pp. 145-151.
[15] Cruz-Neira, C., Sandin, D. J., and DeFanti, T., "Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE", Computer Graphics (Proceedings of SIGGRAPH '93), ACM SIGGRAPH, August 1993, pp. 135-142.
[16] Milgram, P., Takemura, H., and Kishino, F., "Merging Real and Virtual Worlds", Proc. First IEEE/IEICE Int'l Workshop on Networked Reality in Telecommunications (NR94), Tokyo, May 1994.
[17] Krueger, M., Artificial Reality II, Addison-Wesley, Reading, MA, 1991.
[18] Adam, J. A., "Virtual Reality is for Real", IEEE Spectrum, Vol. 30, No. 10, October 1993, pp. 22-29.
[19] Rosenblum, L. J., "Distributed Virtual Reality: Supporting Remote Collaboration in Vehicle Design", IEEE Computer Graphics and Applications, Vol. 17, No. 2, March-April 1997, pp. 13-17.
[20] Mahoney, P. D., "Driving VR", Computer Graphics World, May 1995, pp. 22-33.
[21] Owen, J. V., "Making Virtual Manufacturing Real", Manufacturing Engineering, Vol. 113, No. 5, November 1994, pp. 33-37.
[22] Bryson, S., and Levit, C., "The Virtual Windtunnel: An Environment for the Exploration of Three-Dimensional Unsteady Fluid Flows", Proceedings of IEEE Visualization '91, San Diego, CA, 1991.
[23] van Liere, R., Mulder, J. D., and van Wijk, J. J., "Computational Steering", Future Generation Computer Systems, 12(5):441-450, April 1997.
[24] Renambot, L., Bal, H. E., Germans, D., and Spoelder, H. J. W., "CAVEStudy: An Infrastructure for Computational Steering in Virtual Reality Environments", Proceedings of the Ninth IEEE International Symposium on High Performance Distributed Computing, Pittsburgh, PA, IEEE Computer Society Press, August 2000, pp. 57-61.
[25] Miller, M., Hansen, C. D., Parker, S. G., and Johnson, C. R., "Simulation Steering with SCIRun in a Distributed Memory Environment", Seventh IEEE International Symposium on High Performance Distributed Computing (HPDC-7), July 1998 (poster presentation).
[26] Jablonowski, D., Bruner, J., Bliss, B., and Haber, R., "VASE: The Visualization and Application Steering Environment", Proceedings of Supercomputing '93, 1993, pp. 560-569.
[27] Vetter, J., and Schwan, K., "High Performance Computational Steering of Physical Simulations", Proceedings of the 11th IPPS '97, 1997, pp. 128-132.
[28] Geist, G. A., II, Kohl, J., and Papadopoulos, P., "CUMULVS: Providing Fault Tolerance, Visualization, and Steering of Parallel Applications", Int. J. of Supercomputer Applications and High Performance Computing, 3(11):224-235, 1997.
[29] Rathmayer, S., and Lenke, M., "A Tool for On-line Visualization and Interactive Steering of Parallel HPC Applications", Proceedings of the 11th IPPS '97, 1997, pp. 181-186.
[30] Van Wijk, J., and Van Liere, R., "An Environment for Computational Steering", Computer Society Press, 1997.
[31] Winer, E., and Bloebaum, C. L., "Development of Visual Design Steering as an Aid in Large-Scale Multidisciplinary Design Optimization - Part 1: Method Development", accepted for Structural Optimization.
[32] Lavalley Lumber (2000), Kitchens Virtual Reality Demo, http://www.lavalleylumber.com/kitchen.cfm.
[33] Johnstone, B., "Kitchen Magician", Wired magazine, http://www.wired.com, July 1994.
[34] Nomura, J., Ohata, H., Imamura, K., and Schultz, R. J., "Virtual Space Decision Support System and Its Application to Consumer Showrooms", Matsushita whitepaper, 1992.
[35] Kheddar, A., Bouzit, M., and Coiffet, P., "A VR System Devoted to Telerobotics and Applications in Telesurgery", 3rd French-Israeli Symp. on Robotics, Gle Robotics and Robotics in Medicine, Hertzelia, Israel, 22-23 May 1995, pp. 36-41.
[36] Kim, W. S., Schenker, P. S., et al., "An Advanced Operator Interface Design with Preview/Predictive Displays for Ground-Controlled Space Telerobotic Servicing", Proceedings of SPIE Vol. 2057: Telemanipulator Technology and Space Telerobotics, 1993, pp. 96-107.
[37] Milgram, P., Zhai, S., et al., "Applications of Augmented Reality for Human-Robot Communications", Proceedings of 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1993, pp. 1467-1476.
[38] Milgram, P., Rastogi, A., and Grodski, J. J., "Telerobotic Control Using Augmented Reality", Robot and Human Communication (RO-MAN) 95, Japan, 1995.
[39] Feiner, S., MacIntyre, B., et al., "Windows on the World: 2D Windows for 3D Augmented Reality", Proceedings of ACM Symposium on User Interface Software and Technology, Atlanta, GA, Association for Computing Machinery, 1993, pp. 145-155.
[40] Uenohara, M., and Kanade, T., "Vision-Based Object Registration for Real-Time Image Overlay", Computer Vision, Virtual Reality and Robotics in Medicine: CVRMed '95 (N. Ayache, ed.), Springer-Verlag, Berlin, 1995, pp. 14-22.
[41] Feiner, S., Webster, T., et al. (1995), Architectural Anatomy, http://www.cs.columbia.edu:80/graphics/projects.
[42] Urban, E. C., "The Information Warrior", IEEE Spectrum, 32(11), 1995, pp. 66-70.
[43] Caudell, T. P., "Introduction to Augmented Reality", SPIE Proceedings Vol. 2351: Telemanipulator and Telepresence Technologies, 1994.
[44] Sims, D., "New Realities in Aircraft Design and Manufacture", IEEE Computer Graphics and Applications, 14(2), 1994, p. 91.
[45] Kesavadas, T., and Ernzer, M., "Design of an Interactive Virtual Factory Using Cell Formation Methodologies", American Society of Mechanical Engineers, Material Handling Division, Vol. 5, 1999, pp. 201-208.
[46] Ernzer, M., and Kesavadas, T., "Interactive Design of a Virtual Factory Using Cellular Manufacturing System", Proceedings of IEEE International Conference on Robotics & Automation, Vol. 3, 1999, pp. 2428-2433.
[47] Sadagic, A., Towles, H., Holden, L., Daniilidis, K., and Zeleznik, B., "Tele-immersion Portal: Towards an Ultimate Synthesis of Computer Graphics and Computer Vision Systems", accepted for The 4th Annual International Workshop on Presence, Philadelphia, USA, May 21-23, 2001.
[48] Disz, T. L., Papka, M. E., Pellegrino, M., and Stevens, R., "Sharing Visualization Experiences among Remote Virtual Environments", Proceedings of the International Workshop on High Performance Computing for Computer Graphics and Visualization, Springer-Verlag, 1995, pp. 217-237.
[49] Molet, T., Aubel, A., Çapin, T., Carion, S., Lee, E., Thalmann, N. M., Noser, H., Pandzic, I., Sannier, G., and Thalmann, D., "Anyone for Tennis?", Presence, MIT Press, Vol. 8, No. 2, April 1999, pp. 140-156.
[50] Capin, T. K., Pandzic, I. S., Thalmann, N., and Thalmann, D., "Realistic Avatars and Autonomous Virtual Humans in VLNET Networked Virtual Environments", Virtual Worlds in the Internet (R. Earnshaw and J. Vance, eds.), IEEE Computer Society Press, 1998, pp. 157-174.
[51] Advanced Network & Services Inc. (1997), The National TeleImmersion Initiative Home Page, http://io.advanced.org/tele-immersion/.
[52] Hoffman, H., and Vu, D., "Virtual Reality: Teaching Tool of the Twenty-First Century", Academic Medicine, 72, 1997, pp. 1076-1081.
[53] Smith, R., "Current Military Simulations and the Integration of Virtual Reality Technologies", Virtual Reality World, Vol. 2, No. 2, 1994, pp. 45-50.
[54] Naval Air Warfare Center Training Systems Division (2001), Virtual Environment for Submarine Ship Handling Training (VESUB), http://www.ntsc.navy.mil/Programs/Tech/Virtual/VESUB/project.htm.
[55] von Wiegand, T. E., Schloerb, D. W., and Sachtler, W. L., "Virtual Workbench: Near-Field Virtual Environment System with Applications", Presence: Teleoperators and Virtual Environments, 8(5):492-519, October 1999.
[56] Traub, D. C., "Simulated World as Classroom: The Potential for Designed Learning within Virtual Environments", Virtual Reality: Theory, Practice and Promise, 1991, pp. 111-121.
[57] Urban, E. C., "The Information Warrior", IEEE Spectrum, 32(11), 1995, pp. 66-70.
[58] Metzger, P. J., "Adding Reality to the Virtual", Proceedings of the IEEE 1993 Virtual Reality Annual International Symposium, 1993, pp. 7-13.
[59] Henri, C. J., Colchester, A. C. F., Zhao, J., Hawkes, D. J., Hill, D. L. G., and Evans, R. L., "Registration of 3D Surface Data for Intra-Operative Guidance and Visualization in Frameless Stereotactic Neurosurgery", Proc. of Computer Vision, Virtual Reality and Robotics in Medicine, Nice, France, IEEE, April 1995, pp. 47-58.
[60] Hollands, R. J., and Trowbridge, E. A., "A PC-Based Virtual Reality Arthroscopic Surgical Trainer", Proc. Simulation in Synthetic Environments, New Orleans, 1996, pp. 17-22.
[61] Satava, R. M., "Virtual Reality Surgical Simulator: The First Steps", Surgical Endoscopy, 7, 1993, pp. 203-205; also in VR93: Proceedings of the Third Annual Conference on Virtual Reality, London, April 1993, Meckler Ltd., London, 1995, pp. 103-105.
[62] Hodges, L. F., Kooper, R., Meyer, T. C., Rothbaum, B. O., Opdyke, D., de Graaff, J. J., Willford, J. S., and North, M. M., "Virtual Environments for Treating the Fear of Heights", IEEE Computer, 28(7), July 1995, pp. 27-34.
[63] Chugh, K., Kesavadas, T., and Mayrose, J., "The Atomic Unit Method: A Physically Based Volumetric Model for Interactive Tissue Simulation", World Congress on Medical Physics and Biomedical Engineering, Chicago, 2000.
[64] Schmidt, R. (1996), Blue Screen TV Studios, http://www.heathcom.no/~robert/blue.htm.
[65] Pyros, G. G., and Goren, B. N., "Desktop Motion Image Processing for Episodic Television", Advanced Imaging, 10(7), 1995, pp. 30-34.
[66] Zorpette, G., "An Eye-Popping Summer", IEEE Spectrum, 31(10), 1994, pp. 19-22.
[67] National Association of Broadcasters (1994), Princeton Electronic Billboard Develops "Virtual Arena" Technology for Television, Washington, DC, National Association of Broadcasters.
[68] Virtual Reality Application Center (1998), Iowa State University, http://www.vrac.iastate.edu/.
[69] Wood, J. D., Wright, H., and Brodlie, K. W., "CSCV - Computer Supported Collaborative Visualization", Proceedings of BCS Displays Group International Conference on Visualization and Modeling, 1995.
[70] IRIS Explorer 2.0, Technical Report, Silicon Graphics Computer Systems, Mountain View, 1992.
[71] Application Visualization System (AVS), Technical Overview, Advanced Visual Systems, Waltham, MA, October 1992.
[72] Khoral Research Inc., Khoros, 1999, http://www.khoral.com/.
[73] IBM Visualization Data Explorer User's Guide, IBM Corp., Sixth Edition, 1995.
[74] Haber, R. B., and McNabb, D. A., "Visualization Idioms: A Conceptual Model for Scientific Visualization Systems", Visualization in Scientific Computing, IEEE, 1990, pp. 74-93.
[75] Maly, K., Wild, C., Overstreet, C. M., Abdel-Wahab, H., Gupta, A., Youssef, A., Stoica, E., Talla, R., and Prabhu, A., Virtual Classrooms and Interactive Remote Instruction.
[76] Molet, T., Aubel, A., Capin, T., Carion, S., Lee, E., Thalmann, N., Noser, H., Pandzic, I., Sannier, G., and Thalmann, D., "Anyone for Tennis?", Presence, Vol. 8, No. 2, April 1999.
[77] Çapin, T. K., Pandzic, I. S., Noser, H., Thalmann, N. M., and Thalmann, D., "Virtual Human Representation and Communication in VLNET", IEEE Computer Graphics and Applications, Vol. 17, No. 2, 1997, pp. 42-53.
[78] Siu, K. Y., and Jain, R., "A Brief Overview of ATM: Protocol Layers, LAN Emulation, and Traffic Management", Computer Communication Review, Vol. 25, No. 2, April 1995, pp. 6-20.
[79] Electronic Visualization Laboratory (EVL) website, University of Illinois at Chicago, 1995, http://www.evl.uic.edu/.
[80] Johnson, A., Leigh, J., DeFanti, T., Brown, M., and Sandin, D., "CAVERN: The CAVE Research Network", Proceedings of 1st International Symposium on Multimedia Virtual Laboratory, Tokyo, Japan, March 25, 1998, pp. 15-27.
[81] Goldstein, B. A., "TANDEM: A Component-Based Framework for Interactive Collaborative Virtual Reality", M.Sc. Thesis, University of Illinois at Chicago, 2000.
[82] Leigh, J., Johnson, A. E., Vasilakis, C., and DeFanti, T. A., "CALVIN: An Immersimedia Design Environment Utilizing Heterogeneous Perspectives", Proc. IEEE International Conference on Multimedia Computing and Systems, 1996.
[83] Roussos, M., Johnson, A., Moher, T., Leigh, J., Vasilakis, C., and Barnes, C., "Learning and Building Together in an Immersive Virtual World", Presence, 8(3), special issue on Virtual Environments and Learning (W. Winn and M. J. Moshell, eds.), MIT Press, June 1999, pp. 247-263.
[84] Johnson, A., Moher, T., Ohlsson, S., and Gillingham, M., "The Round Earth Project: Deep Learning in a Collaborative Virtual World", Proceedings of IEEE VR99, Houston, TX, March 13-17, 1999, pp. 164-171.
[85] Vasilakis, C., CASA - Computer Augmentation for Smart Architectonics, Master of Fine Arts thesis and performance art piece at Electronic Visualization Event 4, University of Illinois at Chicago, 1995.
[86] Johnson, A., Leigh, J., DeFanti, T., Brown, M., and Sandin, D., "CAVERN: The CAVE Research Network", Proceedings of 1st International Symposium on Multimedia Virtual Laboratory, Tokyo, Japan, March 25, 1998, pp. 15-27.
[87] Park, K., Cho, Y., Krishnaprasad, N., Scharver, C., Lewis, M., Leigh, J., "CAVERNsoft G2: A Toolkit for High Performance Tele-Immersive Collaboration," ACM Symposium on Virtual Reality Software and Technology, Seoul, Korea, 2000.
[88] Sawant, N., Scharver, C., Leigh, J., Johnson, A., Reinhart, G., Creel, E., Batchu, S., Bailey, S., Grossman, R., "The Tele-Immersive Data Explorer: A Distributed Architecture for Collaborative Interactive Visualization of Large Data-sets," 4th International Immersive Projection Technology Workshop, Ames, Iowa, June 19-20, 2000.
[89] Goldstein, B. A., "TANDEM: A Component-Based Framework for Interactive Collaborative Virtual Reality," M.Sc. Thesis, University of Illinois at Chicago, 2000.
[90] Leigh, J., Rajlich, P., Stein, R., Johnson, A. E., DeFanti, T. A., "LIMBO/VTK: A Tool for Rapid Tele-Immersive Visualization," Proceedings of IEEE Visualization '98, Research Triangle Park, NC, October 18-23, 1998.
[91] Leigh, J., Johnson, A. E., Vasilakis, C., DeFanti, T. A., "CALVIN: an Immersimedia Design Environment Utilizing Heterogeneous Perspectives," Proc. IEEE International Conference on Multimedia Computing and Systems, 1996.
[92] Roussos, M., Johnson, A., Moher, T., Leigh, J., Vasilakis, C., Barnes, C., "Learning and Building Together in an Immersive Virtual World," Presence, Vol. 8, No. 3, special issue on Virtual Environments and Learning, edited by Winn, W. and Moshell, M. J., MIT Press, pp. 247-263, June 1999.
[93] Johnson, A., Moher, T., Ohlsson, S., Gillingham, M., "The Round Earth Project: Deep Learning in a Collaborative Virtual World," Proceedings of IEEE VR99, Houston, TX, March 13-17, 1999, pp. 164-171.
[94] Vasilakis, C., CASA - Computer Augmentation for Smart Architectonics, Master of Fine Arts thesis and performance art piece at Electronic Visualization Event 4, University of Illinois at Chicago, 1995.
[95] DeFanti, T., Foster, I., Papka, M. E., Stevens, R., Kuhfuss, T., "Overview of the I-WAY: Wide Area Visual Supercomputing," International Journal of Supercomputing Applications, Vol. 10, No. 2, 1996.
[96] The COVEN Project: Exploring Applicative, Technical, and Usage Dimensions of Collaborative Virtual Environments, Presence, Vol. 8, No. 2, pp. 218-236, MIT Press, 1999.
[97] Rodden, T., Blair, G., "CSCW and Distributed Systems: The Problem of Control," Proceedings of the Second European Conference on Computer Supported Cooperative Work - ECSCW '91, Amsterdam, 1991.
[98] Carlsson, C., Hagsand, O., "DIVE - A Platform for Multi-User Virtual Environments," Computers & Graphics, Vol. 17, No. 6, pp. 663-669, 1993.
[99] Greenhalgh, C., Benford, S., "MASSIVE: a Distributed Virtual Reality System Incorporating Spatial Trading," 15th International Conference on Distributed Computing Systems (DCS '95), Vancouver, Canada, May 30-June 2, 1995, IEEE Computer Society Press.
[100] Shaffer, E., Whitmore, S., Schaeffer, B., Reed, D. A., "Virtue: Immersive Performance Visualization of Parallel and Distributed Applications," IEEE Computer, December 1999, pp. 44-51.