Creating an Architectural Virtual Environment for the Visually Challenged
by
Min-Hank Ho
Submitted to the Department of Electrical Engineering and Computer Science
in Partial Fulfillment of the Requirements for the Degrees of
Bachelor of Science in Electrical [Computer] Science and Engineering
and Master of Engineering in Electrical Engineering and Computer Science
at the Massachusetts Institute of Technology
May 22, 2000
© 2000 Min-Hank Ho. All rights reserved.
The author hereby grants to M.I.T. permission to reproduce and
distribute publicly paper and electronic copies of this thesis
and to grant others the right to do so.
Author
Department of Electrical Engineering and Computer Science
May 2, 2000
Certified by
Stephen A. Benton
Thesis Supervisor
Accepted by
Arthur C. Smith
Chairman, Department Committee on Graduate Theses
Creating an Architectural Virtual Environment for the Visually Challenged
by
Min-Hank Ho
Submitted to the
Department of Electrical Engineering and Computer Science
May 22, 2000
In Partial Fulfillment of the Requirements for the Degrees of
Bachelor of Science in Electrical [Computer] Science and Engineering
and Master of Engineering in Electrical Engineering and Computer Science
ABSTRACT
New architectural settings pose a physical hazard for the visually challenged. This thesis
describes a project to create a multi-modal virtual environment that allows the visually challenged
to pre-visit a virtual duplicate of a site before going to the actual location. Using VRML as the
modeling platform in conjunction with the scanning laser ophthalmoscope and the PHANToM, the
resulting architectural virtual environment is tailored to the visually challenged by providing both
visual and haptic feedback.
Thesis Supervisor: Stephen A. Benton
Title: Allen Professor of Media Arts and Sciences
Director, MIT Center for Advanced Visual Studies
This thesis is dedicated to my parents
Acknowledgements
Work on this project would have been impossible without the aid and support of the following people and
organizations:
Elizabeth Goldring, for supervising and leading the development of the project.
Stephen A. Benton, for supervising the progress of this thesis.
Sensable Technologies, for providing a PHANToM 1.5 for the use of this project.
Canon Incorporated, for providing the SLO 101 for the use of this project.
The Center for Advanced Visual Studies at MIT, for providing the computing facilities used in
this project.
An audio version of this thesis is included on CD for the use of visually challenged readers.
Table of Contents
Abstract
Acknowledgements
List of Figures
1. Introduction
2. Background
   2.1 The SLO and visual disorders
   2.2 Virtual Reality Modeling Language
   2.3 Haptics
   2.4 Previous works
3. Design and implementation
   3.1 Design requirements
   3.2 Implementation details
      3.2.1 Hardware
      3.2.2 Software
      3.2.3 Haptic browser
      3.2.4 Navigation
4. Limitations
   4.1 Collisions
      4.1.1 Walking stick
      4.1.2 Viewpoint and avatar
      4.1.3 Elevation changes
   4.2 Performance issues
      4.2.1 Searches
      4.2.2 Messaging
   4.3 Dynamic environments
5. Evaluation
   5.1 Technical evaluation
   5.2 User evaluation
6. Future work
   6.1 Haptic browser
   6.2 AVE package
7. Conclusion
References
List of Figures
Figure 2-1   An architectural model as seen through the SLO
Figure 3-1   The scanning laser ophthalmoscope
Figure 3-2   Workspace of the PHANToM 1.5
Figure 3-3   PHANToM 1.5 with stylus
Figure 3-4   Haptic browser component diagram
Figure 3-5   Haptic browser module diagram
1. Introduction
To the average person, visiting a department store that just opened or a library that has just been
remodeled may be an interesting or exciting experience. Certainly, "fear" and "anxiety" are the
last words that would describe such an event. To the visually challenged, however, fear and anxiety
become very real emotions when they first step into an unfamiliar setting. So strong are these
fears that many will choose to surrender their independence in movement and travel to avoid any
new environments. While many visually challenged people can eventually develop a sense of the places
that they frequent, nothing can take away the dread that accompanies "the first time," when
every uncertain step risks a trip or fall that places them in real physical danger.
Nothing may completely replace the ability to see, but situations such as the one described above
may be ameliorated by providing visually challenged people with a method of familiarizing
themselves with a new environment without having to face the hazards of the actual setting. A
common approach is to have a close friend or relative guide them for the first few times through
relevant parts of the setting. Alternatives are to listen to descriptions of the new setting or to feel
a Braille map of the location. These methods may be useful, but they cannot replace independent
exploration of the new environment. Ideally, visually challenged people should be able to move
about the new space and note details about the setting of concern to them, but without the hazard
of falling down or bumping into other people. In fact, this is possible through the use of a new type
of virtual environment (VE).
A variety of new technologies developed within the past decade have made today's VEs much more
realistic and useful than before. These technologies include advances in graphic rendering,
environmental audio, and haptic output. The new applications for these advances range from
immersive simulations for training and recreation to virtual product displays for businesses and
corporations. Researchers have also found a variety of uses for VE technology in the medical
world. Some are using VEs to hasten physical rehabilitation [1]. Others have considered using VEs
to enhance medical imaging [2]. While some have tinkered with trying to make computing
technology accessible to the visually challenged through audio and tactile senses, few have
considered using VEs to make entire architectural environments and settings accessible to the
blind, and perhaps not without reason.
Many researchers have developed realistic multi-modal VEs incorporating a mix of visual, audio,
and haptic modalities, but those VEs are invariably highly impressive graphic scenes
complemented by sound and touch. The current VE paradigm is built around the primacy of the
visual sense. Although any number of the audio, tactile, olfactory, or taste senses may be
omitted from a VE, it must not be deprived of visual information. Thus, creating a VE for the visually
challenged may seem a futile task since no visual information may be relayed.
Besides the intrinsic fallacy of assuming that other modalities cannot make up for the lack of
visual descriptions, a contention that the visually challenged disprove in real life every day, a
visual effector does, in fact, exist that can be used by many visually challenged people. Known
among ophthalmologists as the Scanning Laser Ophthalmoscope (SLO), this device is capable of
allowing many visually challenged people to see a variety of images, provided that the person still
has some viable retina remaining. With the aid of this device, a multi-modal VE tailored specifically
for the visually challenged becomes possible.
This thesis describes a project that created an architectural virtual environment (AVE) system to
assist the visually challenged in familiarizing themselves with new architectural environments.
Specific emphasis is placed on a haptic browser that provides the interface between the user and
the AVE. Because the haptic browser is responsible for coordinating everything the user senses
in the AVE, it may be considered the core of the AVE package. The next section will review
background information on relevant technologies and discuss previous similar research. Section
Three outlines the requirements for the AVE and presents the current implementation of the AVE
package and the haptic browser. Section Four discusses problems and unresolved issues of the
current implementation, while Section Five presents an evaluation of the haptic browser and
suggestions for evaluating the final AVE package as a whole. Finally, Section Six discusses
possible improvements to the haptic browser and future extensions to the project, followed by the
conclusions in Section Seven.

2. Background

This section provides details of equipment, technology, and information relevant to the final
design and implementation of the AVE. It also presents some past research on the development
of virtual environments for the visually challenged.

2.1 The SLO and visual disorders

Dr. Robert Webb, the inventor of the SLO, originally created the SLO to allow ophthalmologists to
actively observe a patient's retina during its stimulation. Using a helium-neon laser and a complex
system of optics, it scans images straight into a patient's eye while an infrared camera uses the
same optical system in reverse to capture an image of the retina. When Elizabeth Goldring, a
visually challenged poet-artist working at the MIT Center for Advanced Visual Studies, went to her
ophthalmologist for a pre-surgical examination, she realized that this instrument allowed her to
exercise her visual sense for the first time since she lost her vision. Since then, she has become
an advocate of using the SLO as a "seeing-machine," hoping that it might allow many more
visually challenged people to once again use their sense of sight.

Goldring lost her vision to proliferative retinopathy and macular degeneration. Age-related
Macular Degeneration (AMD) is the leading cause of blindness in the United States today,
affecting more than 10 million Americans [3]. This incurable disease causes the macula (or fovea)
of the eye to progressively deteriorate, resulting in the loss of central vision. Proliferative
retinopathy is an advanced form of diabetic retinopathy that damages the blood vessels that
supply the retina, resulting in retinal scars that block out large areas of vision [4]. While both
diseases can cause blindness, they generally do not completely destroy the retina. The SLO
utilizes any remaining small viable portions of the retina to allow visually challenged persons to
see.

Webb and Goldring have both identified the key feature of the SLO that allows her to see: the
device's use of the "Maxwellian view." A Maxwellian view optical system [5] forms a small image
of the light source at the pupil of the eye, resulting in a high-contrast image of uniform luminance
that is undisturbed by defects in the cornea and lens. In essence, the original optical mechanism
of the eye is replaced by one that literally brings images right into the eye, allowing the usable
portions of the retina to see the image. Other than offering a wide-angle Maxwellian view, the
SLO's graphic capabilities are unexceptional; the resolution and red-levels are equivalent to a
monochrome VGA monitor (see figure 2-1). However, most visually challenged people cannot
distinguish very many lines of resolution or levels of contrast. Thus, the SLO serves well as a
visual effector for the AVE for visually challenged people.

Figure 2-1. An architectural model as seen through the SLO.
2.2 Virtual Reality Modeling Language
At the First International Conference on the World Wide Web (WWW), Mark Pesce and Tony
Parisi presented a prototypical 3-D web interface called Labyrinth. This presentation led to the
standardization of a language to specify 3-D scenes on the WWW: the Virtual Reality Modeling
Language (VRML) [6]. VRML 1.0 was based on the Silicon Graphics Open Inventor language,
using plain text to specify objects and their properties. Since then, VRML has undergone several
revisions and extensions. The current international standard, VRML97 (VRML 2.0), is the end
product of efforts by the VRML Architecture Group to improve the original language. With VRML
2.0 came the formation of the VRML Consortium and the beginnings of the External Authoring
Interface (EAI). The Consortium, now renamed the Web3D Consortium, provides support for and
improvements to VRML through a network of independent developers and working groups.
Among them is the EAI working group, which has submitted a specification for EAI for
standardization.
The External Authoring Interface is a conceptual description of an interface between the external
environment and the VRML scene [7]. The proposal submitted by the working group also included
binding details to the Java development environment, and the CosmoPlayer developers chose to
implement these bindings. Thus, in the context of this project, EAI describes a Java package that
allows developers to access a VRML scene viewed through CosmoPlayer, though any browser
implementing EAI bindings to Java can substitute for CosmoPlayer.
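As an illustration of how an external Java program reaches a scene through EAI, a minimal fragment might look like the following. The node name WALKING_STICK is hypothetical; the classes shown are those of the EAI Java bindings (the vrml.external package) as implemented by CosmoPlayer.

    import java.applet.Applet;
    import vrml.external.Browser;
    import vrml.external.Node;
    import vrml.external.field.EventInSFVec3f;

    public class StickMover extends Applet {
        private EventInSFVec3f setPosition;

        public void start() {
            // Attach to the CosmoPlayer instance embedded in the same page.
            Browser browser = Browser.getBrowser(this);
            // Look up a DEF-named node in the VRML scene (name is hypothetical).
            Node stick = browser.getNode("WALKING_STICK");
            // Obtain a writable handle to the node's translation field.
            setPosition = (EventInSFVec3f) stick.getEventIn("set_translation");
        }

        // Move the stick to a new position in world coordinates.
        public void moveStick(float x, float y, float z) {
            setPosition.setValue(new float[] { x, y, z });
        }
    }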
In addition to EAI, VRML also provides other advantages as a 3-D modeling platform. Because it
is an open language created for the purpose of distribution over the WWW, any VRML world may
easily be distributed and shared without expensive licenses or specialized software. Furthermore,
VRML97 intrinsically supports environmental audio. This allows for the addition of audio feedback
in future AVEs. Finally, VRML is highly extensible. The language may be extended through proto
nodes [8] that allow developers to add features to the language to suit specific needs. While proto
nodes are not part of the standard specification, their implementations may be
distributed along with the VRML file.
The main drawback of VRML is the large processing overhead of current VRML browsers. Many
VRML browsers achieve low frame rates and fail to provide a smooth dynamic rendering of a
large VRML scene. However, this problem will soon be resolved with better implementations of
VRML browsers that take advantage of new 3-D acceleration technologies.
Another drawback is the difficulty of creating a realistic-looking environment. The language
provides an adequate syntax and vocabulary for 3-D modeling but falls short when trying to
duplicate a complex real-world scene. These problems are particularly noticeable when trying to
create realistic lighting effects, surface textures, or irregularly curved surfaces. A realistic VRML
scene is not impossible, but the language is unwieldy for anyone trying to create a realistic scene.
Furthermore, this problem is exacerbated by the fact that most VRML browsers only produce
barely-acceptable graphical renderings of a VRML scene. Of course, given their already dismal
performance, it is understandable why extra processing power would not be devoted to realistic
graphic renderings; however, the graphical limitations of the browser present another obstacle to
developers who desire realism.
Finally, current VRML specifications fail to provide mechanisms to handle object-to-object
collisions. While an object may be animated and scripted to move and interact with the user, the
language has no convenient way of describing the interaction of two objects in the environment. A
way around this problem is conceivable, but not without considerable investment in coding and
processing. This limitation of the VRML language significantly limits the level of object interaction
that is possible within a VRML scene.
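To make the conceivable workaround concrete, the following Java sketch tests two axis-aligned bounding boxes for overlap; running such a test each frame over pairs of moving objects is one way object-to-object collisions could be approximated outside the language. The class and field names are illustrative, not part of VRML or this project's code.

    // Axis-aligned bounding box, as might be derived from a VRML shape's
    // position and dimensions.
    class Bounds {
        float minX, minY, minZ;
        float maxX, maxY, maxZ;

        // Two boxes overlap only if their extents overlap on all three axes.
        boolean intersects(Bounds other) {
            return minX <= other.maxX && maxX >= other.minX
                && minY <= other.maxY && maxY >= other.minY
                && minZ <= other.maxZ && maxZ >= other.minZ;
        }
    }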
Despite these problems, VRML's balance of features, extensibility, and realism is adequate for
creating an AVE for the visually challenged. Because the visually challenged have limited visual
resolution and color distinction abilities, the emphasis of an AVE is not so much on details as on
the overall shape and location of doors, obstacles, and walls. In fact, early work with Goldring has
found that aesthetic details such as textures and decorations often only confuse the visually
challenged user, who has to interpret the mix of colors and patterns with only limited visual acuity.
This project requires a straightforward but extensible and portable modeling environment that
provides reasonably good accuracy in representing a locale, and VRML satisfies this
requirement.
2.3 Haptics
Haptics is a relatively new term used to describe the virtual sense of touch. It is derived from the
Greek haptesthai, "to grasp" or "to touch." The haptic modality introduces the ability to sense force
feedback and to feel objects in a VE. At a coarse level, haptic interface devices allow users to feel
obstacles and large shapes. At a finer level, these devices can provide the ability to feel the
textures and compliance of objects. While technologies for haptic devices have been available for
some time, they have found their way into relatively few VEs. A major reason for the delay in
incorporation in VEs may be that commercial haptic interface devices have been introduced only
within the past decade. Sensable Technologies, Inc., the maker of the PHANToM, wasn't
incorporated until 1993 [9]; Immersion Technologies, the developer of the haptic feedback
technology found in Logitech's 3-D mice and joysticks, was also founded that year [10].
At the operational level, haptic interface devices are both sensors and effectors. They provide
applications with information about their present location and orientation, and applications may
call on them to simulate forces, objects, or other special effects. While all haptic devices share
the property of being able to provide users with a sense of touch, their implementation and their
uses differ greatly. A force feedback joystick, for example, provides only a few degrees of force
feedback, indirectly corresponding to specific situations in a computer game environment. The
PHANToM, on the other hand, is a much more dynamic haptic interface device providing six
degrees of freedom (DOF) in movement and three degrees of force feedback. A six-degree
feedback model has recently become available, allowing force feedback along a ray rather than
only at a single point.
2.4 Previous works
Some research has already been done on the application of multi-modal interfaces in aiding
visually challenged people. These include a series of projects undertaken by the Human-Computer Interaction group at the University of York [11]. Although their research projects do not
concentrate on the use of the haptic modality, their argument for the use of multi-modal interfaces
should be noted. They point out that only 10% of visually challenged people are capable of using
Braille, citing the high cost of learning Braille as a barrier for most visually challenged people.
Thus any Braille-based system, or similar efforts to help the visually challenged, would benefit
only a minority. Furthermore, they note that researchers without intimate knowledge of the
disability are doing most of the research, causing many of them to make incorrect
assumptions about a disability or the disabled person. A multi-modal interface would make the
interface accessible to the visually challenged by providing an intuitive way of interacting that
requires relatively little learning time investment. While the York researchers can only suggest
caution when making assumptions about disabled people, the present project benefits from the
direct input of Goldring to help set requirements for the AVE that are appropriate for the visually
challenged.
Lake Porter and Jutta Treviranus, from the Adaptive Technology Resource Centre at the
University of Toronto, wrote a paper discussing the feasibility and benefits of adding haptics to
VEs to aid those with disabilities[12]. In it, they describe important considerations in the
development of the haptic sense in a VE. Key among these considerations is the concept of
manipulation range. They present the idea only in the context of a VE without the visual modality, so
that the haptic scene needs to avoid "haptic voids" by having objects within the manipulation
range at all times. Otherwise, users might lose track of their position and orientation. Because the
AVE in the present project has the SLO as a visual effector, the concept of manipulation range
can be extended to speed up haptic rendering and to filter out unnecessary haptic objects that
might confuse the user.
In his MIT Master's thesis [13], Evan Wies proposed a standard for the incorporation of haptic
descriptions in VRML nodes, introducing new fields to describe the compliance, mass, and other
haptic properties. These additions may be made to the VRML specification with little
effect on the implementation of current VRML browsers, which can simply ignore the extra
description data, while specialized haptic browsers can take advantage of these descriptions to
render a haptic scene along with the graphical scene. This approach has the advantage of
providing the creator of the VRML scenes a chance to specify the exact haptic properties of
objects in the world, resulting in highly accurate haptic scenes. The drawback of such an
approach is the extra work necessary to fully describe haptic objects. Many developers may
choose not to provide haptic descriptions in order to save authoring time. Thus, visually
challenged people would not be able to access these VRML worlds. Another drawback is the
need for standardization. The Web3D consortium has not yet made plans to adopt a haptic
standard, and even after a working group is formed, a standard will not emerge for several years.
British Telecom (BT) Labs has a project in progress that is trying to provide visually challenged
people with tactile access to VRML worlds available on the web [14]. In the BT Labs paper, the
researchers described two approaches they considered in adding the haptic modality to VEs. The
first is a modified VRML file that would support haptic functions, an approach similar to Wies's
thesis. The second is a modified VRML browser that provides haptic feedback without the benefit
of haptic descriptions. The BT Labs researchers ultimately chose to modify the browser, justifying
their choice by noting that most VRML worlds will be written with only visual display in mind.
Thus, to provide the visually challenged with access to as many VRML worlds as possible,
haptics should be implemented at the browser level. The AVE project described by this thesis
takes this approach.
3. Design and implementation
This section presents detailed requirements for the AVE and the haptic browser, as well as the
rationale behind these requirements. It also describes this project's current implementation of the
AVE and haptic browser, discussing some of the issues that have presented themselves and the
decisions made to address these issues.
3.1 Design requirements
The primary objective of the AVE package is to create an interactive virtual environment that
allows visually challenged people to pre-visit a building or similar architectural environment before
they actually set foot in the location. By allowing them to familiarize themselves with the
environment beforehand, they may better navigate through the environment when they arrive in
person. To achieve this objective, the AVE package is required to have an intuitive interface,
provide an accurate representation of the real world scene, and be easily distributed to users.
Finally, to accomplish these goals, additional requirements are also imposed on the haptic
browser architecture, including modularity, extensibility, and portability. Performance is a
concern, but not a requirement of the haptic browser. Because the project is still in the prototype
stage, more emphasis is placed on design than on speed. Furthermore, some performance
bottlenecks, such as the VRML browser, cannot be overcome by improvements in the haptic
browser code. Finally, the rapid improvement of computer hardware is making many performance
issues moot, especially because the haptic browser will not be run concurrently with other
programs.
The intuitiveness requirement addresses the issue of accessibility for the visually challenged user.
Specifically, an intuitive interface should allow a user to make use of the AVE after a brief period
of learning and instruction. While intuitiveness is a subjective property, this project makes the
assumption that a multi-modal interactive interface will provide a more accessible interface than
traditional approaches such as Braille maps or audio descriptions. Thus, the intuitiveness
requirement may be recast as a requirement that multiple modalities and interactions be
incorporated within the virtual environment.

The accuracy requirement relates directly to the effectiveness of the AVE in helping the visually
challenged familiarize themselves with an environment. The location, size, and shapes of objects
in the AVE must correspond to those of the real-world objects in such a way that the visually
challenged can learn about the real environment through the AVE, independent of how they
choose to make use of the virtual environment. For example, if a visually challenged user
depends primarily on haptic cues to learn a new environment, the AVE must provide sufficient
haptic accuracy in the scene to allow the user to grow familiar with the environment.

The ease-of-distribution requirement is actually one of availability. While the hardware
components of the AVE must be supplied by the users or operating sites, the software portions of
the AVE must be easily shared among visually challenged users. An extension of this
requirement is the ease of creating virtual environments. Because the hope of the project is to
have entire libraries of AVEs available to users through the Internet, the modeling platform must
be one that many people have access to and are willing to create VEs with.

Finally, modularity, extensibility, and portability are general engineering requirements for a good
design. In the case of the haptic browser, however, they are particularly important. In order for the
haptic browser to be easily shared among users, it must be able to function on multiple platforms.
If the haptic browser is not modular, extensible, and portable, it will only function on a single
platform, and any effort to port the browser would require considerable resources.

3.2 Implementation details

This project's implementation of an AVE uses a variety of hardware and software, all of which are
commercially available. The SLO and PHANToM function as the visual and haptic effectors,
respectively. VRML serves as the modeling platform, while Java was chosen as the development
environment for the haptic browser. The haptic browser is responsible for coordinating the haptic
and graphic scenes. A virtual walking stick created by the haptic browser provides the
user with a link between the position of the PHANToM in the haptic scene and the corresponding
position in the graphic scene. Further details about the hardware, software, haptic browser, and
VE navigation are discussed in the following subsections.

3.2.1 Hardware

Figure 3-1. The scanning laser ophthalmoscope.

The SLO (Model 101) serves as the visual effector for the AVE (see fig. 3-1). The current unit
used is on loan from the Canon Corporation to the Center for Advanced Visual Studies at the
Massachusetts Institute of Technology. Video input to the SLO is in NTSC format, but at a
different sync rate than normal. Thus, to actually display the graphic scene through the SLO, the
computer's VGA output signal is first converted to an NTSC video signal and then passed through a
time-base corrector unit. Newer versions of the SLO are capable of accepting direct VGA input.

The haptic interface device chosen for this project is the PHANToM (Model 1.5) developed by
Thomas Massie of MIT and Sensable Technologies. This version of the PHANToM has a
19.1 cm x 26.7 cm x 38.1 cm (7.5" x 10.5" x 15") workspace and a choice of a thimble or stylus
for an end effector (see fig. 3-2). For this project, a stylus was chosen to provide a scaled
physical representation of the walking stick used in the haptic browser as part of the interface (see
fig. 3-3). The PHANToM was chosen for its ability to provide full six degrees of freedom (DOF) in
movement. Furthermore, Sensable Technologies provides a haptics API for the PHANToM, named
GHOST, that is capable of translating VRML objects directly into haptic objects for haptic
rendering.

Figure 3-2. Workspace of the PHANToM 1.5 (note: figure not drawn to scale).

Figure 3-3. PHANToM 1.5 with stylus.

One limitation of the PHANToM 1.5 is that it has only three degrees of feedback. Although the
device has six DOF of movement, it can only simulate force at a single point. Thus, only the tip of
the stick has the ability to "feel," while objects may pass right through the middle of the stick. This
limitation is attenuated by the fact that most objects in the architectural space, such as doors and
walls, are much larger than the stick itself, so only in rare situations will the stick have the
opportunity to pass through an object without making contact with the tip. Recently, a six-degree
feedback model of the PHANToM has been developed that can remove this limitation completely.

3.2.2 Software

As previously noted, the modeling platform for the virtual environment is VRML. The VEs are
created using CosmoWorld, a software product that has since been discontinued. Because VRML
is an open standard, any up-to-date VRML scene editor is capable of modifying or creating VEs
for the AVE package, so the obsolescence of the modeling software has no effect on the
long-term viability of the AVE.
Most of the haptic browser is coded in Java with JDK 1.2. The only exceptions are the VRML
browser and the GHOST API. Java was chosen as the software development environment for the
project because VRML provides internal support for Java and JavaScript through the Script
node [8], and because the VRML browser used for this project, CosmoPlayer, supports EAI
through Java bindings. Furthermore, Java is an open and object-oriented platform that can run on
multiple platforms through a Java Virtual Machine (JVM). The drawback of using Java, however,
is the poor performance of current JVMs in the Windows environment, although advances in
personal computers as well as more efficient implementations of JVMs should remedy this
problem in time.
3.2.3 Haptic browser
Originally, the haptic browser was to be a mostly self-contained Java application, using GHOST
only to provide driver level access to the PHANToM. While this would have been a tedious task, a
ground-up browser could be customized specifically for use by visually challenged people by
balancing graphic and haptic detail against system performance. Initial attempts to use this
approach, however, proved to be more difficult than expected. Writing a completely new browser
introduced graphic rendering issues outside the scope of this project. Furthermore, each addition
of new code tended to introduce new errors, resulting in many more bugs in the haptic
browser. Finally, by placing the implementation of both the graphic component and haptic
component in the same application, the browser becomes less robust and unable to adapt to
changes in the graphic and haptic technology landscape. The haptic browser would not be able to
take advantage of new graphic technologies without direct modification to the source code itself.
The same is true for new haptic technologies.
Figure 3-4. Haptic browser component diagram. (The haptic component, graphic component, and
any future components each interact directly with the virtual environment and coordinate with one
another through message/event passing.)
The current implementation of the haptic browser separates the haptic component from the
graphic component. The two components are connected only by the virtual environment and a
messaging protocol used to maintain scene consistency (see fig. 3-4). Currently, the graphic
component of the haptic browser consists of a graphic applet and CosmoPlayer 2.1, a popular
VRML browser. Because CosmoPlayer is ultimately responsible for visually displaying the VRML
scene, it can be viewed as the graphic renderer. The graphic applet is responsible for inserting
the walking stick into the graphic scene and for ensuring consistency with the haptic scene. It
accomplishes this task through EAI. The haptic component of the haptic browser consists of a
haptic applet built on top of the GHOST API. The haptic renderer is merely the Java wrapper
class that provides access to GHOST. GHOST is ultimately responsible for interfacing with the
PHANToM to create the illusion of objects in space. Unlike CosmoPlayer, however, it is not a
standalone application in and of itself; instead, it provides the haptic applet with object-level
access to the PHANToM and the haptic scene. Because the GHOST SDK consists of C++
libraries, the haptic applet must access the methods and functions through the Java Native
Interface (JNI). Figure 3-5 provides a module diagram of the haptic browser.
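Because GHOST is a C++ library, the haptic renderer wrapper follows the usual JNI pattern. The sketch below is a plausible shape for such a wrapper; the thesis does not list the actual interface, so the method names and the native library name are assumptions.

    // Hypothetical Java wrapper giving the haptic applet access to the GHOST
    // C++ libraries through the Java Native Interface (JNI).
    public class HapticRenderer {
        static {
            // Load the native bridge library (name is illustrative).
            System.loadLibrary("ghostbridge");
        }

        // Native methods implemented in C++ against the GHOST API.
        public native void createScene();
        public native int addCube(float x, float y, float z,
                                  float width, float height, float depth);
        public native void removeShape(int shapeId);
        public native float[] getStylusPosition();
        public native boolean getStylusSwitch();
    }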
Figure 3-5. Haptic browser module diagram. (The graphic applet reaches the VRML browser
through the External Authoring Interface (EAI), while the haptic applet reaches the haptic device
API through the haptic renderer and the Java Native Interface (JNI); both sides draw on the same
VRML file.)
In addition to rendering the haptic scene through GHOST, the haptic applet is also responsible for
translating VRML objects into haptic objects. At the initial stages of the project, the translation
was to be done at the level of collisions, where the haptic applet would generate a simple collision
response whenever the walking stick hit an object in the virtual space. Such an implementation
abstracted out all the details about the object and would require only information about the
location and surface-normal vector of the surface. If VRML supported object-to-object collisions,
this method of haptic rendering would be trivial to implement. Collision events between the
walking stick and objects in the VRML scene could be caught by the graphic applet and sent to
the haptic applet for processing. Even without object-to-object collision, this method may be
implemented through a function of the stick's location and the location of all objects in the VRML
world, similar to a basic collision-detection algorithm. Thus, with the aid of advanced algorithms,
collision-level haptic rendering is probably one of the more efficient methods of providing haptic
feedback. It requires little or no memory on the part of the haptic applet, and needs only to query
the VRML browser for information about the current state of the world and to produce the
appropriate haptic feedback on a collision. Animations and dynamic VRML scenes are
automatically supported in this approach because any changes in the state of the VRML scene are
automatically relayed to the haptic applet when it queries for the state of the world.
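As a sketch of what collision-level rendering involves, the fragment below computes a simple penalty force at the stick tip from its penetration into a planar surface, a common point-based haptics model; the stiffness constant and all names are illustrative rather than taken from the project code.

    // Penalty-based point force: push the stick tip out of a surface along the
    // surface normal, proportional to penetration depth (Hooke's law).
    static float[] contactForce(float[] tip, float[] pointOnPlane,
                                float[] normal, float stiffness) {
        // Signed distance from the tip to the plane (normal assumed unit length).
        float d = (tip[0] - pointOnPlane[0]) * normal[0]
                + (tip[1] - pointOnPlane[1]) * normal[1]
                + (tip[2] - pointOnPlane[2]) * normal[2];
        if (d >= 0) {
            return new float[] { 0f, 0f, 0f };   // No penetration, no force.
        }
        float magnitude = -d * stiffness;        // Force grows with depth.
        return new float[] { normal[0] * magnitude,
                             normal[1] * magnitude,
                             normal[2] * magnitude };
    }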
The drawback of collision-level feedback is the intractability of the mathematics when six DOF
haptic feedback is required. In such a case, the paradigm shifts from point-based to ray-based
force feedback. Rather than calculating the force for a single point, an infinite number of points
along the walking stick may now touch multiple objects in a haptic scene, introducing a separate
set of rotation mechanics problems that must be solved. While the implementation of a
six-degree-of-feedback AVE is still possible, the amount of work necessary to perform this upgrade
is far from trivial. Another drawback of collision-level feedback is the difficulty of supporting future
VRML models that may include haptic properties for objects. Because many haptic properties are
functions of the entire object, these properties cannot be simulated without knowledge of the
object itself. So, a collision-level feedback renderer must violate the collision-level abstraction and
query the VRML file for details about the object in order to simulate its haptic properties.
The current implementation of the haptic applet performs object-level force feedback. This
approach is complemented by high level GHOST classes that describe specific haptic shapes
such as cylinders, cones, spheres, and cubes. The haptic applet renders a haptic scene by
placing haptic objects at positions corresponding to the positions of their counterparts in
the visual scene. This approach requires the haptic renderer to be fully aware of
the shape and properties of all objects in the virtual world and to store these objects and their
properties in memory for efficiency. Because the graphic component of the haptic browser also
stores the AVE objects and properties in memory, the copy in the haptic applet may be seen as
redundant. However, the extra memory usage is readily justified by the ability to implement haptic
properties and six degrees of haptic feedback.
In addition to object-level haptic rendering, the haptic applet also improves performance by
rendering only those objects within manipulation range. In the physical world, a person cannot
possibly reach all the objects in a given environment. Similarly, a user will not be able to touch all
the objects in the AVE. Furthermore, if the entire environment were mapped to the PHANToM's
limited workspace, the scale factor between the haptic scene and the graphic scene would be too
large to maintain a reasonable correspondence. Thus, the haptic applet currently uses GHOST to
render objects only within a given manipulation range of the user, so that at any given point in
time only a haptic frame of the entire virtual scene is rendered. To facilitate fast haptic frames, the
boundary extremes of each object are also stored in memory. Each time the user moves, the
haptic applet searches through its stored table of objects and renders only those objects with
boundaries that intersect the manipulation space.
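A minimal sketch of this per-movement search, assuming each table entry stores the object's boundary extremes; all names are illustrative:

    import java.util.ArrayList;
    import java.util.List;

    // One entry of the stored object table: the object's boundary extremes.
    class SceneObject {
        float[] min = new float[3];
        float[] max = new float[3];
    }

    // Rebuild the haptic frame: keep only objects whose boundaries intersect
    // the box-shaped manipulation space centered on the user's position.
    class FrameBuilder {
        static List<SceneObject> hapticFrame(List<SceneObject> table,
                                             float[] user, float range) {
            List<SceneObject> frame = new ArrayList<SceneObject>();
            for (SceneObject obj : table) {        // Linear search over the table.
                boolean overlaps = true;
                for (int axis = 0; axis < 3; axis++) {
                    if (obj.min[axis] > user[axis] + range
                            || obj.max[axis] < user[axis] - range) {
                        overlaps = false;          // Separated on this axis.
                        break;
                    }
                }
                if (overlaps) {
                    frame.add(obj);                // Within manipulation range.
                }
            }
            return frame;
        }
    }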
3.2.4 Navigation
The choice of navigation scheme is vital to the intuitiveness of the AVE interface. Bowman et al. at
the Georgia Institute of Technology have studied several navigation techniques in a virtual
environment [15], although only with interfaces that have separate controls for view orientation
and movement direction. To avoid the use of multiple input devices, this project uses the
PHANToM to provide a form of gaze-directed navigation mechanism. Users may only move
forward or backward in the direction they are facing, or turn in place. No strafing ability is
provided. Because the PHANToM is also used as the effector, a conflict in the interface arises.
Assuming that all movement in the virtual environment is performed on a plane, two degrees of
freedom in movement are required to address the full range of motion. The PHANToM currently
provides six DOF in movement, but all six degrees are being used for the purpose of sensing
objects in the haptic scene. Thus, movement through the VE remains unaddressed. The simplest
way to increase the addressable space of the PHANToM is to create separate modes of
operation for navigation and for sensing. By creating two operating modes, the PHANToM can
now double its space of addressable movement. The problem now becomes one of how to
optimally switch between the two modes.
This project considered two methods of specifying mode changes: implicit and explicit. With
implicit switching, the haptic browser initiates the mode switch. The browser predicts the user's
intentions and switches between modes without requiring a direct effort on the user's part. A
simple implementation of implicit mode switching is a mode switch boundary in the PHANToM
workspace. If the user leaves this sensing-mode box, such as by reaching far ahead or to the
side, the browser can assume that the user means to navigate the environment rather than to feel
an object in the current haptic scene. Accordingly, the browser automatically switches to
navigation mode, allowing the user to move about the AVE. In this implementation, only the
space inside the sensing-mode box is mapped to the manipulation space of the haptic browser.
Movements outside of the box will not move the walking stick in the VRML browser nor will there
be haptic feedback.
Explicit switching, on the other hand, requires the user to initiate the mode switch. It also requires
a mechanism that allows users to switch modes. The PHANToM provides such a mechanism in
the form of a push-switch on the stylus. The switch returns a Boolean value in software that
directs the haptic browser to switch modes. Explicit switching is simpler to implement than implicit
switching, but it requires the user to learn an action that has no direct correspondence in real life
or in the virtual scene. This may make the interface more difficult for users to adjust to.
Nonetheless, the current haptic browser uses explicit mode switching to change between modes.
This choice is justified by the lack of information necessary to implement truly transparent implicit
switching. What one user does to imply a desire to move may differ from what another user does.
Using explicit mode switching makes this distinction clear and avoids any confusion.
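A sketch of the switching logic discussed above, in Java; it implements the explicit stylus push-switch toggle and shows, commented out, where an implicit sensing-mode box test would fit. The class and method names are illustrative.

    // Operating modes of the browser: SENSING maps the stylus to the walking
    // stick; NAVIGATION moves the avatar through the environment.
    enum Mode { SENSING, NAVIGATION }

    class ModeSwitcher {
        private Mode mode = Mode.SENSING;
        private boolean lastSwitch = false;
        private final float boxHalfWidth;   // Extent of the sensing-mode box.

        ModeSwitcher(float boxHalfWidth) {
            this.boxHalfWidth = boxHalfWidth;
        }

        // Called once per update with the stylus state read from the device.
        Mode update(float[] stylusPos, boolean switchPressed) {
            // Explicit: toggle on the rising edge of the stylus push-switch.
            if (switchPressed && !lastSwitch) {
                mode = (mode == Mode.SENSING) ? Mode.NAVIGATION : Mode.SENSING;
            }
            lastSwitch = switchPressed;
            // Implicit alternative (not used by the current browser): leaving
            // the sensing-mode box would imply an intent to navigate.
            // boolean outsideBox = Math.abs(stylusPos[0]) > boxHalfWidth
            //                   || Math.abs(stylusPos[2]) > boxHalfWidth;
            // if (mode == Mode.SENSING && outsideBox) mode = Mode.NAVIGATION;
            return mode;
        }
    }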
4. Limitations
This section discusses some of the drawbacks of the current haptic browser as a means of
accessing the virtual environment and some of its problems. While the design is believed to be
generally sound, many vexing issues specific to the browser's implementation remain to be
addressed. Methods of resolving many of these issues are discussed in the section on future work.
4.1 Collisions
Collisions are the cause of most of the problems in the current implementation of the haptic
browser and for VEs in general. While simple collision detection is a matter of routine algorithms,
the haptic browser needs to coordinate detection in two separate rendering components.
Furthermore, it also needs to account for a walking stick as a movable part of the avatar that can
collide with objects in the virtual environment. The interaction of all these parts creates many new
issues in collision detection and object interaction in the virtual environment.
4.1.1 Walking stick
The walking stick creates collision problems in two ways. The first is the result of the hardware
limitation of the PHANToM 1.5 used in this project. Because the device only has three degrees of
force feedback, it can only sense objects on a single point represented by the tip of the stylus.
Translated into the graphic scene, this implies that only the tip of the walking stick can touch and
collide with objects. The large size of walls, doors, and other objects tends to obscure this defect,
but the problem can become very apparent in corners and when trying to make contact with
smaller objects. When trying to use the walking stick to feel a convex corner, the body of the stick
may appear to pass through the wall visible to the user, while the PHANToM provides feedback
of the wall around the corner that the user cannot see. Needless to say, this proves to be very
confusing. Because a hardware limitation is the cause of the problem, any effort to modify the
haptic browser to accommodate it would be futile. The only solution to this problem is upgrading
to a six-degree-of-feedback device.
The second collision problem related to the walking stick occurs during mode switching. When
the user switches to navigation mode, the walking stick may be left in the last known location or
reset to the origin position. In either case, it may be possible to approach an object in the AVE in
a manner such that the walking stick will pass through the object. Because the stick is an addition
to the avatar implemented by the graphic applet through EAI, commercial browsers do not include
the stick in their native collision detection for the avatar. Furthermore, because VRML does not
support object-to-object collision, the walking stick may pass through any object without sending
any events to notify the applet of a collision. While in the sensing mode, the constraints of the
haptic scene prevent the user from pushing the stylus, and thus the walking stick, through an
object. But while in navigation mode, the PHANToM is purely being used for movement and no
haptic feedback prevents the user from pushing the walking stick into an object. The user can
move straight up to a wall and switch back to haptic mode, and the stick will end up on the other
side. Fortunately, this problem can be solved through the addition of a separate collision-detection
function for the walking stick. The current haptic browser does not yet support this due to the
large processing overhead required.
4.1.2 Viewpoint and avatar
In VRML, the current field of view is represented by a viewpoint object bound to the avatar. A
movement in the avatar is matched with a corresponding movement in the viewpoint object.
When using CosmoPlayer's native interface to move around the virtual world, the
correspondence is automatically maintained. However, EAI does not provide the current graphic
applet with access to the avatar. Instead, EAI gives the applet access to the viewpoint object, and
the applet must move the user by moving the viewpoint. Because the viewpoint is bound to the
avatar, CosmoPlayer automatically updates the avatar position with a corresponding movement.
A problem arises when the avatar collides with an object in the virtual world. Because the current
haptic browser doesn't perform any collision detection while in navigation mode, it is possible for
the viewpoint position to move through an object. But, when CosmoPlayer attempts to move the
avatar position to match, its collision detection mechanisms inhibit the update. As a result, the
viewpoint and avatar position become skewed with respect to each other. Because both the
walking stick and the haptic coordinate frames are based on the position of the viewpoint object,
the haptic scene will no longer correspond to the graphic scene. The result after several collisions
is an increasingly large discrepancy between the two scenes that will eventually become
noticeable to the user. A simple solution to this problem is to implement a separate
collision-detection mechanism for the viewpoint in addition to the one internal to CosmoPlayer. While
redundant, it will ensure scene consistency after collisions.
4.1.3 Elevation changes
To simulate real walking, the avatar is capable of climbing small changes in elevation. The height
of objects the user may climb is a property that may be set in the VRML file, and the VRML
browser automatically accounts for this property in its internal collision detection mechanisms by
adjusting the graphic scene accordingly. This adjustment in height, however, is not accompanied
by any signals or events. Thus, it remains transparent to the graphic applet. This means that the
haptic scene will be rendered assuming the viewpoint is at the original elevation, while the elevation of
the graphic scene will have been shifted to account for "stepping up." To resolve this problem,
collision detection will have to be implemented not only for the viewpoint object, but also for the
normal line segment connecting the viewpoint and the ground.
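One way to implement the extra check is a vertical probe from the viewpoint down to the floor, reusing the SceneObject table from the earlier sketch; the haptic frame's elevation can then be shifted by the returned height. The names are illustrative.

    import java.util.List;

    class GroundProbe {
        // Find the height of the highest surface directly beneath the viewpoint,
        // so the haptic scene can be shifted to match the "stepping up" that the
        // VRML browser performs silently.
        static float floorHeightBelow(List<SceneObject> frame, float[] viewpoint) {
            float floor = 0f;                      // Default ground plane.
            for (SceneObject obj : frame) {
                boolean underfoot =
                       viewpoint[0] >= obj.min[0] && viewpoint[0] <= obj.max[0]
                    && viewpoint[2] >= obj.min[2] && viewpoint[2] <= obj.max[2];
                // Keep the highest top face that is still below the viewpoint.
                if (underfoot && obj.max[1] <= viewpoint[1] && obj.max[1] > floor) {
                    floor = obj.max[1];
                }
            }
            return floor;  // Raise the haptic frame origin to this height.
        }
    }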
4.2 Performance issues
The haptic browser makes use of several technologies considered to be relatively slow and
inefficient compared to their peers. Java, for example, is slowed down by an extra layer of
software, the JVM, to provide multi-platform capability. While VRML is a good modeling platform
for easy distribution, most VRML browsers achieve dismal performance compared to the 3D
graphics used in common entertainment programs today. The PHANToM itself requires at least a
300 MHz Pentium machine in order to provide adequate haptic feedback. Since all these
performance issues are caused by the technologies on which the haptic browser is built, they will
have to be resolved by improvements in the technologies themselves. There are, however,
separate performance issues specific to the implementation of the current haptic browser,
particularly in the areas of searching and messaging.
4.2.1 Searches
Currently, the haptic renderer performs a linear search to locate objects within manipulation
range. Because this is the only search performed by the current implementation of the haptic
browser, a linear search algorithm is sufficiently fast, given adequate processing hardware.
However, as new collision detection mechanisms are added to improve the haptic browser in the
future, the haptic browser will need to perform many more searches initiated on every user
movement in the AVE. In such a case, it may be wise to consider more efficient search
schemes, such as a binary search over sorted object boundaries or a spatial partitioning of the
object table, to help reduce the latency between haptic frames.
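For instance, if the object table is kept sorted by each object's minimum x boundary, a binary search can cut off every object that begins beyond the right edge of the manipulation range, leaving a much smaller candidate set for the full boundary test. A sketch under those assumptions:

    class SortedSearch {
        // sortedMinX holds each object's minimum x boundary, sorted ascending.
        // Returns the number of leading entries that could still intersect the
        // manipulation range; objects at or past this index start too far right.
        static int candidateCount(float[] sortedMinX, float userX, float range) {
            int lo = 0, hi = sortedMinX.length;
            float rightEdge = userX + range;
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (sortedMinX[mid] <= rightEdge) {
                    lo = mid + 1;   // May intersect; search further right.
                } else {
                    hi = mid;       // Starts past the range; exclude it.
                }
            }
            return lo;  // Candidates are indices [0, lo); test those fully.
        }
    }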
4.2.2 Messaging
Due to security issues in the Netscape Java Runtime Environment (JRE), it is very difficult to use
native code in the manner necessary for this project. Instead, the haptic applet must run in a Java
plug-in that uses the JRE on the system. Unfortunately, the Java plug-in does not yet support the
necessary Java to Javascript communication to provide direct EAI support, and so the graphic
applet must run in the Netscape JRE. Due to the separate memory space of the applets,
messages cannot be passed directly in memory. As a workaround, messages are currently being
passed through the file system. This results in significantly slower performance if the operating
system doesn't automatically cache disk accesses in memory. This problem is likely to be resolved
in future versions of Netscape and the Java Plug-in.
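A minimal sketch of the file-system workaround, with one applet overwriting a shared file with the latest user position and the other polling it; the file name, message format, and class name are illustrative, and a production version would need locking or an atomic rename to avoid reading partial writes.

    import java.io.*;

    // File-based message passing between the graphic and haptic applets, which
    // run under separate runtimes and share no memory.
    class PositionChannel {
        private final File file;

        PositionChannel(String path) {
            this.file = new File(path);
        }

        // Writer side: overwrite the file with the latest position message.
        void send(float x, float y, float z) throws IOException {
            PrintWriter out = new PrintWriter(new FileWriter(file));
            out.println(x + " " + y + " " + z);
            out.close();
        }

        // Reader side: poll the file and parse the most recent position.
        float[] receive() throws IOException {
            BufferedReader in = new BufferedReader(new FileReader(file));
            try {
                String line = in.readLine();
                if (line == null) {
                    return null;                   // No message yet.
                }
                String[] parts = line.trim().split("\\s+");
                return new float[] { Float.parseFloat(parts[0]),
                                     Float.parseFloat(parts[1]),
                                     Float.parseFloat(parts[2]) };
            } finally {
                in.close();
            }
        }
    }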
4.3 Dynamic environments
Currently, the haptic applet assumes that all objects in the AVE are stationary. Thus, all haptic
frames created by the haptic applet are static. If the AVE incorporates moving or animated
objects, the current haptic browser will not be able to match the haptic scene with the graphic
scene. The assumption of a static haptic frame is not an unreasonable one, given that an
architectural environment is generally static with fixed walls and doorways. However, some
details that are important to visually challenged people about an environment may be obscured
by this assumption. These details include such things as the direction that a door opens, or the
presence and direction of an escalator. These details may be scripted in VRML to reflect their
behavior in the graphic scene, but the current haptic applet does not render a dynamic haptic
representation for these objects.
The current haptic applet stores only those object details that are relevant for the creation of a
static haptic scene, such as shapes and dimensions. Details about animations or scripted
movements are ignored. Thus, when an animated object moves in the graphic scene, the object
remains stationary in the haptic scene. Even if the haptic applet stored animations with objects
and provided a mechanism to implement the animation in the haptic scene, the rate of movement
between the haptic scene and graphic scene may end up being very different. Because dynamic
environment support is important in creating a realistic AVE, this issue must eventually be
addressed. A possible solution to this problem is proposed in the future work section.
5. Evaluation
The AVE package is evaluated at both the technical level and the user level. At the
technical level, the haptic browser is examined to see if it meets the
requirements set forth at the outset of the project. At the user level, the entire AVE package has to
be evaluated operationally to see if it successfully fulfills its intended goal of aiding visually
challenged people in familiarizing themselves with a new architectural environment. While this
user level evaluation has not yet been done, a proposed approach to user level evaluation is
presented.
5.1 Technical evaluation
The original requirements of the project call for the AVE to be intuitive, accurate, and easily
distributed. In addition, the haptic browser must also be modular, extensible, and portable.
While the definition of intuitive is subjective, the haptic browser attempts to satisfy the
intuitiveness requirement by introducing a walking stick into the graphic scene. This stick helps
users quickly make a mental correlation between what they see in the graphic scene and what
they feel of the haptic scene. Furthermore, the implementation of the interface was done in
consultation with Goldring to ensure that no grossly invalid assumptions were made about the
needs of the visually challenged audience.
Assuming that the original VRML model is physically correct, the haptic browser meets the
accuracy requirement through messaging between the haptic and graphic components. Both the
haptic applet and the VRML browser parse information about objects in the AVE straight from the
original VRML model, thus the objects within a given scene are always correctly rendered with
respect to other objects in the scene. Messages passed between the graphic applet and the
haptic applet coordinate the user position so that the two scenes will correspond with each other.
With the exception of certain special cases involving collisions, the correspondence between the
two scenes is always maintained.
To meet the distribution requirement, VRML was chosen as the modeling platform and Java as
the development platform. By using VRML, an open standard for describing 3-D scenes on the
World Wide Web, the model can be distributed without fees or permissions. By using Java as the
development environment, the haptic browser is guaranteed to run on a wide range of platforms.
As long as the JVM is correctly implemented on the user's platform, the haptic browser can simply
be distributed with the model and run without any intervention on the part of the user.
Finally, the implementation of the haptic browser is modular, extensible, and portable. The haptic
browser is separated into haptic and graphical components, with each component built on top of
existing commercial products. The haptic component consists of a haptic applet using the Ghost
API, while the graphic component contains a graphic applet interfacing with CosmoPlayer. These
commercial modules are treated as black boxes; thus, the haptic browser has no dependency on
their implementation. As long as future upgrades to these modules continue to meet their
currently published specifications as a minimum, any changes made to these commercial
modules will be transparent to the haptic browser. Because the haptic browser is implemented in
modules and components, new functionality may be added to the browser simply by adding a
new module or component. Lastly, all of the code for the haptic browser is written in Java, and
thus the browser is portable to a number of platforms with little or no effort. Given the modularity,
extensibility, and portability of the haptic browser, it may be fair to say that the browser also
meets the robustness requirement, although this remains to be demonstrated in extensive use.
5.2 User evaluation
User evaluation of the AVE will require formal clinical trials that review results from multiple users.
In order to assess the success of the AVE package, the trials must test how well the AVE helps
users acclimate themselves to new environments and how accessible and intuitive the interface
to the AVE is in practice.
One method of testing the effectiveness of the AVE package in helping users familiarize
themselves with new environments would be a controlled experiment comparing the behavior and
mobility of visually challenged people who have used the AVE with those who have not. Objective
measurements of their progress through the environment could be recorded on video. Users
would also fill out a subjective evaluation of the AVE, answering questions about the AVE's
effectiveness in preparing them for a visit to the environment and in attenuating their anxieties.
The clinical trials might test the accessibility of the AVE by observing how quickly users learn to
navigate and interact with the virtual environment. A series of tasks in the virtual environment can
be given to users who will, without any instruction, try to use the interface to accomplish these
goals in the shortest amount of time. These times can be compared against learning times for
average adults in other tasks, and the intuitiveness of the interface may be extrapolated. A
subjective survey of the users about the interface may also contribute to the evaluation of the
interface.
6. Future work
Future work on the project can be divided into two categories. The first is work to improve the
haptic browser or to address problems in the current implementation. The second is work to
extend the AVE package to better aid the visually challenged. Work on the browser involves efforts
to correct current deficiencies including issues in collision detection, performance, and rendering
dynamic scenes. Work on the AVE package includes the addition of the audio modality and extra
features that may help the visually challenged familiarize themselves with the environment.
6.1 Haptic browser
The current implementation of the haptic browser depends on the intrinsic collision detection
mechanisms in CosmoPlayer and in the GHOST API to deal with occasions when contact occurs.
As previously mentioned, these collision detection mechanisms are not sufficient. To resolve
collision issues, two further collision detection routines should be added to future versions of the
haptic browser, one for the walking stick and another for the viewpoint object and the line
segment connecting it to the ground. These routines need only take advantage of the existing
table of objects and their boundaries in the haptic applet to determine if a collision has taken
place. They should run just before the creation of a new haptic frame so that the applet may void
the action that caused the collision and keep the positions of the walking stick and the haptic
scene consistent with those of the graphic scene. CosmoPlayer's internal collision detection
mechanism should automatically keep the graphic scene correct.
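A minimal sketch of such a pre-frame check might look like the following. The Bounds class,
the axis-aligned boxes, and the five-point sampling of the viewpoint-to-ground segment are
illustrative assumptions, not the thesis's actual data structures; if wouldCollide returns
true, the applet would void the pending action before the new haptic frame is created.

    // A pre-frame collision check over the haptic applet's table of
    // objects and their boundaries.
    class Bounds {
        double minX, minY, minZ, maxX, maxY, maxZ;
        boolean contains(double x, double y, double z) {
            return x >= minX && x <= maxX
                && y >= minY && y <= maxY
                && z >= minZ && z <= maxZ;
        }
    }

    class CollisionChecker {
        private final Bounds[] objectTable;  // boundaries kept by the haptic applet

        CollisionChecker(Bounds[] objectTable) { this.objectTable = objectTable; }

        // Returns true if the proposed walking-stick tip or the vertical
        // segment from the viewpoint to the ground would penetrate any
        // object; the segment is sampled at five points for simplicity.
        boolean wouldCollide(double[] stickTip, double[] viewpoint, double groundY) {
            for (Bounds b : objectTable) {
                if (b.contains(stickTip[0], stickTip[1], stickTip[2])) return true;
                for (int i = 0; i <= 4; i++) {
                    double y = viewpoint[1] + (groundY - viewpoint[1]) * i / 4.0;
                    if (b.contains(viewpoint[0], y, viewpoint[2])) return true;
                }
            }
            return false;
        }
    }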
When specifications for describing haptic properties in VRML become standardized, a future
version of the haptic browser may be extended to render objects with those haptic properties. To
do so, the haptic applet must be modified to extract the haptic properties of VRML objects at
initialization and store these properties with each object in the object table. When rendering the
object, the applet should translate these properties into API-level descriptions and add the haptic
properties at the point of rendering. This assumes that the haptic interface device and its API
support the reproduction of haptic properties. The PHANToM and GHOST do in fact support
multiple haptic properties including friction, mass, and compliance. Thus, the system can easily
be adapted to provide the haptic browser with haptic property support.
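As a sketch of the bookkeeping this would require, the object-table entry below pairs each VRML
node with its parsed haptic properties. The field names are assumptions, since no standard set
of VRML haptic properties exists yet; Bounds is the illustrative class from the collision sketch
above.

    // Haptic properties parsed once from the VRML file at initialization.
    class HapticProperties {
        double compliance;       // how much the surface gives under force
        double staticFriction;   // dimensionless coefficient
        double dynamicFriction;  // dimensionless coefficient
        double mass;             // for movable objects
    }

    // One entry in the haptic applet's object table.
    class SceneObject {
        String defName;            // the VRML node's DEF name
        Bounds bounds;             // extents, as in the collision sketch
        HapticProperties haptics;  // stored with the object at initialization

        // At render time, each stored property would be translated into
        // the corresponding GHOST API call on this object's haptic shape.
    }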
Finally, a future version of the haptic browser may be extended to handle dynamically changing
scenes. The current implementation requires the graphic and haptic components to coordinate
the positions of the avatar, viewpoint, and walking stick. In order to support dynamic
environments, each component must send an event to the other components upon any change in
the environment. In the case of a VRML-initiated animation, the activation of the animation results
in an event that may be detected by the graphic applet. Once the graphic applet is aware of the
presence of a dynamic object, it should repeatedly query the VRML browser for the latest position
of the object and relay the information to the haptic applet. The haptic applet should then make
the necessary adjustments to the haptic scene to maintain scene consistency until the end of the
animation. With a sufficiently high refresh rate, the user should not notice any discrepancies
between the haptic scene and the graphic scene. A similar approach may be used to address
changes in the AVE that are initiated in the haptic scene. In such a case, the haptic applet will be
responsible for repeatedly updating the graphic applet with the new position of the moving object,
and the graphic applet will make changes in the graphic scene accordingly. However, such
haptic-scene-initiated changes will not be possible until haptic properties are implemented,
since those properties give the haptic applet the information it needs about which objects can
be moved and how quickly.
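The polling relay for a VRML-initiated animation might be structured as follows. The two
interfaces are hypothetical stand-ins for the External Authoring Interface on the graphic side
and the haptic applet's update entry point, and the roughly 60 Hz poll rate is an assumption
about what "sufficiently high" means in practice.

    interface PositionSource {           // e.g. backed by the VRML EAI
        double[] currentPosition(String nodeName);
        boolean animationRunning(String nodeName);
    }

    interface HapticSceneUpdater {       // e.g. backed by the haptic applet
        void moveObject(String nodeName, double[] position);
    }

    class DynamicObjectRelay implements Runnable {
        private final PositionSource graphics;
        private final HapticSceneUpdater haptics;
        private final String nodeName;

        DynamicObjectRelay(PositionSource g, HapticSceneUpdater h, String node) {
            graphics = g; haptics = h; nodeName = node;
        }

        public void run() {
            // Poll until the animation ends, relaying each new position
            // so the haptic scene stays consistent with the graphic scene.
            while (graphics.animationRunning(nodeName)) {
                haptics.moveObject(nodeName, graphics.currentPosition(nodeName));
                try { Thread.sleep(16); }            // roughly 60 Hz
                catch (InterruptedException e) { return; }
            }
        }
    }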
6.2 AVE package
The current AVE supports both the visual and haptic modalities. Users may see and touch
objects in the virtual world they are traveling through. While this is adequate for describing the
key features of a building to the visually challenged, future versions of the AVE may also add
audio feedback to help reinforce the experience. One way to add audio to the AVE
is through VRML's sound node. VRML browsers that support the sound node should deliver a
reasonable directional sound output representing the origin of the sound source. Advanced VRML
browsers may even take advantage of the environmental audio hardware becoming available
today to deliver truly realistic environmental audio. If the implementation of audio feedback
through VRML is inadequate, the haptic browser is extensible enough that an audio component
may be added there with relative ease.
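If an audio component does need to be added to the browser, a minimal distance-attenuation
sketch using the standard Java sound API might look like the following. The component itself,
its wiring into the browser, and the attenuation curve are all assumptions, not part of the
current implementation.

    import javax.sound.sampled.*;
    import java.io.File;

    // Loops a sound at a fixed position in the scene and attenuates its
    // volume with distance from the avatar, as a crude stand-in for
    // environmental audio.
    class AudioComponent {
        private final Clip clip;
        private final FloatControl gain;
        private final double[] sourcePos;

        AudioComponent(File soundFile, double[] sourcePosition) throws Exception {
            sourcePos = sourcePosition;
            clip = AudioSystem.getClip();
            clip.open(AudioSystem.getAudioInputStream(soundFile));
            gain = (FloatControl) clip.getControl(FloatControl.Type.MASTER_GAIN);
            clip.loop(Clip.LOOP_CONTINUOUSLY);
        }

        // Called once per frame with the avatar's current position.
        void update(double[] avatarPos) {
            double dx = avatarPos[0] - sourcePos[0];
            double dy = avatarPos[1] - sourcePos[1];
            double dz = avatarPos[2] - sourcePos[2];
            double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
            // Fall off by 20 dB per tenfold distance, floored at -40 dB.
            float dB = (float) Math.max(-40.0, -20.0 * Math.log10(1.0 + dist));
            gain.setValue(dB);
        }
    }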
Future versions of the AVE may also provide the user with extra information upon request. For
example, visually challenged people often have trouble locating elevator buttons. The AVE may
be extended to allow users to request a close-up view of the button
arrangements of an elevator. These additional functions may be implemented by adding separate
components to the haptic browser to handle these requests. The results may be presented to the
user discreetly or in a manner that temporarily interrupts the current scene, depending on the
user's preferences.
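One way such requests might plug into the browser as separate components is sketched below;
the interface and the elevator example are illustrative, not part of the current implementation.

    // A hypothetical contract for on-demand detail components.
    interface DetailRequestHandler {
        boolean canHandle(String objectName);
        void present(String objectName, boolean interruptScene);
    }

    class ElevatorButtonHandler implements DetailRequestHandler {
        public boolean canHandle(String objectName) {
            return objectName.startsWith("Elevator");
        }
        public void present(String objectName, boolean interruptScene) {
            // Render a close-up of the button arrangement, either
            // alongside the scene (interruptScene == false) or by
            // temporarily replacing it, per the user's preference.
        }
    }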
7. Conclusion
Through the use of architectural virtual environments (AVEs), the visually challenged may never
have to face a completely unfamiliar setting again; however, creating an AVE to suit
the needs of the visually challenged is not a trivial task. The difficulty of the task does not stem
from a lack of technology for creating VEs; the myriad of VE applications today is a testament to
how VE technologies have come of age. Instead, the difficulty comes from making use of the
technology in a manner that is appropriate for visually challenged users. Most VE applications
today strive mainly for visual realism, but a VE for the visually challenged must emphasize
multi-modal feedback that makes up for their lack of visual acuity. Most developers intend their VE
applications to be sold as a commercial product on the market, but the developers of this project
feel that AVEs for the visually challenged should be made available with little or no expense and
easily distributed over the Internet. Finally, most current VE applications have interfaces designed
with a fully-sighted user in mind; the AVE must provide an interface that is accessible to the
visually challenged user. Otherwise, the investment of time needed to learn the interface will limit
the number of visually challenged people able to take advantage of the AVE.
To create an AVE appropriate for visually challenged users, this project has created an AVE
package that made use of a variety of existing technologies, including the SLO as a visual effector,
the PHANToM as a haptic interface, and VRML as the modeling platform. All of these
technologies are linked together by a haptic browser that coordinates a virtual scene described by
VRML with the haptic scene created by the PHANToM and with the visual scene displayed by the
SLO. The haptic browser itself makes use of existing modules. Its core is written in Java but uses
CosmoPlayer to provide graphic rendering capabilities and the GHOST API to provide haptic
rendering capabilities. To facilitate an intuitive interface, the haptic browser dynamically adds a
walking stick to the visual scene. This walking stick corresponds to the position and orientation of
the stylus on the PHANToM, allowing users to see a visual representation of their actions in the
AVE.
By incorporating both the visual and haptic modalities, and eventually the audio modality, the
AVE allows visually challenged users to reinforce the visual scenes that they see through the
SLO with corresponding feedback from other senses. Because the modeling platform is VRML
and the haptic browser is written mostly in Java, the AVE may be distributed to visually challenged
users with little cost, even if they use different computing platforms. Finally, the use of a walking
stick to help the user understand the correspondence between their physical movement in the
real world and their movement in the virtual scene helps provide an intuitive interface that makes
the AVE more accessible to users. While a final version of the AVE package is still some distance
away, the current implementation is robust and extensible enough that improvements and
additions can be made with relatively little effort. The prototype created in this project has shown
that an AVE for the visually challenged is not only possible, but well on its way to becoming a
reality. Hopefully, this project will culminate in an AVE package that allows visually
challenged people to navigate new settings and environments without dread or fear, an ability
that fully-sighted people too often take for granted.