Integrate the BlindAid System in a Traditional Orientation and Mobility Rehabilitation Program

Orly Lahav (1, 2), David W. Schloerb (2), Mandayam A. Srinivasan (2)

(1) School of Education, and The Porter School of Environmental Studies, Tel Aviv University, Tel Aviv, Israel; lahavo@post.tau.ac.il
(2) The Touch Lab, RLE, MIT, Cambridge, MA
Abstract— In the process of becoming blind, newly-blinded people participate in a rehabilitation program that covers the different skills a newly-blinded person needs to acquire as a result of his or her loss of vision. The BlindAid virtual system involves active collaboration between orientation and mobility instructors from the Carroll Center for the Blind in Newton, Massachusetts, and engineers and cognitive scientists at the MIT Touch Lab in Cambridge, Massachusetts. The two teams collaborated on the integration of the BlindAid system into the traditional orientation and mobility rehabilitation program. In this paper we describe the model of this integration. The three main goals of this model were to study: (1) the cognitive mapping process of the newly blinded when using the virtual environment; (2) the mental support the BlindAid system provides to the newly blinded; and (3) the enhancement of the BlindAid system for the orientation and mobility instructors. The findings supply strong evidence that interaction with the BlindAid system by people who are newly blinded provides a robust foundation for the participants' development of comprehensive cognitive maps of actual unknown spaces during their rehabilitation program.

Keywords- Blind; Cognitive processing; Rehabilitation; Orientation; Mobility; Virtual simulation; Haptic devices

I. INTRODUCTION
People in the process of becoming blind face great difficulties in performing basic daily skills. For most of them, walking in an unknown environment can be unpleasant and uncomfortable. The traditional orientation and mobility (O&M) rehabilitation program supports the acquisition of spatial mapping and orientation skills by supplying two main types of information: perceptual and conceptual. The perceptual level is based on the work of Amendola [1], the pioneer in the field of sensory training [2, 3]. Sensory training is based on systematically collecting information through all of the senses from the immediate environment. The three primary senses used in restoring orientation function are audio, smell, and touch; kinesthesia is also used for restoring orientation function. At the conceptual level, the focus is on supporting the development of appropriate strategies and orientation problem solving for efficient cognitive mapping of a space and applying it during navigation.

Over the years, information technologies have been developed to help people who are blind build cognitive maps and explore real physical spaces. There are two types of O&M aids: (1) pre-planning aids that provide the user with information before his or her arrival at the environment, and (2) in-situ planning aids that provide the user with information about the environment in situ. Pre-planning aids include verbal descriptions of the space, tactile maps, strip maps, physical models, and Talking Tactile Tablets (TTT) that use tactile maps [4-7]. Over the past few years, the in-situ planning aids that have been developed include Sonicguide [8], Kaspa [9], Talking Signs, sensors embedded in the environment [10], audio beacons activated through cell-phone technology [11], and personal guidance systems based on satellite communication [12]. Global Positioning System (GPS) based devices include StreetTalk from Freedom Scientific, Trekker from HumanWare, and the BrailleNote GPS from the Sendero Group. However, there are a number of limitations in the use of these pre-planning and in-situ planning devices. For example, the limited dimensions of tactile maps and models may result in poor resolution of the provided spatial information (e.g., a lack of precise topographical features or of accurate dimensions and locations for structural objects) and in difficulties in producing and acquiring updated spatial information. As a result of these limitations, people who are blind are less likely to use pre-planning devices in everyday life. The major limitation of the in-situ planning devices is that they can be used only in the explored space.

Over the past few years, the use of virtual reality in domains such as simulation-based training, gaming, and entertainment has been on the rise. It has also been used in rehabilitation and learning environments for people with disabilities (e.g., physical, mental, and learning disabilities) [13, 14]. Advanced technology, particularly haptic interface technology, enables blind individuals to expand their knowledge by using artificially made reality through haptic and audio feedback. Research on the implementation of haptic technologies within virtual navigation environments has yielded reports on the potential of haptic technology for supporting rehabilitation training with sighted people [15], as well as with people who are blind [16, 17]. Related research on the use of haptic devices by people who are blind includes the identification of textures and object shapes [18], a mathematical learning environment and the exploration of mathematical graphs [19], the exploration of geographic maps using audio and tactile feedback [20], and the construction of cognitive maps [21, 22].
In this paper we describe the integration of the BlindAid system in a traditional O&M rehabilitation program. The BlindAid system acts as a pre-planning aid that helps newly-blinded people develop awareness of the environments in which they travel, in a safe and relaxed manner, and it also provides tools that O&M instructors can use, in situ, to evaluate the progress of their clients during their rehabilitation program.
The three main goals of this model were to study:
• the cognitive mapping process of the newly blinded when using the virtual environment;
• the mental support the BlindAid system provides to the newly blinded; and
• the enhancement of the BlindAid system for the orientation and mobility instructors.
II. THE BLINDAID SYSTEM
Developed by the Touch Lab at MIT, the BlindAid system consists of a software package that provides a virtual environment (VE) for people who are blind, together with a haptic device. The system was designed at the MIT Touch Lab with assistance from an O&M instructor from the Carroll Center for the Blind (CCB). Fig. 1 depicts a schematic of the BlindAid system.
Figure 1. Schematic Diagram of the BlindAid System

Using this human-machine interaction, the user receives haptic and audio feedback. The haptic interface allows the user to interact with the VE and provides two functions: (1) it moves the avatar (the user's representation in the VE) through the VE, and (2) it provides force feedback that gives the user cues about the space similar to those generated by the white cane. In this study we used the Phantom, a haptic interface with high fidelity in force generation and position tracking. Although there are many haptic devices, the Phantom met the present need for functionality and commercial availability: it produces a mechanical force during a maneuver in the VE whenever the tip of the stylus attached to it penetrates any virtual object in the VE, such as a wall or a table.
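The paper does not give the force law used for this contact rendering. As a minimal sketch, the standard penalty (spring) model often used with impedance-type devices such as the Phantom can be written as follows; the wall plane and the stiffness constant are chosen purely for illustration.

```python
import numpy as np

# Minimal penalty-based haptic rendering sketch (illustrative only; not the
# BlindAid implementation). A virtual wall is modeled as a plane with an
# outward unit normal; when the stylus tip penetrates the plane, a spring
# force pushes it back out, similar to a cane meeting a wall.

K_STIFFNESS = 800.0  # N/m, illustrative stiffness constant

def wall_force(tip_pos, plane_point, plane_normal, k=K_STIFFNESS):
    """Return the reaction force (N) for a stylus tip against a virtual wall."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # Signed distance of the tip from the wall surface (negative = inside wall).
    distance = np.dot(tip_pos - plane_point, n)
    if distance >= 0.0:
        return np.zeros(3)          # no contact, no force
    penetration = -distance
    return k * penetration * n      # push the tip back along the wall normal

# Example: a wall in the y-z plane at x = 0, stylus tip 2 mm inside the wall.
force = wall_force(np.array([-0.002, 0.1, 0.0]),
                   np.array([0.0, 0.0, 0.0]),
                   np.array([1.0, 0.0, 0.0]))
print(force)  # about 1.6 N pushing the tip back out along +x
```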
In addition to haptic feedback through the Phantom, the user was also given 3D audio feedback through stereo headphones. The audio feedback provides the user with descriptive information as well as sound cues for the various artifacts he or she comes into contact with in the VE. To simplify the signal processing, Head-Related Transfer Functions (HRTFs) were used for the virtual 3D audio processing.
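The HRTF processing chain itself is not described in the paper. A minimal sketch of the usual approach, assuming a pair of head-related impulse responses (HRIRs) is already available for the direction of the virtual sound source, is shown below; the toy HRIR data stand in for measured responses.

```python
import numpy as np

# Illustrative HRTF-based spatialization (not the BlindAid code): a mono sound
# cue is convolved with the left- and right-ear head-related impulse responses
# (HRIRs) measured for the direction of the virtual source, producing a stereo
# signal the listener perceives as coming from that direction.

def spatialize(mono_signal, hrir_left, hrir_right):
    """Convolve a mono cue with the HRIR pair for one source direction."""
    left = np.convolve(mono_signal, hrir_left)
    right = np.convolve(mono_signal, hrir_right)
    return np.stack([left, right], axis=1)  # shape: (samples, 2) stereo output

# Example with toy data: a short click and 32-tap HRIRs (a real system would
# load measured HRIRs, e.g. from a public HRTF database).
click = np.zeros(256)
click[0] = 1.0
hrir_l = np.random.default_rng(0).normal(size=32) * 0.1
hrir_r = np.random.default_rng(1).normal(size=32) * 0.1
stereo = spatialize(click, hrir_l, hrir_r)
print(stereo.shape)  # (287, 2)
```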
The BlindAid system comprises two modes of operation: a developer mode and a learning mode.

A. The Developer Mode

A key feature of the developer mode is the program's ability to import AutoCAD (dxf) 2D drawing files to aid in the creation of new virtual maps. This makes it possible, for example, to import a floor plan so that the locations of all the walls of a building are accurately replicated in the VE. After creating a new virtual map, the researcher can add structural components (doors, windows, rods, etc.) or objects such as furniture (tables, chairs, doormats, etc.). Each component in the virtual map can be represented by geometric properties (size, shape, location), haptic feedback (tactile and/or stiffness properties), and audio feedback (hearcon, word, long description).
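As an illustration of the component properties listed above, the sketch below shows one possible in-memory representation of a virtual-map component; all class and field names are hypothetical, since the paper does not describe BlindAid's internal data format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a virtual-map component, following the property
# groups named in the paper: geometry, haptic feedback, and audio feedback.

@dataclass
class HapticProperties:
    stiffness: float = 1.0       # relative surface stiffness
    texture: str = "smooth"      # tactile texture label

@dataclass
class AudioProperties:
    hearcon: str = ""            # short sound cue played on contact
    word: str = ""               # spoken name of the object
    long_description: str = ""   # longer spoken description on request

@dataclass
class MapComponent:
    name: str                    # e.g. "door", "table", "doormat"
    size: tuple                  # (width, depth, height) in meters
    location: tuple              # (x, y, z) position in the VE
    haptic: HapticProperties = field(default_factory=HapticProperties)
    audio: AudioProperties = field(default_factory=AudioProperties)

# Example: a doormat placed just inside an imported floor plan.
doormat = MapComponent(
    name="doormat",
    size=(0.9, 0.6, 0.01),
    location=(2.4, 0.0, 1.1),
    haptic=HapticProperties(stiffness=0.3, texture="rough"),
    audio=AudioProperties(hearcon="soft_thud.wav", word="doormat"),
)
print(doormat.name, doormat.location)
```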
B. The Learning Mode

The learning mode, within which the user and the researcher work, includes two interfaces:

User Interface – The users' station consisted of a tabletop space with an arm support, a computer, keyboards, a set of stereo headphones, and the Phantom device through which the subject interacted with the VE. The user interface consisted of the VE that simulates real physical rooms and objects to be navigated by the user with the Phantom (Fig. 2). In addition to the Phantom, the user was able to interact with the VE by using the numeric keypad. The user interface includes four tools. The first is a 'second audio' tool that gives the subject additional audio information. The second tool is the ability to pause the movement of the Phantom and take a break at any time during the exploration. The last two tools were developed mainly to assist users in navigating by themselves in the VEs. The 'return to the starting point' tool allows the user to return to the starting point in case of confusion about the current location in the VE. The fourth tool, 'active landmarks,' allows the researcher and the user to install a landmark in the VE. Landmarks are basically audio locators that are placed in the VE by the researcher or by the user.

Figure 2. User Interface
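The four keypad tools lend themselves to a small dispatch table. The sketch below is illustrative only: the paper does not give BlindAid's key bindings or session state, so every binding and field name here is hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the four user tools, driven from the numeric keypad.

@dataclass
class Session:
    starting_point: tuple = (0.0, 0.0, 0.0)
    avatar_position: tuple = (0.0, 0.0, 0.0)
    paused: bool = False
    landmarks: list = field(default_factory=list)
    last_touched_description: str = "a wooden door, closed"

def second_audio(s: Session):
    print("speaking:", s.last_touched_description)   # extra audio information

def toggle_pause(s: Session):
    s.paused = not s.paused                           # freeze/resume Phantom movement

def return_to_start(s: Session):
    s.avatar_position = s.starting_point              # recover from disorientation

def drop_landmark(s: Session):
    s.landmarks.append(s.avatar_position)             # audio locator at current spot

KEYPAD = {"1": second_audio, "2": toggle_pause, "3": return_to_start, "4": drop_landmark}

s = Session(avatar_position=(3.2, 0.0, 1.5))
KEYPAD["4"](s)   # the user drops a landmark at the current position
KEYPAD["3"](s)   # and returns to the starting point
print(s.landmarks, s.avatar_position)
```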
Researcher Interface - The researchers’ station consisted of
a flat panel computer monitor, a mouse, and an additional
keyboard that was used as an aid for researchers to observe the
progress of the subjects in the VE during the experiments. The
researcher interface included several features that serve
researchers during and after the learning session. On-screen
monitors present updated information on the user’s navigation
performance, such as position or objects already reached. The
researcher interface allows researchers to record and review a
user’s exploration activities in the VE. During an experiment
session, the computer records the avatar’s position and
orientation within the VE in a text file, along with any
command actions. Subsequently, the data file may be viewed directly or replayed by the system like a video recording (Fig. 3).
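The paper describes the log as a text file holding the avatar's position and orientation along with any command actions, which can later be replayed like a video recording. The sketch below shows one plausible file layout and replay loop consistent with that description; it is not the actual BlindAid format.

```python
import time

# Illustrative logging/replay sketch for the researcher interface. Each line
# holds a timestamp, the avatar position (x, y, z), a heading angle, and an
# optional command action, separated by tabs (an assumed layout).

def log_sample(f, t, pos, heading, action=""):
    f.write(f"{t:.3f}\t{pos[0]:.3f}\t{pos[1]:.3f}\t{pos[2]:.3f}\t{heading:.1f}\t{action}\n")

def replay(path, speed=1.0):
    """Step through a recorded session, sleeping to reproduce the original timing."""
    prev_t = None
    with open(path) as f:
        for line in f:
            t, x, y, z, heading, action = line.rstrip("\n").split("\t")
            t = float(t)
            if prev_t is not None:
                time.sleep(max(0.0, (t - prev_t) / speed))
            prev_t = t
            print(f"t={t:7.3f}s  pos=({x}, {y}, {z})  heading={heading}  {action}")

# Example: record two samples and replay them at double speed.
with open("session.log", "w") as f:
    log_sample(f, 0.0, (0.0, 0.0, 0.0), 90.0)
    log_sample(f, 0.5, (0.1, 0.0, 0.0), 90.0, "landmark_added")
replay("session.log", speed=2.0)
```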
Figure 3. Research Interface

III. ORIENTATION AND MOBILITY REHABILITATION PROGRAM

The CCB was founded in 1936. Today, the CCB offers a vast array of services to individuals aged 14-90, with an average age of 36 years. The CCB practitioners followed Amendola's approach and developed a sensory training methodology and curriculum for O&M skills for people who are blind [2, 3]. This methodology is based on systematically collecting information through all of the senses from the immediate environment. The three primary senses used in restoring orientation function are audio, smell, and touch, along with kinesthesia. The CCB O&M rehabilitation program has two main programs: community-based and campus-based. The campus-based program, which is 16 weeks long, is an intensive and comprehensive rehabilitation course designed primarily to help newly-blinded adults make the physical and emotional adjustments to living with blindness. The first two weeks include a functional assessment encompassing activities of daily living, travel skills, use of remaining vision, information management, personal health care, and adjustment to vision loss. At the end of these two weeks the CCB O&M specialists evaluate the client's condition and write their evaluation in a report. This report is a broad evaluation of the client, covering issues such as the client's eye condition; present vision; medical conditions (hearing, balance, stamina); previous O&M training; current functioning in mobility (unaided vision, vision plus low vision aids, human guide, long cane, guide dog, public transportation, independent travel); and stated mobility goals. The second part of the rehabilitation program is a 14-plus-week program to assist clients in achieving or maintaining personal independence.

The campus-based program addresses a variety of needs: O&M skills, counseling, communication skills, daily living skills, health management, and low vision skills.

The O&M program includes three parts: orientation, mobility, and cane technique. The orientation part has six components: (i) the use of sensory landmarks and clues, for example, hearing (direct sound to identify objects, distance, or direction, and echolocation), kinesthesia (elevation), touch (object texture and ground texture), and smell; (ii) the use of visual scanning and audible signals to cross a street; (iii) the use of landmarks and clues, such as buildings, street names, and door numbers; (iv) the use of cardinal directions for travel; (v) the ability to recover when disoriented (problem solving); and (vi) the construction of a mental map for a route. The mobility part has four components: (i) basic mobility skills that involve a human guide, for example, negotiating doorways, stairs, narrow passageways, and elevators, traveling with a human guide while using a white cane, hand trailing, and protective techniques; (ii) indoor travel in the CCB buildings, for example, how to travel independently to and from each location on specific floors and buildings; (iii) campus navigation, for example, traveling to and from different parking lots to different buildings, recognizing the east, west, and north driveways, and walking from different locations on the CCB campus to Sargent Street or Centre Street and to the bus stops; and (iv) community travel, for example, the ability to listen to verbal directions, repeat directions, execute simple and complex routes, travel on a bus or subway independently, ask for help when needed, and interact with the public appropriately. The third part of the O&M program focuses on cane technique. During the 14 weeks of the campus-based program each client had three to four 50-minute O&M sessions each week.
IV. THE BLINDAID INTEGRATED IN THE TRADITIONAL O&M REHABILITATION MODEL
The integration of the BlindAid system into the CCB traditional O&M rehabilitation program took place in two stages. The first stage, which included planning and design, occurred before the clients began using the BlindAid system; the second stage, which followed once the clients started using it, involved tailoring the program so that it was responsive to the specific needs of the client as well as of the program. The first stage was based on three issues:
Design and Development of VE Models

Ten spaces were chosen by the CCB O&M instructors and the researcher to be modeled as learning VEs. The ten models covered parts of the Main building, the dormitory, the St. Paul building, and the outdoor main campus. The Main building, the dormitory, and the CCB campus are the areas most used for training at the CCB during the O&M rehabilitation program. Nine VE models contained the structural and object components of the four floors of the Main building, the four floors of the dormitory, and the CCB campus, which included all buildings, paths, parking lots, streets, and objects that surround the campus (see Fig. 4). The tenth model, an area that is not part of the O&M rehabilitation program, was of the main floor of the St. Paul building. It was used at the end of the rehabilitation program to understand the clients' ability to explore unknown spaces by using only the BlindAid system and to study how clients applied their spatial knowledge in the real physical space.

Figure 4. A Blueprint of the CCB Campus
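Models such as these can be produced from architectural drawings through the developer mode's DXF import described in Section II. As a hedged illustration of that step, the sketch below reads wall segments from a 2D DXF floor plan using the third-party ezdxf library; the library choice, the file name, and the 'WALLS' layer name are assumptions, since the paper does not describe how the import is implemented.

```python
import ezdxf  # third-party DXF reader, assumed available (pip install ezdxf)

# Illustrative sketch: pull LINE entities off a wall layer of a 2D floor plan
# so they can be instantiated as wall components in a VE model.

def load_wall_segments(path, layer="WALLS"):
    doc = ezdxf.readfile(path)
    segments = []
    for line in doc.modelspace().query("LINE"):
        if line.dxf.layer == layer:
            start, end = line.dxf.start, line.dxf.end
            segments.append(((start.x, start.y), (end.x, end.y)))
    return segments

# Example (assuming such a floor-plan file exists):
# walls = load_wall_segments("main_building_floor1.dxf")
# print(len(walls), "wall segments")
```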
Design and Development of O&M Tasks

The O&M tasks were designed by the researcher and later evaluated by the CCB O&M specialists for safety. The O&M tasks were divided into two categories: O&M tasks in the VE and O&M tasks in the real physical space. Table 1 presents examples. The O&M tasks in the VE were: (i) a verbal description of the space; (ii) exploration tasks, in which the exploration time was limited by the O&M instructor according to their estimate of the time required to explore the parallel physical space; (iii) an object-oriented task, which occurred after the exploration task, in which the clients were asked to find the VE starting point and to find five objects in the VE; (iv) an object-oriented task, in which the client was asked to create his or her path from the VE starting point to one of the VE objects and then asked to describe the path; and (v) a perspective task, in which the client was asked to create his or her path from one VE landmark (A) to another VE landmark (B) and afterwards was asked to describe the path. There were three types of O&M tasks in the physical space: (i) two object-oriented tasks, in which the client was first asked to perform the same object-oriented task as in the VE and then to go back to the starting point, followed by a second task in which the client was asked to go from the starting point to another object in the same space and afterwards to go back to the starting point; (ii) two perspective tasks, similar to the above task, in which the client was asked to perform the same perspective task required in the VE and then asked to go back, followed by the client being asked to go from location A to another object in the space (C) and then asked to go back to location A; and (iii) pointing tasks, in which the client was asked to stand at the starting point facing the same direction as in the VE and to point with his or her finger to the location of five different objects.

TABLE 1: ORIENTATION TASKS IN THE VE AND IN THE REAL PHYSICAL SPACE

In the VE
  (i) Verbal description
  (ii) Exploration
  (iii) Object-oriented. Example: 'Go to the starting point / bathroom / iron board / Dina's office / refrigerator / elevator'
  (iv) Object-oriented. Examples: 'Create your path from the starting point to the stairs'; 'Describe your path from the starting point to the stairs'
  (v) Perspective. Examples: 'Create your path from the elevator to the bathroom'; 'Describe your path from the elevator to the bathroom'

In the Real Physical Space
  (i) Object-oriented (similar to the VE object-oriented task). Example: 'From the starting point, go to the stairs'
      Object-oriented – reverse. Example: 'Go back to the starting point'
      Object-oriented – new. Example: 'From the corner of the library, go to the kitchen sink'
      Object-oriented – reverse. Example: 'Go back to the starting point'
  (ii) Perspective (similar to the VE perspective task). Example: 'From the elevator, go to the bathroom'
      Perspective – reverse. Example: 'Go back to the elevator'
      Perspective – new. Example: 'From the elevator, go to the Low Vision Department'
      Perspective – reverse. Example: 'Go back to the elevator'
  (iii) Hand pointing tasks from the VE starting point. Example: 'Point to the location of: Administration Office / kitchen / bathroom'
Clients' O&M Evaluation
Before integrating the BlindAid system in the traditional
O&M program, each O&M instructor was asked to evaluate
their clients’ behavior on four criteria. The instructors were
first asked to define their client’s abilities in orientation,
mobility, and spatial memory on a scale of one to five ((1) very good; (2) good; (3) it depends, sometimes; (4) has some difficulties; (5) not at all). Next, the instructors were asked to
give some background on their client’s O&M abilities and
skills based on their evaluation report and information on their
14-week O&M rehabilitation program. The third assessment
included an evaluation of how well the client knew the ten
research spaces (four floors of the Main building, four floors of
the dormitory, the CCB campus, and St. Paul main floor), using
the same scale. For the last assessment, the instructors were
asked to recommend special O&M methods for the clients to
use in the VE training.
The second stage of the integration of the BlindAid system
into the CCB traditional O&M rehabilitation program took
place after the clients started using the BlindAid system. The
researcher met with clients two to three times each week for 50
minutes. The first three meetings included reviewing the
consent form and getting the participant’s signature, answering
the O&M questionnaire, and training on how to operate the
BlindAid system. At the fourth session, the clients started
exploring the VE models that represented the CCB buildings
and campus. Each session was dedicated to a different building
floor, from simple spaces to complex spaces and included
O&M tasks in the VE and in the real physical space, as
described above in Table 1. Once a week, the researcher met individually with the O&M instructors. Using the researcher interface and the user's log (see Fig. 3), the instructor could review the client's exploration performance, assess which exploration strategy the client used during his or her exploration, and determine whether a special O&M method needed to be addressed in the VE training. In addition to gathering information about the client's exploration in the VE, the instructor learned how his or her client was performing orientation tasks in the physical space.
The model described above for integrating the BlindAid system into CCB's O&M rehabilitation program was studied with an experimental group (n=10) and a control group (n=10). Detailed results from the study, as well as preliminary conclusions, will be presented at the conference.

This project was supported by the National Institutes of Health, National Eye Institute, Grant No. 5R21EY16601-2.

REFERENCES

[1] R. Amendola, "Touch kinesthesis and grasp (haptic perception)," unpublished manuscript, held by The Carroll Center for the Blind, MA, 1969.
[2] N. Campbell, "Sensory training," in R. Rosenbaum (Ed.), The Sound of Silence. The Carroll Center for the Blind, Newton, MA, 1992.
[3] N. Campbell, "Mapping," in R. Rosenbaum (Ed.), The Sound of Silence. The Carroll Center for the Blind, Newton, MA, 1992.
[4] M. A. Espinosa and E. Ochaita, "Using tactile maps to improve the practical spatial knowledge of adults who are blind," Journal of Visual Impairment and Blindness, 92(5), pp. 338-345, 1998.
[5] J. F. Herman, T. G. Herman, and S. P. Chatman, "Constructing cognitive maps from partial information: A demonstration study with congenitally blind subjects," Journal of Visual Impairment and Blindness, 77(5), pp. 195-198, 1983.
[6] J. J. Rieser, "Access to knowledge of spatial structure at novel points of observation," Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(6), pp. 1157-1165, 1989.
[7] S. Ungar, M. Blades, and S. Spencer, "The construction of cognitive maps by children with visual impairments," in J. Portugali (Ed.), The Construction of Cognitive Maps. Kluwer Academic Publishers, Netherlands, 1996, pp. 247-273.
[8] D. H. Warren and E. R. Strelow, Electronic Spatial Sensing for the Blind. Martinus Nijhoff Publishers, MA, 1985.
[9] R. D. Easton and B. L. Bentzen, "The effect of extended acoustic training on spatial updating in adults who are congenitally blind," Journal of Visual Impairment and Blindness, 93(7), pp. 405-415, 1999.
[10] W. Crandall, B. L. Bentzen, L. Myers, and P. Mitchell, Transit Accessibility Improvement Through Talking Signs Remote Infrared Signage: A Demonstration and Evaluation. San Francisco, CA: The Smith-Kettlewell Eye Research Institute, Rehabilitation Engineering Research Center, 1995.
[11] S. Landau, W. Wiener, K. Naghshineh, and E. Giusti, "Creating accessible science museums with user-activated environmental audio beacons (Ping!)," Assistive Technology, 17, pp. 133-143, 2005.
[12] R. Golledge, R. Klatzky, and J. Loomis, "Cognitive mapping and wayfinding by adults without vision," in J. Portugali (Ed.), The Construction of Cognitive Maps. Kluwer Academic Publishers, Netherlands, 1996, pp. 215-246.
[13] M. T. Schultheis and A. A. Rizzo, "The application of virtual reality technology for rehabilitation," Rehabilitation Psychology, 46(3), pp. 296-311, 2001.
[14] P. J. Standen, D. J. Brown, and J. J. Cromby, "The effective use of virtual environments in the education and rehabilitation of students with intellectual disabilities," British Journal of Educational Technology, 32(3), pp. 289-299, 2001.
[15] C. Giess, H. Evers, and H. P. Meinzer, "Haptic volume rendering in different scenarios of surgical planning," presented at the Third PHANToM Users Group Workshop, M.I.T., Massachusetts, 1998.
[16] G. Jansson, J. Fanger, H. Konig, and K. Billberger, "Visually impaired persons' use of the Phantom for information about texture and 3D form of virtual objects," presented at the Third PHANToM Users Group Workshop, M.I.T., Massachusetts, 1998.
[17] C. Colwell, H. Petrie, and D. Kornbrot, "Haptic virtual reality for blind computer users," presented at the Assets '98 Conference, Los Angeles, CA, 1998. Available: http://phoenix.herts.ac.uk/sdru/pubs/VE/colwell.html
[18] C. Sjostrom and K. Rassmus-Grohn, "The sense of touch provides new computer interaction techniques for disabled people," Technology and Disability, 10(1), pp. 45-52, 1999.
[19] A. I. Karshmer and C. Bledsoe, "Access to mathematics by blind students - introduction to the special thematic session," presented at the International Conference on Computers Helping People with Special Needs, 2002.
[20] P. Parente and G. Bishop, "BATS: The blind audio tactile mapping system," presented at ACMSE, Savannah, GA, 2003.
[21] O. Lahav and D. Mioduser, "Exploration of unknown spaces by people who are blind, using a multisensory virtual environment (MVE)," Journal of Special Education Technology, 19(3), 2004.
[22] S. K. Semwal and D. L. Evans-Kamp, "Virtual environments for visually impaired," presented at the 2nd International Conference on Virtual Worlds, Paris, France, 2000.