Three-dimensional Spatial Learning in a Virtual Space Station Node
by
Jason T. Richards
B.A., Psychology (1997)
B.S., Mathematics (1998)
University of the Pacific
Submitted to the Department of Aeronautics and Astronautics
in Partial Fulfillment of the Requirements for the Degree of
Master of Science in Aeronautics and Astronautics
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
September 2000
© 2000 Massachusetts Institute of Technology
All Rights Reserved
Signature of Author______________________________________________________________
Department of Aeronautics and Astronautics
August 22, 2000
Certified by____________________________________________________________________
Charles M. Oman
Director, Man-Vehicle Laboratory
Thesis Supervisor
Accepted by___________________________________________________________________
Professor Nesbitt W. Hagood
Chair, Departmental Graduate Office
Department of Aeronautics and Astronautics
Three-dimensional Spatial Learning in a Virtual Space Station Node
by
Jason T. Richards
Submitted to the Department of Aeronautics and Astronautics on
August 21, 2000 in Partial Fulfillment of the Requirements for the
Degree of Master of Science in Aeronautics and Astronautics
ABSTRACT
Astronauts find it difficult to recognize their orientation while facing any of the viewing directions in 6-ported space station node modules. Our previous experiments tested the spatial memory of human
subjects in 1-G in an analogous cubic virtual environment and showed that humans are able to learn to
orient when instructed to imagine different body orientations while facing in two different directions. Can
subjects do the task when facing in all 6 directions? Does training help? Does spatial memory depend on
the direction of remembered targets relative to the body? Does performance depend on the subject’s
ability to mentally rotate himself or herself and use imagery? How long is the ability retained after training? 3D spatial
learning was studied in two virtual cubic chambers, in which a picture of an animal was drawn on each
wall. Through trial-by-trial exposures to a virtual chamber, subjects (n=24) had to memorize the spatial
relationships among the 6 pictures around them and learn to predict the direction to a specific picture if
they were facing any wall in any roll orientation. After learning in one chamber, the procedure was
repeated in a second. Before being tested, subjects received computer-based instructions and practice.
Half of the subjects were taught to remember logical picture groupings (strategy), while the remaining
(control) subjects were free to do the task as they saw fit. Subjects’ retention of configurational
knowledge (both chambers) and spatial ability (second chamber only, without feedback) were re-tested 1,
7, and 30 days after initial training. Response time (RT) and percent correct (% correct) learning curves
were measured on all four days, while configurational knowledge was tested on the last three. All subjects
ultimately learned to do the task within 36 trials in either test environment, but performed faster in the
second environment than in the first (especially the strategy-trained group). The strategy group showed
superior % correct and RT for above/behind targets and generally better configurational knowledge.
Retention of configurational knowledge and spatial ability for both groups was good over 30 days. The
subjects who reported using mental imagery (n=8) had higher scores on figure rotation tests and % correct
for left/right targets. Performances by the control group on the experimental tasks were significantly
correlated with those on conventional tests of field independence and 2/3D figure rotation ability.
Strategy training helped those who had poorer mental rotation skills, and those who could not use mental
imagery.
Supported by NASA Cooperative Agreement NCC9-58 with the National Space Biomedical Research Institute, USA
Thesis Supervisor: Charles M. Oman
Title: Director of the Man-Vehicle Laboratory
Acknowledgements
–
To Charles “Chuck” M. Oman, PhD, thesis supervisor, and Director of the Man-Vehicle Laboratory,
for giving me the opportunity to come to MIT and be a part of such a prestigious academic
environment. The patience and confidence with which you guided me through the process made it
much easier to handle than it would have been otherwise. I cannot imagine having a better match for
an advisor. Remember to root for my buddy Reichert and the Royals when they’re in town!
–
To Alan Natapoff, PhD, for all of your generosity of knowledge, spirit, and wisdom in supporting me
throughout this project. You lifted my spirits when I was frustrated and helped me maintain
perspective when the goal seemed too far away to see. Your expert advice has been and always will
be of great encouragement and value to me. It was an absolute joy for me to have been blessed with
such charitable mentoring. Once again, thank you. (p.s. “You hang in”… and save my seat!)
–
To Andy Beall, PhD, now at UC-Santa Barbara: The man who made the transition from
undergraduate school in California to graduate school at MIT as smooth as possible. I don’t know if
it’s possible to ask more questions than I asked you in the fall of 1998. But, you were always
incredibly patient, and you always had an answer that worked. Only a Californian could have done all
this without ever being in a bad mood! See you in sunny SoCal.
–
To Dr. Wayne Shebilske (formerly of Texas A&M, now at Wright State University) for playing a
major role in conceiving the original node experiment described in Sect.2.3.
–
To Hilda Gutierrez, distinguished MVL UROPer, for doing much of the preliminary programming
and modeling for the original node experiment before I arrived as a first-year graduate student.
Thanks for being so compassionate when times were rough.
–
To Rex “Rex-Dog” Wu, former MVLer and roommate for 3 months: I now know how you felt last
year at this time. Thanks for the positive support and companionship, and the hilarious memories of
watching the Terrapins of Maryland play basketball (and lose) against Kathy Sienko’s Kentucky
Wildcats!
–
To Joe Saleh, for the help you so selflessly offered when I was most overwhelmed by the mystique of
MIT. I might never have made it through that first semester without your study sessions.
–
To my mother and father, Kathy and Larry Richards, for getting me here. It never would have been
possible without the loving support of you two amazing individuals throughout the years… special
thanks to the notorious and often-used “Bank of Parents” – don’t worry, jobs are on the way. Thank
you so much. I love you dearly!
–
To Ms. Melanie Heather Wright, for making the last 3 months of my life in Boston utterly blissful.
Your heart and spirit are what every man needs. Words will never be able to describe how fortunate I
feel about the intersection of our paths. Mahalo, Honey Girl!
–
To Professors Dava Newman and Larry Young for their guidance, encouragement, and generous
hosting of MVL outings.
The present research was supported by NASA Cooperative Agreement NCC9-58 with the National Space
Biomedical Research Institute, USA.
Table of Contents
ABSTRACT  3
ACKNOWLEDGEMENTS  5
TABLE OF CONTENTS  7
LIST OF FIGURES  10
LIST OF TABLES  11
CHAPTER 1: INTRODUCTION  13
  1.1 Sense of Static Orientation in 0-G  13
  1.2 Problems Related to Space Station Structures  14
  1.3 Importance of Spatial Memory in Emergency Situations  15
CHAPTER 2: BACKGROUND  19
  2.1 Spatial Memory and Navigation in 1-G  19
  2.2 Previous Visual Orientation and Spatial Memory Training  21
    2.2.1 Virtual Environment Training for 1-G Applications  21
    2.2.2 Spatial Orientation Countermeasures for 0-G Applications  21
  2.3 Space Station Node Experiment 1  22
  2.4 Space Station Node Experiment 2: The Present Experiment  23
CHAPTER 3: METHOD  25
  3.1 Subjects  25
  3.2 Materials and Apparatus  25
    3.2.1 Virtual Environment Generator  25
    3.2.2 Virtual Cubic Chamber and Object Arrays  27
  3.3 Procedure  29
    3.3.1 Experiment Timeline  29
    3.3.2 Spatial Learning Trials  31
    3.3.3 Paper-and-Pencil Spatial Ability Tests  33
    3.3.4 Strategy Training and Control Training  33
    3.3.5 Retention Tests  34
    3.3.6 Exit Interviews  35
  3.4 Experiment Design and Data Analysis  36
CHAPTER 4: RESULTS  38
  4.1 Evidence of Learning  38
  4.2 Transfer of Learning from the First Array to the Second  38
  4.3 Effect of Strategy Training  42
  4.4 Effect of Using Mental Imagery  46
  4.5 Retention Testing  50
    4.5.1 Spatial Ability Test  50
    4.5.2 Effect of Layoff on Spatial Ability  51
    4.5.3 Configurational Knowledge Test  54
  4.6 Predictors of Task Performance  58
  4.7 Exit Interview Responses  59
  4.8 Relative Target Direction  62
  4.9 Array Presentation Order  66
CHAPTER 5: DISCUSSION  69
CHAPTER 6: CONCLUSIONS  72
REFERENCES  77
APPENDIX A: PYTHON/VRUT CODE  79
  A.1 Training Day Script for Control Group  79
  A.2 Training Day Script for Strategy Group  99
  A.3 Script for the Configurational Knowledge Retention Test  104
APPENDIX B: SUBJECT HISTORY QUESTIONNAIRE  121
APPENDIX C: PAPER AND PENCIL TESTS  124
APPENDIX D: INSTRUCTION SLIDE SHOWS  125
  D.1. Strategy Group Instructions  125
  D.2. Control Group Instructions  134
APPENDIX E: TRIAL SEQUENCE AND COUNTERBALANCING  137
APPENDIX F: SCORING CONVENTION FOR CONFIGURATION TEST  139
APPENDIX G: SUBJECT CONSENT FORM  145
List of Figures
FIGURE 1.1. MIR SPACE STATION SCHEMATIC  17
FIGURE 1.2. INTERNATIONAL SPACE STATION SCHEMATIC  18
FIGURE 3.1. HEAD-MOUNTED DISPLAY SYSTEM  26
FIGURE 3.2. INTERSENSE IS600-MARK 2 TRACKING SYSTEM  27
FIGURE 3.3A. WIDE-ANGLE VIEW OF THE INTERIORS OF THE THREE OBJECT ARRAYS, AS SEEN FROM THE INITIAL, OR "BASELINE," SIMULATED ORIENTATION  28
FIGURE 3.3B. SUBJECT WEARING HMD AND PERFORMING EXPERIMENTAL TASK  28
FIGURE 3.4. EXPERIMENT TIMELINE  30
FIGURE 3.5. SCHEMATIC OF 3D SPATIAL LEARNING EXPERIMENT TIMELINE FOR EACH TRIAL  32
FIGURE 4.1. MEAN PERFORMANCE FOR ALL TARGETS, BY SET, WITHIN TRAINING GROUP ON THE TRAINING DAY  43
FIGURE 4.2. MEAN PERFORMANCE FOR LEFT/RIGHT TARGETS, BY SET, WITHIN TRAINING GROUP ON THE TRAINING DAY  44
FIGURE 4.3. MEAN PERFORMANCE FOR ABOVE/BEHIND TARGETS, BY SET, WITHIN TRAINING GROUP ON THE TRAINING DAY  45
FIGURE 4.4. MEAN % CORRECT, BY SET, WITHIN IMAGERY GROUP ACROSS TRAINING GROUP ON THE TRAINING DAY  47
FIGURE 4.5. MEAN % CORRECT FOR NON-IMAGERY SUBJECTS, BY SET, WITHIN TRAINING GROUP  49
FIGURE 4.6. MEAN PERFORMANCE FOR ALL TARGETS, BY SET, OVER DAYS WITHIN TRAINING GROUP, DATA FOR TRAINING DAY (SETS 4-6) USING SECOND ARRAY  52
FIGURE 4.7. MEAN PERFORMANCE FOR LEFT/RIGHT TARGETS, BY SET, OVER DAYS WITHIN TRAINING GROUP, DATA FOR TRAINING DAY (SETS 4-6) USING SECOND ARRAY  53
FIGURE 4.8. MEAN PERFORMANCE FOR ABOVE/BEHIND TARGETS, BY SET, OVER DAYS WITHIN TRAINING GROUP, DATA FOR TRAINING DAY (SETS 4-6) USING SECOND ARRAY  54
FIGURE 4.9. MEAN PERFORMANCE FOR THE STRATEGY GROUP BY RELATIVE-TARGET DIRECTION OVER DAYS  64
FIGURE 4.10. MEAN PERFORMANCE FOR THE CONTROL GROUP BY RELATIVE-TARGET DIRECTION OVER DAYS  65
FIGURE 4.11. MEAN RT FOR RELATIVE-TARGET-DIRECTION GROUPS WITHIN ARRAY PRESENTATION ORDER ACROSS TRAINING, BY SET, OVER DAYS  68
FIGURE E.1. COMBINATIONS, ORDER, AND NOTATION CONVENTION USED FOR TRIALS  138
FIGURE F.1. OBJECT-POSITION CODE IN THE BASELINE ORIENTATION  140
List of Tables
TABLE 1-1. GROUP SCORES ON PAPER AND PENCIL TESTS  34
TABLE 4-1. SIGNIFICANT TRANSFER IN MEAN RT AND % CORRECT BY TRAINING, TARGET GROUP, AND SET  40
TABLE 4-2. TRENDS IN PERFORMANCE BY TRAINING, ARRAY PRESENTATION ORDER, AND SET ON THE TRAINING DAY FOR ABOVE/BEHIND TARGETS  41
TABLE 4-3. TRENDS IN PERFORMANCE MEANS BY TRAINING, ARRAY PRESENTATION ORDER, AND SET ON THE TRAINING DAY FOR LEFT/RIGHT TARGETS  41
TABLE 4-4. TRAINING EFFECTS ON PERFORMANCE BY RELATIVE TARGET DIRECTION, ARRAY, AND SET ON THE TRAINING DAY  42
TABLE 4-5. EFFECT OF CLAIMED USE OF MENTAL IMAGERY ON % CORRECT BY SET  48
TABLE 4-6. EFFECT OF STRATEGY TRAINING ON % CORRECT BY SET AMONG THE NON-IMAGERY GROUP  50
TABLE 4-7. CONFIGURATION KNOWLEDGE FOR TRAINING GROUPS BY DAY AND OBJECT ARRAY  56
TABLE 4-8. CONFIGURATION TEST MEASUREMENTS FOR THE FIRST ARRAY BY SUBJECT AND DAY  57
TABLE 4-9. CONFIGURATION TEST MEASUREMENTS FOR THE SECOND ARRAY BY SUBJECT AND DAY  58
TABLE 4-10. SPEARMAN CORRELATION COEFFICIENTS FOR PAPER AND PENCIL TESTS  59
TABLE 4-11. SUMMARY OF TRAINING DAY EXIT INTERVIEW RESPONSES FOR TRAINING GROUPS  60
TABLE 4-12. SUMMARY OF RETENTION DAY EXIT INTERVIEW RESPONSES FOR TRAINING GROUPS  61
TABLE 4-13. EFFECT OF RELATIVE-TARGET DIRECTION ON STEADY-STATE PERFORMANCE BY DAY  66
TABLE 4-14. ORDER EFFECT ON RT PERFORMANCE, BY RETENTION DAY AND SET, WITHIN RELATIVE-TARGET DIRECTION  67
TABLE F-1. UNIQUE RANK-ORDER SCORES  141
TABLE F-2. DOUBLE COMBINATIONS  143
Chapter 1: INTRODUCTION
Human space flight has been a reality for almost 40 years. In the beginning, concern about space travel focused mostly on whether the human body could withstand the high G-loads of launch and re-entry. Thanks to the efforts of a multitude of talented engineers, many short-duration Shuttle missions have since been flown with a high rate of success. Humans have also been successfully
sent into orbit to live on space stations like Skylab and Russia’s MIR for up to a year without fatal harm.
Several problems related to spatial orientation and spatial memory have been identified during these
longer missions that have yet to be solved.¹

¹ Some sections of this introduction have been adapted from a research proposal submitted to the National Space Biomedical Research Institute on June 16, 2000, by Oman et al.
1.1 Sense of Static Orientation in 0-G
Here on Earth, knowing where we are and where we need to go is almost second nature for many
of us. If we are sitting in our living room at home, most of us can point to places of interest in our
communities and describe how to get there with relative ease. However, astronauts have found it much
more difficult to locate major landmarks while working in microgravity. Why is it so much easier to
orient on Earth than in orbital flight? In daily life on Earth, “up” and “down” are defined for us by
gravity, and we always see people and objects upright relative to us. Information from our otolith organs and other gravireceptors also provides us with salient cues regarding the static orientation of our bodies relative to the gravitational vertical. Astronauts flying on Skylab (Cooper, 1976) and Shuttle (Oman et al., 1984) missions, however, reported that their sense of static orientation is unstable in orbital flight due to the
absence of an intrinsic gravitational “down.” When floating with their feet toward the space station (or
shuttle) “floor” (whose identity is transferred visually from 1-G simulations), crewmembers rarely seem
to have a problem with their sense of self-orientation.
When they work upside down (relative to the learned upright orientation in 1-G training modules), however, or upright while viewing another crewmember who is working upside down, astronauts frequently experience a sudden change in the direction of the perceived vertical of the module,
called a “visual reorientation illusion” (VRI) (Oman et al., 1986). In this compelling illusion, the
surrounding walls, “floor,” and “ceiling” seem to exchange subjective identities. VRIs are much like
figure reversal illusions (e.g., the Necker cube), except it is one’s own subjective orientation that changes.
The sudden change in perceived orientation (unaccompanied by normal vestibular motion cues) can
trigger space sickness, cause reaching errors, and make it difficult to recognize important landmarks.
VRIs occur because the surface below one’s feet is always a floor on Earth, and because other people
(and some objects) usually appear in a visually upright orientation relative to the observer. Unweighting
of gravireceptors and headward fluid shift also contribute to the problem, making some astronauts feel as
if they are continuously upside down ("inversion illusion") (Gazenko, 1964; Matsnev et al., 1983; Oman et al., 1986; Lackner, 1992).
Individual differences in susceptibility to VRIs have been found between crewmembers on the
Neurolab Shuttle mission to whom scenes of spacecraft interiors were presented at various tilt angles
using a virtual reality display (Oman, et al., 2000). VRIs are experienced on Earth, but usually only occur
about the gravitational upright, such as when one exits an unfamiliar building and finds he or she is facing
an unexpected direction. These illusions have, however, been induced experimentally about the
gravitational horizontal in 1-G through the use of real and virtual tumbling rooms (Howard and
Childerson, 1994; Oman and Skwersky, 1997).
1.2 Problems Related to Space Station Structures
US astronauts living on the Russian MIR space station have reported spatial orientation
difficulties stemming from the complex architecture of the structure. MIR is composed of 6 different modules connected at right angles to one another at a central hub called "the node" (see Figure 1.1).
An astronaut floating inside the node is surrounded by six different portals, each leading to one of the
modules. A brown "floor," a blue "ceiling," and tan walls mark the rectangular interior of, and define a visual vertical for, each module. One of the main problems with this configuration is that the different
modules’ visual axes are not co-aligned. Two of the modules’ visual axes are actually rotated 180 degrees
from one another: For the sake of analogy, imagine walking from a normal room in your house into one
with all the furniture bolted upside down to the ceiling! This makes it hard for astronauts to mentally visualize other parts of the station, even though they claim to know the entire physical arrangement. Without the
ability to visualize effectively, crewmembers develop route knowledge via declarative rules that link
important landmarks, much as we do on Earth (Anderson, 1982). With practice, the procedures associated
with these rules eventually become nearly automatic. For example, one crewmember recalled: “I learned
that to go into Priroda, I needed to leave the base block upright, go through the hatch to my left, and then
immediately roll upside down so that Priroda would be right side up.”
Navigating is especially difficult when one must pass through the node. The International Space
Station (ISS) will have up to six nodes with up to six modules connected to each of them in the same
orthogonal manner as the one on MIR. Learning and remembering spatial relationships both between
modules and between nodes in such a complex arrangement will probably be quite difficult. Recently
implemented NASA human factors standards (e.g., 3000/8.4.3), however, require only that internal visual
verticals be consistent within a module. Fortunately, the SM, FGB, USLab, JEM, and COF all have
parallel axes and are connected at a common node module (see Figure 1.2). Although travelling through
arrangements like this will probably not be as difficult as on MIR, US nodes and modules have a square
interior cross section and similarly colored equipment and stowage drawers on all 4 surrounding surfaces.
This increases visual directional ambiguity and fails to alleviate susceptibility to the previously described
disorienting illusions. Meanwhile, the increased number of nodes and modules in the ISS structure adds
complexity to crewmembers’ spatial orientation tasks.
1.3 Importance of Spatial Memory in Emergency Situations
Maintaining spatial memory is especially important in emergency situations, when crews have to make spatial judgments in darkness or in smoke-obscured cabins. The majority of Shuttle, MIR, and
Skylab crewmembers claim to depend heavily on visual cues for orientation and consider these cues very
important when confusion occurs. Spatial orientation difficulties encountered in the node on MIR,
especially by Shuttle visitors, prompted Russian cosmonauts to place red arrows made of Velcro on the
walls pointing toward the Shuttle adapter hatch, creating their own global directional markings. A
location coding system has been recommended by long-duration crewmembers and used in FGB and
Node 1 modules on ISS (Novak, personal communication). Coding systems in use now, however, are not
spatially consistent between modules. These visual aids are helpful in normal working conditions, but
become virtually useless when visibility is compromised.
What happens when crewmembers are unable to see helpful visual cues? A fire on MIR in 1997 filled the modules with smoke and reduced visibility to dangerously low levels (Burroughs, 1998).
Although crewmembers did not have trouble finding escape routes, the experience alerted NASA to the
potential hazard that might be posed by spatial orientation difficulties during operational crises aboard
space stations.
Figure 1.1. MIR Space Station Schematic
Figure 1.2. International Space Station Schematic²

² ISS assembly schedule and planned configuration are subject to change.
Chapter 2: BACKGROUND
The following sections provide a brief outline of previous research regarding spatial memory on
Earth and mention related navigation research. Previous visual orientation and spatial memory
countermeasures for disorientation and motion sickness problems experienced by astronauts are also
discussed.³

³ Several sections of this chapter were adapted from a research proposal submitted to the National Space Biomedical Research Institute on June 16, 2000, by Oman et al.
2.1 Spatial Memory and Navigation in 1-G
On Earth, humans are able to keep track of their orientation and location in environments by a
process of spatial updating (Pick & Rieser, 1982). Proprioceptive cues allow humans to perform this
process reliably and without difficulty when actual physical movement is allowed (Ivanenko, Grasso,
Israel, & Berthoz, 1997; Loomis, Da Silva, Fujita, & Fukusima, 1992; Loomis, Klatzky, Golledge,
Ciccinelli, Pellegrino, & Fry, 1993; Mittelstaedt & Glasauer, 1991; Rieser, 1989; and Rieser, Guth, & Hill
1986). By contrast, spatial updating is considerably less reliable and more error-prone when imagined
movement relative to an environment is required (Klatzky, Loomis, Beall, Chance, & Golledge, 1998;
Farrell & Robertson, 1998). The frame of reference used during spatial updating needs to be considered as
well. One can imagine oneself moving relative to the environment (viewer-based), or vice versa (object-based). Most studies have shown that a viewer-based approach is advantageous for imagined spatial
updating of small-scale environments involving rotations (Wraga, Creem, & Proffitt, 1999; Simons &
Wang, 1998).
Rather than studying the process of learning, the majority of studies on imagined spatial updating have considered either the situation in which novel configurations are first encountered or the situation once asymptotic performance has been reached. When people encounter a novel environment, they first learn to identify
salient landmarks, and with experience, to associate them with specific actions between connecting routes
(Siegel & White, 1975). The sequence of landmarks and actions is eventually learned as route knowledge, which at first is based on declarative rules (Anderson, 1982) such as "Turn right at the church," but
becomes automatic with practice. With more experience, people develop survey (configurational)
knowledge of an environment, which is characterized by an ability to take shortcuts or to describe an
environment as it appears from different viewpoints. The idea that landmark, route, and survey
knowledge are developed in stages is widely accepted (McDonald and Pellegrino, 1993), but many
believe they develop concurrently rather than sequentially. Route and survey knowledge of an
environment can be acquired through direct experience (“primary knowledge”) or learned by studying
maps (“secondary knowledge”). Although both techniques are often used, the former is detailed and
nearly automatic, while the latter is thought to involve mental rotation of a cognitive map, which makes it
harder to retrieve. Survey knowledge can involve processes of mental imagery and mental rotation (e.g.,
Rieser, 1989) that may activate the same brain structures as in direct visual perception (e.g., Kosslyn et al., 1993). Mental imagery can also affect the subjective sense of orientation relative to gravity via top-down processing (Mast, Kosslyn, & Berthoz, 1999). Spatial information has also been shown, however,
to be processed and organized according to hierarchies and categories: Sadalla et al. (1980) showed that
humans employ spatial “reference points” as organizing loci for adjacent places in an environment.
Franklin and Tversky (1990) have argued that people keep track of object arrangements according to a
“spatial framework” in which body axes are used to establish referent directions for categorizing object
locations, and that they transform the model appropriately when imagining the environment from new
points of view. While imagining relative body orientations, subjects who learned relative object locations
via narrative descriptions had better spatial memory for objects aligned with asymmetric (e.g.,
front/behind) body axes than for those aligned with symmetric (left/right) body axes (Franklin and
Tversky, 1990; Bryant and Tversky, 1999; Bryant et al., 1992). Bryant et al. (2000) conducted a similar
study in which subjects learned the spatial relationships among objects located to all six sides of their
bodies via direct observation rather than narratives. They found evidence that subjects employ the spatial
framework model when locating objects from memory, but that when locating objects by direct observation a "physical transformation" model applies, which predicts longer retrieval times for targets located farther from the front of the body. They noted that the mental transformations required to imagine an environment from a new
perspective in daily life usually only involve rotations about a single (gravitational) axis.
2.2 Previous Visual Orientation and Spatial Memory Training
2.2.1 Virtual Environment Training for 1-G Applications
Spatial orientation training on Earth is preferably accomplished via direct observation of the
environment. Virtual environments (VEs), however, have emerged as potential tools for teaching spatial
tasks (e.g., Regian, Shebilske, & Monk, 1992), especially when the actual environment is inaccessible
(e.g., space station). Does spatial learning in a VE transfer to the actual environment? Wilson et al. (1997)
showed that subjects were able to acquire spatial information about a real multistory building by exploring a
to-scale desktop computer simulation of the same building. Experience in an equivalent virtual setting has
been found to produce better configurational knowledge of the actual environment than does a descriptive
narrative or map study (Koh, 1997; Witmer et al., 1996). Bliss et al. (1995) showed that civilian
firefighters could follow a route through a building to perform a rescue and exit via a different route after
training in a VE model of that building.
Relevant to the concerns of this thesis, preliminary results (Witmer et al., 1996) suggest that VE
training can also enhance the performance of shipboard firefighters under low visibility conditions.
2.2.2 Spatial Orientation Countermeasures for 0-G Applications
The need for preflight visual orientation practice for astronauts was realized after Apollo and
Skylab. Countermeasures for spatial orientation difficulties in microgravity currently include experiential
training in real and virtual mockups and parabolic flight. Crews routinely rehearse Extra Vehicular
Activity (EVA) in neutral buoyancy facilities but have recently (since 1993) added virtual reality
techniques. The SAFER training program in the Virtual Reality Development Center at Johnson Space
Center provides Shuttle and MIR crewmembers with mission-specific experience in highly detailed
virtual mockups, including a view of the earth. The latter cannot be simulated in water tanks.
Crewmembers say the practice they get visually orienting to the virtual mockups of Shuttle Payload Bay
or the MIR Station is extremely valuable (J. Hoffman, personal communication to C. Oman). This
facility, however, has not yet been used for Intra Vehicular Activity (IVA) training.
Parker, Reschke, and Harm were the first to develop formal preflight orientation training using
the DOME, a simulator that projected visual scenes on a quarter of the interior surface of a 12-ft. sphere.
The DOME was part of a Preflight Adaptation Training (PAT) research effort whose main goal was to
mitigate visuo-vestibular conflicts and to develop a laptop computer-based method for reporting
subjective self-orientation. A group of astronauts also used the DOME to practice moving about a
Spacelab interior while in gravitationally unfamiliar simulated body orientations. The effectiveness of this
training on orientation performance was not formally evaluated. Parker and Harm (1993) conducted a
retrospective study that suggests that PAT-trained astronauts may have had reduced incidence of space
sickness.
2.3 Space Station Node Experiment 1
How well can humans imagine and orient in three dimensions in situations where they are free to
turn completely upside down? Our first series of experiments (Oman et al., 2000; Shebilske et al., 2000)
studied how quickly and well subjects inside a 6-walled node chamber or a virtual equivalent could learn
to predict the relative direction of a target object while imagining a different roll orientation (including
upside down), and/or viewing direction. This involved training subjects to locate target objects from
imagined body orientations relative to a previously seen environment. Subjects were able to achieve
relatively high accuracy within 20 exposures from a given viewpoint, regardless of roll orientation. They
learned to orient from a second viewpoint with equal or greater ease. Subjects were believed to have used
a combination of learned memorization strategies and practice with visual imagery from the first
viewpoint, and transferred that knowledge to the second. Task performance measures correlated
significantly with scores on conventional paper-and-pencil tests of field independence and 2/3D figure
rotation ability. Despite limitations in field of view, resolution, tracking delays, and other factors, head-mounted virtual reality displays were shown to be as effective as a real environment for this type of
spatial memory training. Changing the physical orientation of the subjects with respect to gravity had
only minor effects on performance in either the real or the virtual environment. However, we recognized
several significant limitations of this first investigation: 1) Because of time constraints, subjects were
tested facing only two of the six walls of the chamber. It remained to be demonstrated that subjects could
maintain their orientation and spatial memory when facing any of the six walls. 2) Subjects were given
no specific advice as to how to remember the location of the target objects. 3) Subjects were not tested in
a novel second environment, to see whether they had “learned how to learn”. 4) Disoriented astronauts
arguably have to infer their orientation based on the relative direction of remembered targets, whereas in
these experiments, our subjects were required to do the reverse: they were told their orientation, and then
had to predict the direction of remembered targets.
2.4 Space Station Node Experiment 2: The Present Experiment
For astronauts entering a 6-sided space station node, spatial orientation tasks are “forward” in
nature – that is, their orientation must be quickly inferred from the visual surround. We wanted to
demonstrate the effectiveness of virtual reality as a display medium for teaching generic strategies for
memorizing and visualizing 3D spatial arrangements in any possible imagined body orientation. With
regard to this goal, the following questions are of interest: Can subjects learn to locate target pictures
while facing any of the six node surfaces? Does learning in one environment accelerate learning in a
second environment (i.e. does one “learn how to learn”)? How long is spatial memory retained? Does
strategy training help? Do certain memorization strategies correlate significantly with paper-and-pencil
test scores and/or task performance? Does 2/3D mental rotation ability correlate significantly with task
performance? Is spatial memory better for particular target locations relative to imagined body orientation
than others? If this experiment is successful, it could lead directly to a cost-effective countermeasure for
in-flight spatial disorientation in the form of preflight spatial orientation training with a virtual reality
display. By practicing effective spatial memory strategies in multiple virtual environments, both generic and mission-specific, astronauts could develop a generalizable approach to 3D spatial orientation in microgravity.
The objective of the current experiment is to study whether humans can learn to correctly identify
target directions, regardless of relative body orientation, while facing any of the six surfaces in a virtual
cubic chamber. Subjects were trained in a 3D array of objects on a forward task in which they had to infer
their relative orientation and then quickly locate a target object relative to their body. The task was then
repeated with a different array of objects to see if experience in the first array helped subjects learn faster
in the second. Subjects returned 1, 7, and 30 days afterward to see how long configurational knowledge
and spatial memory task ability were retained. Performance of one subject group who received computer-based strategy training was compared with that of a control group who were free to use whatever
strategies they wanted to perform the task, and with the results of conventional paper-and-pencil test
assessments of visual dependence and 2/3D figure mental rotation ability.
Chapter 3: METHOD
3.1 Subjects
Twenty-seven students (24 MIT engineering students, 2 Harvard law students, and 1 spouse of an
engineering student: ages 18-47) were recruited who had no history of visual, vestibular, or auditory
disease. Subjects were paid $10 per hour for participation, and those for whom participation ran over the
hour were compensated in a pro-rated manner. Data for three subjects were discarded because they did
not finish all three days of testing: One did not finish because she got a headache and felt slightly
disoriented after the training day; two simply did not return for testing. The protocol was reviewed and
approved by the MIT Committee on the Use of Humans as Experimental Subjects (COUHES).
3.2 Materials and Apparatus
3.2.1 Virtual Environment Generator
The experiment was conducted using a Virtual Environment Generator (VEG) system composed of three subsystems: a 3D graphics computer, a head-mounted display (HMD), and a head tracker.
Stereoscopic visual scenes were rendered by a graphics-accelerator-equipped workstation (two
Pentium II 400MHz processors, two Intergraph GLZ-13 graphics accelerator cards, Gateway PC) using
Python/OpenGL/VRUT software (see Appendix A for program code used).
The HMD was a color-stereo Kaiser Proview-80 consisting of an eyepiece with small LCD
displays, one for each eye, with an adjustable interpupillary distance, nominally 6 cm (see Figure 3.1).
The workstation rendered the graphics images in RGB mode at the 640 (horizontal) by 480 (vertical)
resolution of the HMD. This RGB signal was sent to an electrical box where it was converted to NTSC
format before being sent to each eye of the HMD. The field of view of each eye was 65° (horizontal) X
50° (vertical) with 100% binocular overlap. Each display was refreshed in full color at 60 Hz in field-sequential mode (60 Hz for each of red, green, and blue fields). The average graphics update rate for each
eye was approximately 30 Hz, but the update rate varied between 29 Hz and 31 Hz depending on the
complexity of the scene. The angular resolution of the Kaiser HMD was about 6 arc minutes.
Figure 3.1. Head-Mounted Display system
The third subsystem was an acoustical-inertial head tracking system (IS-600 Mark 2, Intersense, Inc., Burlington, MA), shown in Figure 3.2. The tracker measured head orientation with a
miniature, solid-state inertial measurement unit (IMU or InertiaCube) that senses angular rate of rotation
and linear acceleration along three perpendicular axes. (The linear position tracking capability of the IS-600 Mark 2 was not needed or used in this experiment.) The angular rates were integrated to obtain the orientation (yaw, pitch, and roll) of the IMU with an angular accuracy of 0.25º RMS for pitch and roll and 0.5º RMS for yaw. The IMU had an angular resolution about all axes of 0.10º RMS. The IMU was
mounted on top of the HMD and continually communicated the 3-degree-of-freedom orientation of the HMD to the workstation over an RS-232 serial line at up to 115,200 baud. This information was used to
update the user’s binocular imagery to coincide with the direction in which he or she was facing in the
virtual environment. The tracker software incorporated a feature that predicted angular motion 10 ms into
the future in order to compensate for graphics rendering delays and to minimize simulator lag.
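The rate-integration and look-ahead steps described above can be sketched in a few lines of Python. This is an illustrative toy, not the actual tracker driver: it assumes gyro rates arrive at a fixed sample interval, ignores the acoustic drift correction the IS-600 performs, and uses made-up names and numbers.

import numpy as np

DT = 1.0 / 150.0       # assumed gyro sample interval (s); illustrative only
LOOKAHEAD = 0.010      # 10 ms prediction horizon, per the text above

class ToyOrientationTracker:
    """Integrate yaw/pitch/roll rates, then extrapolate ahead for rendering."""

    def __init__(self):
        self.angles = np.zeros(3)   # yaw, pitch, roll (rad)
        self.rates = np.zeros(3)    # latest angular rates (rad/s)

    def update(self, measured_rates):
        """Integrate one sample of angular rates over the sample interval."""
        self.rates = np.asarray(measured_rates, dtype=float)
        self.angles += self.rates * DT
        return self.angles

    def predicted(self):
        """Extrapolate the orientation LOOKAHEAD seconds into the future."""
        return self.angles + self.rates * LOOKAHEAD

tracker = ToyOrientationTracker()
tracker.update([0.0, 0.2, -0.1])   # one gyro sample (rad/s)
print("render pose (yaw, pitch, roll):", tracker.predicted())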
Figure 3.2. Intersense IS600-Mark 2 Tracking System
3.2.2 Virtual Cubic Chamber and Object Arrays
The environment was a virtual cubic chamber, 1.22 m (4 ft) on a side, with a black interior. Subjects sat erect at all times. All simulated orientations inside the virtual chamber were such that the subject's line of sight was centered on the surface ahead and the viewpoint was 10 cm from the surface behind. During successive portions of the training and experiments, subjects saw three object
arrays as shown in Figure 3.3: (1) Practice array, (2) Array A, and (3) Array B. Each “object” consisted of
four identical pictures of a familiar animal (except for those in the practice array, as shown in Figure 3.3)
rotated by multiples of 90 degrees and symmetrically clustered so the aggregate could be easily
recognized from any orientation. The symmetry of the objects prevented any one object from providing
pointer cues to the other five. The spatial relationships between objects were constant. The walls and
objects were rotated together by computer; subjects were asked to interpret this as a self-orientation
change.
Figure 3.3a. Wide-angle view of the interiors of the three object arrays, as seen from the initial, or
“baseline,” simulated orientation. Arrays were named based on the fact that one contained all animals
(A) and the other included a butterfly (B) with all animals otherwise. Animals in Array A included a frog
(above), a bluebird (below), a snake (left), a deer (right), an elephant (ahead), and a rooster (behind).
Animals in Array B included a turtle (above), a lion (below), a parrot (left), a trout (right), a butterfly
(ahead), and a giraffe (behind). The practice array contained pictures of a star (above), a pound sign
(below), hearts (left), diamonds (right), spades (ahead), and clubs (behind).
Figure 3.3b. Subject Wearing HMD and Performing Experimental Task
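The geometry of the task can be made concrete with a short sketch. The code below is illustrative only (it is not the Appendix A experiment script): it encodes the Array A layout from the caption above as unit vectors in chamber coordinates, represents the subject's simulated orientation as a rotation matrix built from 90-degree turns, and maps a target's chamber direction into body axes to obtain the correct response. The coordinate conventions and function names are assumptions.

import numpy as np

# Array A in the baseline orientation (+x right, +y above, +z ahead).
ARRAY_A = {
    "frog":     (0, 1, 0),    # above
    "bluebird": (0, -1, 0),   # below
    "snake":    (-1, 0, 0),   # left
    "deer":     (1, 0, 0),    # right
    "elephant": (0, 0, 1),    # ahead
    "rooster":  (0, 0, -1),   # behind
}

BODY_LABELS = {(0, 1, 0): "above", (0, -1, 0): "below", (-1, 0, 0): "left",
               (1, 0, 0): "right", (0, 0, 1): "ahead", (0, 0, -1): "behind"}

def rotation(axis, quarter_turns):
    """Rotation matrix about a principal body axis in 90-degree steps."""
    theta = np.pi / 2 * quarter_turns
    c, s = int(round(np.cos(theta))), int(round(np.sin(theta)))
    if axis == "x":                       # pitch
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":                       # yaw
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # roll (z)

def relative_direction(target, body_to_chamber):
    """Express a target's chamber direction in the subject's body axes."""
    body_vec = body_to_chamber.T @ np.array(ARRAY_A[target])
    return BODY_LABELS[tuple(int(v) for v in body_vec)]

# Example: a 90-degree yaw (now facing the deer wall) combined with a 90-degree roll.
orientation = rotation("y", 1) @ rotation("z", 1)
print(relative_direction("frog", orientation))   # -> "right"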
3.3 Procedure
3.3.1 Experiment Timeline
Before the experiment session, subjects completed a medical history questionnaire (see Appendix
B), three different paper-and-pencil tests related to visual field dependence and mental rotation ability
(Sect. 3.3.3 and Appendix C), and, on the basis of their performance on these tests, were assigned to the "strategy training" or "control training" group, as detailed in Sect. 3.3.4 below. At the start of the first test
session, subjects were trained using a computer-aided procedure (Sect. 3.3.4 and Appendix D). At the end
of instructions, both subject groups were allowed 5 trials in the practice array to get used to the procedure.
Each subject returned 1, 7, and 30 days later for skill retention testing, described in Section 3.3.5,
consisting of a test of their memory of the configurations of the test environments, and how well they
could still accomplish the spatial task. At the end of each session, subjects were interviewed, as described
in Sect. 3.3.6. Figure 3.4 shows a graphical overview of subject activities with cross-references to
relevant sections.
Figure 3.4. Experiment Timeline
3.3.2 Spatial Learning Trials
The basic experimental task required the subjects to memorize the location of objects around
them. The “strategy training” subjects (Sect. 3.3.4) were given suggestions on how to learn to do this,
while the “control training” group was not. In each trial, they were shown the chamber in a different
orientation with only two objects (ahead and below their body) in view. By recognizing the objects, they
had to infer their orientation, and then predict the direction to a third specific target relative to their body
from this inferred orientation as quickly as possible while in darkness. Response time (RT) and indicated
target direction were measured during this phase of each trial using a 4-button keyboard. (Targets were
never presented ahead or below the subject). Next, to allow the subject to learn the environment and
recognize indication errors over successive trials, the subject was shown all objects as they would appear
from the tested orientation. The subject was instructed to look around, find the correct location of the
target, and then use the brief time remaining before the next trial to study/memorize the configuration of
the object array.
The successive views seen by the subject during each trial are shown schematically in Figure 3.5.
First, a picture of the target object was shown. Next, the orientation to be inferred was specified by
showing two surfaces ahead and below. Subjects could indicate relative target direction in body axes as
soon as these surfaces appeared using “above”, “behind”, “left”, or “right” buttons on the keyboard, but
they had up to seven more seconds after they disappeared if needed. On the first trial, the subject was
completely naïve but, in subsequent trials, usually learned the relative location of the objects to one
another via a trial-by-trial learning process. Pilot experiments showed that the task in any possible
orientation was too difficult to do without prior exposure involving simpler transformations in the object
arrays. Therefore, we had subjects do 12 “training” trials, which were meant solely to allow subjects time
to learn at least a few of the target relationships before their ability to learn was tested in any orientation.
The first 4 trials were in a baseline configuration without mental rotations; i.e. the same pictures were
presented ahead and below. For the next eight trials, the subject faced a second surface in various roll
orientations. Finally, the subject performed 24 trials facing all six surfaces in various roll orientations.
Presentation order was pseudorandomized but balanced by surface and relative target direction (see
Appendix E), since Bryant and Tversky’s experiments suggested the latter was important (as reviewed in
Sect. 2.1).
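To make the trial structure concrete, the following sketch outlines the per-trial logic in plain Python. It is not the Appendix A VRUT script: the display and response functions are simple text-mode stand-ins for the HMD rendering and 4-button keyboard handling, and the example trial data are taken from the Array A caption in Sect. 3.2.2.

import time

RESPONSE_BUTTONS = ("above", "behind", "left", "right")

def show_target(picture):
    print("Target:", picture)

def show_orientation_cue(ahead, below):
    print("Ahead:", ahead, "  Below:", below)

def get_response():
    prompt = "Target direction (%s)? " % "/".join(RESPONSE_BUTTONS)
    return input(prompt).strip().lower()

def show_full_array(name):
    print("Study phase: full", name, "shown from the tested orientation")

def run_trial(trial, log):
    """One spatial learning trial: target prompt, orientation cue, timed response, study."""
    show_target(trial["target"])
    show_orientation_cue(trial["ahead"], trial["below"])
    start = time.monotonic()
    response = get_response()                      # subject presses one of 4 buttons
    rt = time.monotonic() - start
    log.append({"target": trial["target"],
                "response": response,
                "correct": response == trial["answer"],
                "rt": rt})
    show_full_array(trial["array"])                # subject studies the configuration

# Example trial in the baseline orientation of Array A.
log = []
run_trial({"target": "frog", "ahead": "elephant", "below": "bluebird",
           "answer": "above", "array": "Array A"}, log)
print(log[-1])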
When we oriented the subject, we revealed two picture locations instead of just one as in our first
experiment.⁴ This increased the probability of correctly predicting the target direction by chance from 1/5 to 1/4,
which made our % correct measure slightly more quantized and less powerful statistically than it was in
the previous design. Since we could not reliably determine which combinations of surface and relative
target direction were most difficult, we were concerned about how well sets of 4 trials would be balanced
for perceived difficulty. We averaged data across sets of 8 trials in the present experiment instead of just
4 in order to compensate for these properties.
Following a short break, the subject repeated an identical sequence of 36 trials in a second, different environment. The order of presentation of the environments was reversed for half the subjects to control for intrinsic differences in "learnability" of the object arrays.

Figure 3.5. Schematic of 3D Spatial Learning Experiment timeline for each trial. The subject was shown the target picture, then the pictures ahead and below; during the "Memory" phase the subject had up to 7 more seconds to indicate the relative target direction in body axes. Finally, during the "Study" phase, all 6 pictures were shown. The subject found the target and then studied the array for the remainder of the time available.

⁴ In our first experiment, subjects imagined entering the node through a specified hatch (the picture shown to them) in a particular body orientation, defined by a clock hand.
3.3.3 Paper-and-Pencil Spatial Ability Tests
In our earlier experiments (Sect. 2.3), task performance was found to correlate with scores on three conventional tests: the Group Embedded Figures Test (GEFT) of visual field dependence (Witkin et al., 1971) and the Card Rotations and Cube Comparisons tests of two- and three-dimensional mental rotation ability (Eckstrom et al., 1976; Appendix C). In the GEFT, the subject is asked to identify perceptually a target
geometric figure embedded within an irrelevant stimulus content. In the Card Rotation tests, a drawing of
a card cut into an irregular shape is depicted. To its right are eight other drawings of the same card,
sometimes merely rotated and sometimes turned over to its other side. Subjects must indicate for each
drawing whether the card has been rotated (i.e., it is the same as the original) or turned over (i.e., it is
different from the original). In the Cube Comparison test, subjects are presented with two drawings of a
cube. Then, assuming no cube can have two faces that are alike, the subject must indicate whether the two
drawings could possibly be of the same cube or could not be of the same cube. We expected that these
tests might also be predictive of performance in the present experiments, and provide a useful metric for
balancing strategy and control training groups.
3.3.4 Strategy Training and Control Training
Based on experience piloting previous experiments, we hypothesized that subjects might perform better if they were instructed a) to remember the targets from a "baseline" orientation, b) to memorize opposite pairs of objects, and c) to remember the relative orientation of object pairs by memorizing at least one object "triad". This approach was incorporated into a set of computer-based written instructions (built with the Microsoft PowerPoint presentation manager), shown in detail in Appendix D.1. Half the
subjects used this “strategy training” presentation. The other half used a “control training” presentation
(see Appendix D.2), which introduced neither the "baseline orientation" nor the logical grouping of objects into "pairs" and "triads" but, as in the strategy group instructions, encouraged the subjects to use mental imagery and instructed them how to report their responses. The training
procedure typically took 15 minutes to complete. Subjects were assigned to the strategy or control group
based on their scores on the Card Rotation and Cube Comparison tests, so that the two groups were approximately balanced (see Table 1-1). The strategy group received 5 more orientation demonstrations
than the controls.
Table 1-1. Group Scores on Paper and Pencil Tests

            Card Rotation Test     Cube Comparison        GEFT
Training    Median     Range       Median     Range       Median     Range
Strategy    121.5      82.0        24.0       34.0        18.5       16.0
Control     114.5      110.0       25.0       36.0        21.0       13.0

*Note: Some scores were discarded as mentioned in Sect. 3.1.
3.3.5 Retention Tests
Subjects returned 1, 7, and 30 days later (retention days) for retesting. We first wanted to know
how well they could reconstruct the relative orientation of the objects in the array. So during each
“retention” test session, we first tested the accuracy of their configurational knowledge of both previously
learned environments, in the order in which these environments were learned. Subjects were asked to
“pick-and-place” the target objects from a palette onto the surfaces of the virtual chamber to reconstruct
the appearance of that chamber in the baseline orientation. They were free to take as much time as they
needed to place the objects in the order they chose, and then to revise their choices.
Configuration tests were scored using a convention that classified each response in terms of its
“geometric distance” from the correct response (i.e. the size of its error; see Appendix F). Responses that
required geometrically more complex paths from the correct response were considered more erroneous
and given higher (i.e. worse) scores. For example, a perfect reconstruction would be given a score of 0. A
deviation by single rotation was given a better (i.e. lower) score than an inversion or two rotations, and
any single transformation (rotation or inversion) was given a better score than any combination of itself
with another. A double rotation was given a better score than a double inversion. Responses that
contained more complex transformations (e.g., exchange of adjacent surfaces) beyond combinations of
simple rotations and inversions were classified as “other”. Any rotation (roll) about the z-axis received a
better score than any yaw (y-axis), and any yaw scored better than any simple pitch (x-axis). Similarly,
simple inversions through the z, y, and x (roll, yaw, and pitch) axes were considered more geometrically
different from the baseline orientation than inversions through axes earlier in that list.
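The ordering described above can be summarized as an ordinal scale over error categories. The sketch below is an illustrative simplification of the full convention in Appendix F: the category names, the exact rank values, and the placement of the mixed "rotation plus inversion" case are assumptions, and it presumes each reconstruction has already been classified.

# Illustrative ordinal scoring for the configuration test (see Appendix F for
# the actual convention). Lower score = reconstruction closer to correct.
ERROR_CATEGORIES = [
    "perfect",                           # exact reconstruction
    "single rotation about z (roll)",    # single rotations, roll best ...
    "single rotation about y (yaw)",
    "single rotation about x (pitch)",
    "single inversion through z",        # ... then single inversions, z best
    "single inversion through y",
    "single inversion through x",
    "double rotation",                   # combinations score worse than singles
    "rotation plus inversion",           # placement between doubles is assumed
    "double inversion",
    "other",                             # e.g., exchange of adjacent surfaces
]

SCORE = {category: rank for rank, category in enumerate(ERROR_CATEGORIES)}

def score_reconstruction(category):
    """Return the ordinal error score for a classified reconstruction."""
    return SCORE[category]

print(score_reconstruction("perfect"))                        # 0
print(score_reconstruction("single rotation about y (yaw)"))  # 2
print(score_reconstruction("double inversion"))               # 9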
Once their configurational knowledge was tested, their spatial ability for the second array was
then retested using a procedure similar to that employed in the original spatial learning trials. Subjects,
however, were not shown the object configuration between trials. Thus, we showed subjects the correct
configuration of the second array before this test began in order to prevent poor memory of the
configuration from confounding our measurement of retained spatial ability. We did not show subjects the
correct configuration of the first array because we wanted to see how long their configurational
knowledge would last when no feedback or practice was given.
On each retention day, each subject twice completed a block of 24 pseudorandomly ordered trials
(6 sets) identical to those encountered on the training day (i.e. 48 trials during which all six surfaces were
faced twice in each roll orientation). Subjects completed 48 trials instead of just 24 because we thought
spatial ability might continue to improve with extra practice, despite not ever getting to view the full
object array between trials.
3.3.6 Exit Interviews
After the tasks for each day were completed, subjects were asked multiple-part, open-ended
questions regarding the learning strategies they used and the relative difficulty experienced between
environments (training day) and between days (retention days). The investigator interviewed subjects
separately on the training day and on each of the retention days.
Training Day Questions:
(1) “Do you think your ability to do the spatial memory tasks improved in the second
training environment relative to the first?”
(2) “What strategy (or strategies) did you use in the first environment? Did you use these
same strategies in the second environment? If not, what was different about the strategies
you used in the second environment?”
(3) “Were you ever able to mentally visualize the pictures around you without the help of
any rules? I.e. Could you ‘see’ the pictures in your ‘mind’s eye’?”
Retention Day Questions:
(1) “What was the relative difficulty of today’s experience compared to that of the previous
day? What was harder/easier?”
(2) “How do you think the time away affected your ability to perform the tasks?”
(3) “Did you use the same strategies as you did on previous days?”
3.4 Experiment Design and Data Analysis
For the spatial learning trials, dependent variables were percent correct (% correct) indications
and response time (RT) during the memory phase of each trial. The independent variables set, environment, and day were varied within subjects, and training (strategy vs. control) was varied between subjects (order
was counterbalanced). On the training day, we divided sets into the first (sets 1-3) and second (sets 4-6)
arrays seen.⁵ To assess effects, measures were collapsed across array presentation order within training
group. For the retention session configuration tests, median score and mean time to completion were
calculated, by array, across presentation order within training group.
The quantized % correct measure was inappropriate for statistical tests requiring normally
distributed data, so we used nonparametric Friedman ANOVA and Kruskal-Wallis tests to assess
differences within and contrasts between groups in % correct performance, respectively. Stronger training
effects might have been observed if we had shown only one picture, but showing only one symmetrical
object does not provide adequate orientation information for the “forward” task we used. Learning trends
were analyzed via the Page test (Conover, 1999), which detects increasing or decreasing monotonic trends
for group data across sets. Configuration test scores were compared within and between group via the
aforementioned nonparametric tests.
On the other hand, we used the stronger statistical tests (repeated measures ANOVA, paired and
two independent sample t-tests) to evaluate improvement within and contrasts between group in RT
because those set means were approximately normally distributed.
Statistical analysis was performed using conventional packages (Systat v9.0, SPSS, Inc.;
StatXact-4, Cytel, Inc.).
⁵ We were mainly interested in performance in the last 24 trials and therefore omitted the 12 training trials from the analyses.
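As an illustration, equivalent nonparametric tests are available in the open-source SciPy library (not the Systat/StatXact packages used here, whose exact-test handling differs); the sketch below, with made-up numbers purely for illustration, shows how per-set % correct data could be submitted to a Friedman test, a Kruskal-Wallis contrast, and a Page trend test.

import numpy as np
from scipy import stats

# Hypothetical % correct scores: rows = subjects, columns = sets 1-3 of one array.
strategy = np.array([[50, 75, 75], [75, 75, 100], [50, 75, 100],
                     [75, 100, 100], [50, 50, 75], [75, 100, 100]])
control = np.array([[25, 50, 75], [50, 50, 75], [50, 75, 75],
                    [25, 75, 100], [50, 75, 75], [75, 75, 100]])

# Friedman test: within-group differences in % correct across sets.
chi2, p = stats.friedmanchisquare(strategy[:, 0], strategy[:, 1], strategy[:, 2])
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.3f}")

# Kruskal-Wallis test: between-group contrast on a single set (here, set 3).
h, p = stats.kruskal(strategy[:, 2], control[:, 2])
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

# Page test for an increasing monotonic trend across sets (SciPy >= 1.7).
res = stats.page_trend_test(strategy)
print(f"Page: L = {res.statistic:.1f}, p = {res.pvalue:.4f}")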
Chapter 4: RESULTS
4.1 Evidence of Learning
Figures 4.1-4.3 show the mean % correct and RT, by set and target group, within array, for both the strategy and control groups on the training day. The 12 trials (4 upright and 8 facing a
second surface) before testing in each environment (i.e. before set 1 and between sets 3 and 4) were
essentially training in the new array and were not included in the analyses.
When all targets were taken together, a Friedman test revealed a significant difference in %
correct performance between sets in both the first and second arrays for the control group (first array: χ2 =
9.80, df = 2, exact p < .01; second array: χ2 = 6.78, df = 2, exact p < .05). [Note: The StatXact software
computes many tests exactly rather than by the more usual asymptotic approximation by the chi-squared
distribution. When the indication “exact” is noted below, it refers to this property of the StatXact
calculation.] Although no significant improvement was found in % correct between sets in either array for
the strategy group (p = .05 criterion), the Page test, whose characteristic statistic is denoted “pa(x)”
below, revealed a significant increasing trend across sets in the first array for this group [pa(x) = 1.83, df
= 2, exact p < .05]. A repeated measures ANOVA on RT performance revealed significant main effects of
array and set [F(1,21) = 16.87, p < .01 and F(2,42) = 6.35, p < .01, respectively]. Training, array
presentation order, and interaction effects were not significant.
Most subjects learned to do the task within 36 trials in either test array. The control group showed
learning in % correct in both arrays, while the strategy group showed it in the first array only. Both
groups showed learning in RT in both environments.
4.2 Transfer of Learning from the First Array to the Second
As we chose to define it, transfer of learning, or “learning how to learn,” required that learning
occur faster (i.e. faster rise or fall time) in the second environment, but not necessarily that a higher level
of asymptotic performance be achieved (since one array might be intrinsically more difficult to learn than
the other, despite our intention to make them similar). One indication of transfer of learning, then, is that
subjects perform better, by set, in the earlier sets when training in the second environment than they did in
the first. Table 4.1 indicates with asterisks the sets in which performance significantly improved from the
first object array to the second array.
We contrasted performance between arrays, by first (set 1 vs. set 4), second (set 2 vs. set 5), and third (set 3 vs. set 6) sets. When all targets were taken together, paired t-tests revealed that RT performance was
better in the second array in the first and second sets for the strategy group [first RT t(11) = 3.1, p < .01
and second RT t(11) = 2.7, p < .05], but only in the second set for the control group [t(11) = 3.67, p <
.01]. Significant differences in % correct performance between arrays could not be demonstrated in any
set for either training group when all targets were taken together.
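A sketch of these transfer contrasts, pairing each subject's set in the first array with the corresponding set in the second (set 1 vs. 4, 2 vs. 5, 3 vs. 6) via paired t-tests, is given below; the RT matrix and all names in it are hypothetical placeholders.

# Sketch: per-set transfer contrasts between arrays using paired t-tests (placeholder data).
import numpy as np
from scipy import stats

def transfer_contrasts(rt_by_set):
    # rt_by_set: (n_subjects, 6) mean RT for sets 1-6; sets 4-6 are the second array.
    results = {}
    for first, second in [(0, 3), (1, 4), (2, 5)]:              # 0-based set indices
        t, p = stats.ttest_rel(rt_by_set[:, first], rt_by_set[:, second])
        results[f"set {first + 1} vs set {second + 1}"] = (t, p)
    return results

rng = np.random.default_rng(2)
fake_rt = rng.normal([5.5, 5.0, 4.5, 4.2, 4.0, 3.9], 0.8, size=(12, 6))
for pair, (t, p) in transfer_contrasts(fake_rt).items():
    print(f"{pair}: t(11) = {t:.2f}, p = {p:.3f}")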
When relative-target-direction groups were taken separately (see section 4.8), a Friedman test
revealed that % correct performance for above/behind targets in the second set of the second array was
better than in the first for the strategy group (χ2 = 6.0, df = 1, exact p = .0313). Paired t-tests revealed that
RT for above/behind targets in the first and second sets was significantly shorter in the second array than
in the first for the strategy group [first set: RT t(11) = 2.9, p < .05 and second set: RT t(11) = 3.0, p < .05],
but this effect appeared only in the second set for the control group [t(11) = 2.3, p < .05]. No significant
transfer in performance for left/right targets was seen in either measure for either training group.
We also tested for improving trends that continued across the change in array, suggesting that
learning transferred from the first array to the second. When all targets were taken together, Page tests
across all six sets within training groups revealed significant improving trends in both % correct and RT
performance for both training groups [control % correct: pa(x) = 2.6, exact p < .01 and RT: pa(x) = -4.6,
exact p < .0001; strategy % correct: pa(x) = 2.3, exact p < .01 and RT: pa(x) = -3.5, exact p < .001].
When relative-target-direction groups and array-presentation-order subgroups were taken
separately, interesting differences emerged. Tables 4.2 and 4.3 summarize set means and significance of
trends found on the training day by subgroup. Shaded regions correspond to sets across which significant
trends were found. Page tests revealed that three out of four groups had significant trends of
improvement in both % correct and RT for above/behind targets – no significant trends in either measure
for above/behind targets were seen for the control group who saw array order 2 (Array A first). Only the
strategy group who saw Array A first had significant trends of improvement that continued across the
change in array in both % correct and RT for left/right targets.
In summary, both training groups showed significant transfer in RT performance for
above/behind targets, but not for left/right targets. The transfer, however, occurred in earlier sets and in
more sets for the strategy group than for the control. Corresponding transfer in % correct was found for
above/behind targets in the second set for the strategy group. The strategy-trained subjects who saw Array
A first had the most compelling transfer: they were the only ones to show significant learning trends in
both % correct and RT for left/right targets that continued across the change in environment. This is a
sign of the effects that training was intended to produce.
Table 4-1 Significant Transfer in Mean RT and % Correct by Training, Target Group, and Set

                                 Above/Behind targets               All targets
Measure     Training Group       First Set  Second Set  Third Set   First Set  Second Set  Third Set
Mean RT     Strategy             *          *           -           *          *           -
Mean RT     Control              -          *           -           -          *           -
% Correct   Strategy             -          *           -           -          -           -
% Correct   Control              -          -           -           -          -           -

* : Performance was significantly (p < .05) better in the second array than in the first.
Table 4-2 Trends in Performance by Training, Array Presentation Order, and Set on the Training Day for
Above/Behind Targets
[Table: set means of % correct and mean RT (sec) for sets 1-6 (first and second arrays), by training group
and array presentation order; shaded cells, annotated with Page statistics (pa(x) and exact p), mark the
sets across which significant improving trends were found.]
Table 4-3 Trends in Performance Means by Training, Array Presentation Order, and Set on the Training Day
for Left/Right Targets
[Table: set means of % correct and mean reaction time (sec) for sets 1-6 (first and second arrays), by
training group and array presentation order; shaded cells, annotated with Page statistics (pa(x) and exact
p), mark the sets across which significant trends were found.]
4.3 Effect of Strategy Training
A Kruskal-Wallis test across array presentation order revealed that the strategy group had
significantly better % correct performance for above/behind targets than the control group in the first and
second sets of the second array (set 4 % correct: χ2 = 5.3, exact p < .05; set 5 % correct: χ2 = 9.1, exact p
< .01). A corresponding training effect was found in RT performance in the second set of the second array
(χ2 = 7.0, exact p < .01). The control group had better RT performance for left/right targets than the
strategy group in the last set of both arrays (set 3: χ2 = 4.1, exact p < .05; set 6: χ2 = 4.3, exact p < .05).
As shown in Table 4.4, strategy training helped for above/behind targets in the early sets of both
test environments. Strategy training, however, hindered RT performance for left/right targets in the last
set of both environments.
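The between-group contrasts in this section are Kruskal-Wallis tests with two groups (hence df = 1 throughout). A minimal sketch with placeholder scores:

# Sketch: strategy vs. control comparison of % correct within one set (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
strategy_pct = rng.normal(90, 8, size=12)    # hypothetical set-4 % correct, strategy group
control_pct = rng.normal(75, 12, size=12)    # hypothetical set-4 % correct, control group

h, p = stats.kruskal(strategy_pct, control_pct)
print(f"Kruskal-Wallis chi2 = {h:.1f}, df = 1, p = {p:.3f}")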
Table 4-4 Training Effects on Performance by Relative Target Direction, Array, and Set on the Training Day

                        Above/Behind targets               Left/Right targets
Measure     Array       First Set  Second Set  Third Set   First Set  Second Set  Third Set
Mean RT     First       -          -           -           -          -           C
Mean RT     Second      -          S           -           -          -           C
% Correct   First       -          -           -           -          -           -
% Correct   Second      S          S           -           -          -           -

S = Strategy-trained subjects had significantly better performance than control.
C = Control subjects had significantly better performance than strategy-trained.
Footnote 6: There was 1 degree of freedom for all statistical results reported in this section.
[Figure 4.1. Mean Performance for All Targets, by Set, within Training Group on the Training Day. Upper
panel: % correct, lower panel: RT; error bars are +/- 1 SEM.]
[Figure 4.2. Mean Performance for Left/Right Targets, by Set, within Training Group on the Training Day.
Upper panel: % correct, lower panel: RT; error bars are +/- 1 SEM.]
[Figure 4.3. Mean Performance for Above/Behind Targets, by Set, within Training Group on the Training Day.
Upper panel: % correct, lower panel: RT; error bars are +/- 1 SEM.]
4.4 Effect of Using Mental Imagery
Whether or not subjects used mental imagery apparently influenced the subjects' performance and their
response to training. In response to Question 3 of the Exit Interview, four subjects from each training
group (n = 8) reported using mental imagery in at least one of the environments on the training day. We
referred to this group as the imagery group. The remaining subjects (n = 16) claimed to use mental rules
instead, or said that they found it too hard to use mental imagery. We called this group the non-imagery
group. Figure 4.4 shows mean % correct for above/behind and left/right targets, by set, within imagery
group across training groups. Figure 4.5 shows the corresponding means, by set, within training group,
among non-imagery subjects. Table 4.5 summarizes the effect of claimed imagery use on % correct, and Table
4.6 summarizes training effects found among non-imagery subjects.
Kruskal-Wallis tests revealed that the imagery group had significantly higher scores on all three
paper-and-pencil tests than the non-imagery [Card rotations: χ2 = 5.7, p(Monte Carlo estimate) < .05;
Cube comparisons: χ2 = 8.2, p(Monte Carlo estimate) < .005; GEFT: χ2 = 4.0, p(Monte Carlo estimate) <
.05]. When relative-target-direction groups were taken separately, the imagery group had significantly
higher % correct performance for left/right targets than the non-imagery group in the last set of the first
array and in all three sets of the second array (set 3: χ2 = 4.0, exact p < .05; set 4: χ2 = 4.5, exact p < .05;
set 5: χ2 = 5.6, exact p < .05; set 6: χ2 = 8.7, exact p < .005). No significant differences in performance for
above/behind targets between imagery groups were found. When all targets were taken together, the
imagery group had significantly better % correct performance in all three sets of the second array [set 4:
χ2 = 5.2, p(Monte Carlo estimate) < .05; set 5: χ2 = 5.4, p(Monte Carlo estimate) < .05; set 6: χ2 = 9.0,
p(Monte Carlo estimate) < .005].
Footnote 7: There was 1 degree of freedom for all statistical results reported in this section.
[Figure 4.4. Mean % Correct, by Set, within Imagery Group across Training Groups on the Training Day.
Upper panel: above/behind targets, lower panel: left/right targets; error bars are +/- 1 SEM.]
Table 4-5 Effect of Claimed Use of Mental Imagery on % Correct by Set

                    First Array                  Second Array
Target Group        Set 1   Set 2   Set 3        Set 4   Set 5   Set 6
Left/Right          -       -       *            *       *       *
All                 -       -       -            *       *       *

* : The imagery group had significantly [p < .05] better % correct performance than the non-imagery group.
We conducted a post-hoc analysis of the effect of training among imagery groups. Among non-imagery subjects, those who received strategy training had better % correct performance than the control
subjects in the first and second sets of the second array (Kruskal-Wallis, set 4: χ2 = 9.1, exact p < .01; set
5: χ2 = 4.8, exact p < .05). Similarly, when relative-target-direction groups were taken separately, those
who received strategy training had better % correct performance for above/behind targets than the control
subjects in the first and second sets of both arrays (Kruskal-Wallis, set 1: χ2 = 4.5, exact p < .05; set 2: χ2
= 4.5, exact p < .05; set 4: χ2 = 5.1, exact p < .05; and set 5: χ2 = 8.4, exact p < .01). We found no effect
of training among non-imagery subjects for left/right targets.
[Figure 4.5. Mean % Correct for Non-imagery Subjects, by Set, within Training Group. Upper panel:
above/behind targets, lower panel: left/right targets; error bars are +/- 1 SEM.]
Table 4-6 Effect of Strategy Training on % Correct by Set among the Non-imagery Group

                    First Array                  Second Array
Target Group        Set 1   Set 2   Set 3        Set 4   Set 5   Set 6
Above/Behind        *       *       -            *       *       -
All                 -       -       -            *       *       -

* = The strategy group had significantly [p < .05] better % correct performance than the control group
among the non-imagery subjects.
4.5 Retention Testing
4.5.1 Spatial Ability Test
Figures 4.6 – 4.8 show % correct and RT performance for the second array, by set, over days
within training groups. Friedman ANOVA tests by day and set yielded a significant difference in %
correct between days in set 3 and 6 for the control group (set 3: χ2 = 7.6, df = 3, exact p < .05 and set 6: χ2
= 8.4, df = 2, exact p < .05). Similarly, a significant difference in % correct was revealed between days in
set 1 and 6 for the strategy group (set 1: χ2 = 9.7, df = 3, exact p < .05 and set 6: χ2 = 8.9, df = 2, exact p <
.01). Results for RT performance were analyzed via repeated measures ANOVA by training, array
presentation order, retention day, and set. Significant main effects of set and array presentation order were
found [set: F(5,105) = 8.38, p < .001; array presentation order: F(1,21) = 7.20, p < .05]. The overall
improvement in RT performance for above/behind targets between day 1 and 30 was significant [F(3,21)
= 3.09, p < .05].
Page tests across all sets (7-24) of the retention days within training group revealed that both
groups had a significant trend of improvement in % correct across all eighteen sets of retention testing
[control group: pa(x) = 3.3, p(Monte-Carlo estimate) < .0001 and strategy group: pa(x) = 3.3, p(Monte-Carlo estimate) < .0005]. Similarly, the strategy group had a corresponding significant trend of
improvement in RT over all eighteen sets of retention days [pa(x) = -5.7, p(Monte-Carlo estimate) <
.0001], but the control group did not. When days were taken separately, the strategy group had a
significant trend of improvement in % correct across all six sets of both day 1 and day 30 [day 1: pa(x) =
1.8, exact p < .05 and day 30: pa(x) = 1.9, exact p < .05], but not on day 7. By contrast, the control group
had a significant trend in % correct over all six sets on day 30 only [pa(x) = 2.1, exact p < .05]. The
strategy group had significant trends of improvement in RT across all six sets on each retention day [day
1: pa(x) = -2.1, exact p < .05; day 7: pa(x) = -2.8, exact p < .005; day 30: pa(x) = -2.9, exact p < .005]. By
contrast, the control group had similar trends only on day 7 and day 30 [day 7: pa(x) = -3.0, exact p <
.005 and day 30: pa(x) = -3.8, exact p < .0001].
Spatial ability improved over days in both % correct and RT within both groups, especially
between day 1 and 30. There was consistent improvement in RT over days and within each day for the
strategy group, while the control group showed it only within days 7 and 30. Spatial ability remained
strong after 30 days.
4.5.2 Effect of Layoff on Spatial Ability
To assess the effect that layoff between days had on performance, we compared performance on
the last set of one day with the first set of the next (set 6 vs. 7, 12 vs. 13, and 18 vs. 19). When all targets
were taken together, paired t-tests revealed that the control group had significantly longer RT in the first
set on day 30 than in the last set on day 7 [t(11) = -4.72, p < .005]. Similarly, the strategy group showed
significantly lower % correct in the first set on day 1 than in the last set on the training day [Friedman, χ2
= 6.40, df = 1, exact p < .05]. When relative-target-direction groups were taken separately, a paired t-test
revealed that the strategy group had significantly shorter RT for left/right targets in the first set on day 1
than in the last set on the training day [t(11) = 3.21, p < .01]. Significant differences in % correct could
not be demonstrated for either target group separately.
[Figure 4.6. Mean Performance for All Targets, by Set, over Days within Training Group. Training-day data
(sets 4-6) are for the second array. Upper panel: % correct, lower panel: RT; error bars are +/- 1 SEM.]
[Figure 4.7. Mean Performance for Left/Right Targets, by Set, over Days within Training Group. Training-day
data (sets 4-6) are for the second array. Upper panel: % correct, lower panel: RT; error bars are +/- 1 SEM.]
[Figure 4.8. Mean Performance for Above/Behind Targets, by Set, over Days within Training Group.
Training-day data (sets 4-6) are for the second array. Upper panel: % correct, lower panel: RT; error bars
are +/- 1 SEM.]
4.5.3 Configurational Knowledge Test
As shown in Tables 4.7 – 4.9, subjects retained configurational knowledge for the second
environment better than they did for the first. This is probably because subjects received practice and
were shown the correct configuration for the second array. Without feedback or practice, the strategy
group’s configuration test scores and completion times for the first array degraded over time, while the
control group had poor scores on all three days. Strategy-trained subjects maintained configurational
knowledge better over time – especially when the first training environment contained Array A. In
general, both groups’ configurational knowledge was worse on day 7 and/or day 30 than on day 1.
Both groups still had good configurational knowledge for the second array after 30 days. The
strategy group’s configuration scores worsened over days for the first array [Page; pa(x) = 1.8, df = 2,
exact p < .05], but they had better scores than the control group for this array when all days were taken
together (Kruskal-Wallis; χ2 = 4.2, df = 1, p < .05). The strategy group’s time to completion for the
second array improved significantly over the three retention days [Page; pa(x) = -2.6, df = 2, exact p <
.005]. We also separated array presentation order subgroups. Among subjects who saw Array A first,
those who received strategy training had better configuration test scores for both arrays than control
subjects when all days were taken together (first array: χ2 = 7.0, df = 1, p < .01; second array: χ2 = 5.1, df
= 1, p < .05). We saw no such effect among subjects who saw Array B first.
Only two subjects (both in the control group) had configuration responses for the second array on
any day that were too complex for our scoring system, and they were among those who had the hardest
time learning the task. On the other hand, half of all subjects (twice as many control as strategy-trained)
gave configuration test responses that could not be classified on at least one of the days. In
general, most subjects’ erroneous responses were consistent over days, while some got worse on day 7
and/or day 30 after correctly reconstructing the configuration on day 1.
Footnote 8: This was readily apparent to the investigator via direct observation during experimental sessions.
Table 4-7 Configuration Knowledge for Training Groups by Day and Object Array

                                 Median Score                   Mean RT (sec)
Retention Day   Training         First Array   Second Array     First Array   Second Array
1               Strategy         0.0           0.0              109.4         91.7
                Control          115.0         4.5              93.5          67.7
7               Strategy         12.0          0.0              70.2          61.3
                Control          130.0         0.0              83.8          51.9
30              Strategy         14.5          0.0              95.1          45.1
                Control          130.0         5.0              86.8          54.2

Note: Possible scores ranged from 0 (a perfect reconstruction) to 130 (other). Of the total number of scores
given, 21.5% of them were classified as "other".
Table 4-8 Configuration Test Measurements for the First Array by Subject and Day

                              Completion Time (sec)               Score
Training Group   Subject      Day 1      Day 7      Day 30       Day 1   Day 7   Day 30
Control          1            35.331     30.236     68.985       0       0       0
                 2            58.222     36.266     69.079       129     130     0
                 3            165.343    86.313     51.423       130     130     130
                 4            97.249     120.785    63.966       130     130     130
                 5            206.044    235.139    107.689      101     130     130
                 6            33.109     49.032     33.844       0       0       0
                 7            71.623     50.735     63.907       130     130     130
                 8            66.689     34.608     26.844       1       1       1
                 9            195.078    184.545    151.749      130     130     130
                 10           89.893     57.1229    59.268       0       0       130
                 11           82.826     91.908     195.263      130     130     130
                 12           20.563     28.982     150.233      0       0       0
Strategy         13           111.87     64.203     86.142       0       12      12
                 14           46.37      56.422     130.405      0       0       0
                 15           125.89     65.548     65.984       0       12      12
                 16           121.45     86.534     82.841       1       25      25
                 17           70.3       65.17      65.845       0       0       0
                 18           44.467     41.062     110.609      0       0       0
                 19           266.797    134.125    80.252       130     130     45
                 20           57.015     35.203     218.984      0       0       130
                 21           108.2      42.311     70.984       0       0       130
                 22           114.455    35.545     41.44        17      17      17
                 23           176.438    117.875    66.439       130     130     130
                 24           69.514     98.124     120.934      0       102     12
Table 4-9 Configuration Test Measurements for the Second Array by Subject and Day

                              Completion Time (sec)               Score
Training Group   Subject      Day 1      Day 7      Day 30       Day 1   Day 7   Day 30
Control          1            30.593     44.376     44.422       0       0       0
                 2            34.534     23.048     37.796       9       9       35
                 3            73.128     52.092     47.721       0       12      11
                 4            113.372    78.858     51.971       130     130     0
                 5            161.952    91.597     68.546       130     130     130
                 6            41.639     33.234     93.797       9       0       10
                 7            64.206     53.485     17.86        0       0       0
                 8            35.141     32.92      18.925       0       0       0
                 9            129.673    62.55      68.719       11      11      11
                 10           34.043     21.828     22.983       0       0       0
                 11           79.268     88.421     151.111      49      0       10
                 12           14.64      41.063     26.328       0       0       0
Strategy         13           55         56.671     64.188       0       0       0
                 14           43.3       28.003     31.688       0       0       0
                 15           112.71     41.468     87.219       0       0       0
                 16           119.27     275.294    40.578       9       9       9
                 17           60.33      42.436     58.847       0       0       0
                 18           80.612     30.331     36.671       0       0       0
                 19           211.829    45.877     42.017       48      129     7
                 20           57.375     32.906     22.219       0       0       0
                 21           86.892     38.577     33.266       11      0       0
                 22           44.843     21.937     30.609       0       0       0
                 23           61.672     68.436     53.827       102     102     11
                 24           166.127    54.172     41.159       24      1       24
4.6 Predictors of Task Performance
The correlation between steady-state task performance and paper and pencil tests was assessed
for both arrays on the training day. As shown in Table 4.10, the control group’s steady-state % correct
results in both arrays correlated significantly (t test, df = 10, p < .05) with results of the GEFT, Card
rotation, and Cube comparison tests. The strategy group’s steady-state % correct in the first array did not
correlate significantly with any of the paper and pencil tests, but their steady-state % correct for the
second array correlated significantly with their GEFT scores only. The strategy group’s steady-state RT in
both arrays correlated significantly with the Card rotation test, but not with the GEFT or the Cube
comparison test.
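One such correlation could be computed as sketched below with SciPy's Spearman routine; the score vectors are hypothetical placeholders (n = 12 subjects, so df = n - 2 = 10 as in Table 4.10).

# Sketch: Spearman rank correlation between a paper-and-pencil test and task performance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
card_rotation = rng.normal(100, 20, size=12)                          # placeholder test scores
steady_rt = 6.0 - 0.01 * card_rotation + rng.normal(0, 0.5, size=12)  # placeholder steady-state RT

rho, p = stats.spearmanr(card_rotation, steady_rt)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")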
Table 4-10 Spearman correlation coefficients for paper and pencil tests; *** p < .005, ** p < .01, * p < .05;
df = 10
[Table: Spearman correlation matrix among GEFT, Card rotation, and Cube comparison scores and steady-state
% correct and RT in the first and second arrays, reported separately for the strategy and control groups.]
Note: Although array-order subgroups were not intentionally balanced for mental rotation ability,
Kruskal-Wallis tests did not reveal any significant differences in paper-and-pencil test scores between
array-order groups or between subgroups.
4.7 Exit Interview Responses
Tables 4.11 and 4.12 show the general response patterns and the frequency of each within training
groups for the training day and retention day interviews, respectively. Exit Interview responses were
retrospectively categorized. The parts of Question 2 for the training day were taken together due to a lack
of interesting data for them separately. General responses corresponding to all three questions were
reported for the retention days because they were not interesting when separated. Some of the control
subjects reported using a “ring” strategy, which involved choosing four objects whose locations relative
to the body lay in a common plane (e.g., left-ahead-right-behind or above-ahead-below-behind), and
memorizing the sequence of objects going around the ring. This ring was not always perpendicular to a
“floor/ceiling” axis. The ring strategy might have been encouraged by the orientation changes
experienced in the block of 8 trials facing a second surface since they only occurred in roll.
Table 4-11 Summary of Training Day Exit Interview Responses for Training Groups

1. Do you think your ability to perform the task improved in the second training environment relative to
the first?
   Strategy (N=12): YES - 11 ("got used to thinking about the spatial relationships"; "familiarity with the
   task"); NO - 2 (helmet discomfort distracted her; thought she did better on the 1st array)
   Control (N=12): YES - 11*; NO - 2 (got tired in the 2nd set, so it was harder to use her ring strategy;
   already mastered the task in the 1st array)
   *One control subject claimed that he had to "change his mental model of where certain colors were in the
   2nd set" relative to his body in the baseline orientation.

2. What strategies did you use [during training]?
   Strategy (N=12): Pairs + Visualization - 4; Pairs + Baseline + Triads (successful use of right hand
   rule) - 5; Pairs + Baseline + Triads (trouble implementing right hand rule) - 4
   Control (N=12): Pairs + Visualization - 4 (one said he "mentally rotated the room around him"); Pairs +
   Baseline Orientation - 5; Floor/Ceiling + Ring - 3
   *The same control subject said he "never figured out that snakes and deer were opposites [in Array A]
   because of color similarity."

3. Were you ever able to mentally visualize the pictures around you without the help of any rules?
   Strategy (N=12): NO - 9 (dependent on rules); YES - 1 (1st array only: helmet distracted her in the 2nd
   array); YES - 3 (2nd array only)
   Control (N=12): NO - 8 (dependent on rules); YES - 2 (used visualization the entire time with baseline
   orientation as reference); YES - 2 (2nd array only)
Table 4-12 Summary of Retention Day Exit Interview Responses for Training Groups

Questions asked on each retention day: (1) What was the relative difficulty of today's experience compared
to that of the previous day? (2) How do you think the time away affected your ability to perform the tasks?
(3) Did you use the same strategies as you did on previous days?

Day 1
   Strategy (n=12): Seemed the same or easier today (8); More visualization (4); More difficult to remember
   configuration, hard to visualize (2)
   Control (n=12): Easier or same (8); Harder without feedback (2); Answered faster, easier (2)

Day 7
   Strategy (n=12): Seemed easier or same (8); Even more difficult remembering configuration, but
   visualizing was easier after seeing configs.; Harder today than day 1, forgot targets (3); Visualization
   used in unfamiliar orientations, or was easier (4)
   Control (n=12): Harder to remember configs. (3); More visualization (2); Similar to last time (6)

Day 30
   Strategy (n=12): Harder to remember config., but trials were not hard after seeing it (5); Took longer to
   orient (4); Easiest of all, after a few trials (3)
   Control (n=12): Harder to remember configs., but trials were ok after seeing it (7); Got lost part way
   through because of lack of feedback; Took longer to orient or less confident (3); Came up with a new
   strategy (2)
Not all of the 8 subjects who said they were able to visualize were able to do so consistently in
both arrays. An equal number of members (9) of the strategy and control groups used the baseline
orientation, pairs, and visualization. The control group presumably discovered these strategies on their
own. Several members of the control group developed a different "floor/ceiling/ring" strategy, which they
apparently used as an alternative to the pairs/triads strategy taught to the other group. On day 7, 4
members of the strategy group and 2 members of the control group said they used visualization more or
that it was easier than on previous days. On day 30, 5 members of the strategy group and 7 members of
the control group said they found it harder to remember the configuration. After seeing the correct
configuration, however, these subjects said that it took only a few trials before they felt confident in their
spatial ability again.
4.8 Relative Target Direction
Some target positions relative to the subject’s body were associated with better performance.
Franklin and Tversky (1990) found that subjects who were trained to criterion on a 3D spatial
arrangement of objects had shorter response times (~ 0.8 sec) for objects above or below an imagined,
third-person observer when the posture of that observer was gravitationally erect. When
the imagined body posture was gravitationally supine, response times were shortest for objects located
ahead or behind. They found that, in any case, response times for objects located on the left or right of
imagined body orientations were longest (Bryant and Tversky, 1999; Bryant et al., 1992; Bryant et al.,
2000). Based on these previous results and expected differences in spatial memory strategies used to find
them, we resolved the 4 possible relative target directions into two categories:
Above/Behind: a target that is located above or behind the simulated viewpoint. We expected
RT for these targets to be shorter because they are aligned with body axes that have salient
asymmetries due to body alignment (head/feet axis) or natural interactive experience
(front/behind axis). Moreover, during the orientation phase of each trial, objects were always
presented in front of and below the subject. If the subject used a strategy of memorizing opposite
pairs of objects, the objects opposite the presented ones could presumably be recalled without
requiring any visualization, so subjects using a "pairs" strategy may have found "above/behind"
objects easier (see the sketch after these definitions).
Left/Right (L-R): a target that appears on the subject’s left or right. These targets are aligned
with a body axis with the least salient asymmetry and require a more complex spatial memory
strategy that may include visualization. The objects appearing in front or below the subject
provided no assistance in recognizing left/right objects to subjects who had memorized pairs.
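The sketch below illustrates, with object names that are placeholders except where the text names them for Array B, why memorized opposite pairs make above/behind targets cheap while leaving left/right targets ambiguous.

# Sketch: a "pairs" lookup. During the orientation phase the subject sees the objects
# ahead and below; the objects behind and above are just the stored opposites, so no
# mental rotation is needed. Floor/ceiling object names are hypothetical.
OPPOSITE = {
    "butterfly": "giraffe", "giraffe": "butterfly",          # ahead <-> behind (Array B, per text)
    "parrot": "fish", "fish": "parrot",                      # left <-> right (Array B, per text)
    "floor_obj": "ceiling_obj", "ceiling_obj": "floor_obj",  # hypothetical floor/ceiling pair
}

def recall_above_and_behind(ahead, below):
    # Given the two objects shown during orientation, return (above, behind).
    return OPPOSITE[below], OPPOSITE[ahead]

print(recall_above_and_behind(ahead="butterfly", below="floor_obj"))
# The pair table says nothing about which of two opposite objects is on the subject's
# left: resolving that still requires imagining the current roll orientation.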
We assumed subjects’ performance had asymptoted by the third and sixth set of the training day (steady
state of the two arrays). To be consistent we tested for differences in performance between these two
relative-target direction groups via a Mann-Whitney test by the third and sixth (or steady state) set of
every day. Figures 4.9 and 4.10 show performance by relative-target direction over days for the strategy
and control groups, respectively. Table 4.13 shows the effect of relative target direction by steady-state
set and day within training groups. Strategy subjects had significantly shorter RT and higher % correct for
above/behind targets than they did for left/right targets, in the last set of both environments on the training
day, and in the third and sixth sets on every retention day (overall mean differences: 28% and 3.2 sec).
Control subjects showed a similar effect in % correct only for the second array on the training day, but we
were not able to demonstrate the effect in RT on the training day. The control group showed effects
similar to those revealed for the strategy group on day 1 and 7 in both performance measures, but only in
RT on day 30 (overall mean differences: 19% and 1.7 sec).
In summary, RT for above/behind was shorter than that for left/right targets, and more
consistently so in the strategy group. The superior RT of the strategy group on above/behind was
probably to be expected, since they likely used pairs to locate those targets.
[Figure 4.9. Mean Performance for the Strategy Group by Relative-Target Direction over Days. Upper panel:
% correct, lower panel: RT; error bars are +/- 1 SEM.]
[Figure 4.10. Mean Performance for the Control Group by Relative-Target Direction over Days. Upper panel:
% correct, lower panel: RT; error bars are +/- 1 SEM.]
Table 4-13 Effect of Relative-Target Direction on Steady-State Performance by Day; df = 1 for all results
[Table: chi-square and p values for the above/behind vs. left/right comparisons of % correct and mean RT in
the steady-state sets (set 3 and set 6) of the training day and of retention days 1, 7, and 30, within the
strategy and control groups.]
4.9 Array Presentation Order
The spatial ability test response time during retention testing depended on the order in which
environments were learned on the training day. Table 4.14 summarizes the effect of order on RT, for both
target-direction groups, on days 1, 7, and 30. Figure 4.11 shows RT for above/behind and left/right
targets, by retention day and set, within array presentation order.
Repeated measures ANOVA by training, array presentation order, retention day, and set did not
reveal a significant main effect of array presentation order on % correct for either target group. Further,
Kruskal-Wallis tests by order, day (including training day), set, and relative-target-direction group did not
reveal any significant differences in % correct (p = .05 criterion). A repeated measures ANOVA by
training, order, retention day, set, and target group showed a significant array-presentation-order effect on
RT [F(1, 21) = 5.2, p < .05].
When both training groups’ data are taken together, subjects who were trained on Array A first were
clearly faster in every set of every retention day for both relative-target-direction groups (sign test, p
<.001). Subjects who saw Array A first had significantly faster RT in set 2 of day 30 (χ2 = 4.4, df = 1, p <
.05) for left/right targets. These subjects also had significantly faster RT in several sets for above/behind
targets than those who saw B first (df = 1 for each result): On day 1, they had significantly faster RT for
these targets in set 1 (χ2 = 4.7, p < .05); on day 7, they had it in sets 1, 3, 4, 5, and 6 (set 1: χ2 = 5.7, p <
.05; set 3: χ2 = 6.3, p < .05; set 4: χ2 = 5.6, p < .05; set 5: χ2 = 6.6, p < .05; and set 6: χ2 = 7.5, p < .01;
respectively); and on day 30, they had it in sets 1, 2, and 4 (set 1: χ2 = 9.2, p < .01; set 2: χ2 = 6.6, p < .05;
and set 4: χ2 = 4.2, p < .05; respectively).
Configurational knowledge was also affected by order: there was a training effect only
among subjects who saw Array A first (see section 4.5.3). These results indicate that the order in which
the arrays are learned has an effect on subjects' retention of configurational knowledge and spatial ability
over time. Possible reasons for this are discussed in the next chapter.
Table 4-14 Order Effect on RT Performance, by Retention Day and Set, within Relative-Target Direction

                    Day 1                    Day 7                    Day 30
Target Group        1  2  3  4  5  6         1  2  3  4  5  6         1  2  3  4  5  6
Above/Behind        *  -  -  -  -  -         *  -  *  *  *  *         *  *  -  *  -  -
Left/Right          -  -  -  -  -  -         -  -  -  -  -  -         -  *  -  -  -  -

* : Subjects who saw Array A first had significantly faster RT performance than those who saw the reverse
order.
[Figure 4.11. Mean RT for Relative-Target-Direction Groups within Array Presentation Order across Training,
by Set, over Days. Upper panel: RT for left/right targets, lower panel: RT for above/behind targets; error
bars are +/- 1 SEM.]
Chapter 5: DISCUSSION
It was thought that the “Triad” strategy (remembering and learning to visualize the surfaces that
intersect at a corner) would help subjects organize spatial relationships in visual memory. The particular
“right-hand-rule”-based strategy we chose to teach them to use for remembering triads, in retrospect, was
evidently cumbersome. Some subjects found it difficult to use; others found it reliable, but all found it
time consuming. Although % correct performance of the strategy and control groups was similar for
left/right targets, the control group, which had not been trained with the triad/right hand rule procedure,
had shorter RT for the left/right targets. We suspect that the strategy-trained subjects obediently followed
instructions and tried to use the right hand rule. Many performed hand motions associated with our
example of the right-hand rule. The triad strategy appeared to hamper the performance of subjects with
higher paper-and-pencil test scores, while the pairs strategy appeared to help the performance of subjects
with lower scores. This could account for the lack of a correlation between strategy-group performance and
the paper-and-pencil tests. Other mnemonic strategies, such as memorizing the clockwise (or
counterclockwise) order in a ring of 4 objects (Sect. 4.7), were discovered and described by some of the
control subjects and may be worth investigating further. Many of the control subjects, however,
eventually discovered and used a “Pairs” strategy.
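One way to formalize the taught triad idea is sketched below; this is an illustrative reconstruction, not the actual training material, and the object-to-direction mapping is hypothetical. If a memorized triad sits on the three mutually perpendicular walls that meet at a corner, the right-hand rule, implemented as a cross product, recovers the third direction from the other two.

# Sketch: right-hand-rule reasoning about a memorized triad of corner objects.
import numpy as np

BASELINE = {                        # hypothetical mapping: object -> wall direction (baseline frame)
    "obj_ahead": np.array([1, 0, 0]),
    "obj_left": np.array([0, 1, 0]),
    "obj_above": np.array([0, 0, 1]),
}

def third_triad_direction(first, second):
    # Right-hand rule: cross the first two remembered directions to get the third.
    # The order of the operands matters, which is part of what subjects found cumbersome.
    return np.cross(BASELINE[first], BASELINE[second])

print(third_triad_direction("obj_ahead", "obj_left"))   # -> [0 0 1], the "above" wall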
The object arrays were originally designed with the hope and expectation that neither would be
more easily learned than the other. The analysis, however, showed that Array B was subtly easier to learn
than A. Why? This effect could have been due to differences in color arrangement. Colors and animals
on surfaces to the left and right (red parrots and green fish, respectively) in the baseline orientation in
Array B may have seemed to show better contrast than corresponding pictures in A (brown snakes and
reddish-brown deer). The colors on the behind-subject object (roosters) in Array A were more difficult to
see than on the corresponding one (giraffes) in B. Subjects complained that it was harder, at first, to
recognize this target and pair it with its opposite in Array A. The object ahead (butterfly) in Array B was
the only insect in that array and may have been used as a reference point for organizing the locations of
the objects relative to one another. We think training was more challenging in Array A, especially when it
was seen first. We hypothesize that strategy-trained subjects who saw Array A first, the group that
showed the most compelling transfer (as noted in section 4.2), used the strategies that were taught to them
in order to overcome the greater ambiguity of A's spatial arrangement. Then, they either used mental
imagery or excelled at using the same strategies in Array B. Control subjects who saw Array B first
showed transfer of learning for above/behind targets from the first environment to the second, while
control subjects who saw Array A first did not. We hypothesize that the former may have been more
likely to notice the antonymic qualities of the paired objects in Array B, and, on their own, decided to try
a pairs-memorization strategy, which they also used later when tested in Array A.
Exit interviews indicated that only 1/3 of our subjects claimed that they used mental imagery
successfully, including as many control subjects as strategy-trained subjects, and usually not even in both
environments. Some of the subjects may have misunderstood the visualization question used in the
interviews (sect. 3.3.6), but this does not seem likely. Despite our emphasis in training on visualization,
we probably did not teach anyone to use mental imagery in this relatively short experiment. Instead, we
simply encouraged those with the ability to use it.
In the present experiments, we were able to demonstrate an effect similar to that of target location
with respect to major planes of body symmetry shown by Tversky and collaborators (Franklin and
Tversky, 1990; Tversky et al., 1992; Bryant et al., 1992; Bryant and Wright, 1999). Their subjects
imagined erect, supine and prone body positions with respect to familiar scenes with clear relationships
between objects described by narratives. They studied steady-state performance, rather than the learning
process or changes in spatial ability and configurational knowledge over time. In addition to finding
longer response times for targets on the left or right of imagined body orientations, we also found lower
% correct for these targets than for other principal body directions. We looked for these effects in the
results of our first experiment. Although trends appeared, they were not statistically significant.
Spatial ability trials were performed only on the second array encountered on the training day.
This prevented us from assessing the effect of time on spatial ability in the first array. Not only did subjects'
spatial ability in the first array go untested, but their performance for the second array on days 7 and 30
was probably better than it would have been if the subjects had completed only 24 trials on days 1 and 7,
respectively. Thus, of the data analysis detailed in section 4.5.1, the results of Friedman tests on sets later
than 3 (e.g., set 6) on retention days should be interpreted warily. Further, the best measure of the effect
of time on configurational knowledge was probably the configurational knowledge result for the first
array, since feedback was not provided on the correct configuration.
Chapter 6: CONCLUSIONS
Our ultimate goal is to investigate the time and cost-effectiveness of virtual reality as a training
medium for spatial orientation tasks in unfamiliar body orientations. With further development, our
procedure is intended to serve as a countermeasure against effects of unpleasant disorientation and against
illusions that threaten astronauts’ capacity to carry out missions in microgravity. This experiment sought
to investigate whether subjects could learn to perform generic spatial orientation tasks analogous to those
confronting a crewmember in a space station node. We were also interested in knowing if one can “learn
how to learn” an environment; i.e. whether computer-based training with generic strategies followed by
experience in one environment accelerates learning in a second environment. We also wanted to know
how long configurational memory of specific environments and the more general spatial memory skills
were retained, and to understand the role of mental imagery.
Subjects were asked to memorize the spatial relationships among six objects attached to the six
walls of a virtual node, and “point” in the direction of a target object after seeing two surfaces uncovered.
They were challenged to do this when their imagined body was in every possible simulated roll
orientation, and while facing any of the six node surfaces. Their performance was measured in terms of
response time (RT) and percent correct (% correct). After they had responded in one relative orientation,
subjects were shown a view of the full node interior from that orientation (i.e. given the correct answer).
This allowed them not only to confirm their answer, but also to study briefly the full node interior before
continuing with the next trial. After being trained with the targets in one object array, they were given the
same trials with objects of a second array substituted object-for-object for the first. Half our subjects
received strategy training that emphasized visualizing the environment from a baseline orientation, and
using memorized opposite pairs and triads of objects.
Most subjects learned to do the task from any possible orientation within 36 trials in either virtual
test environment. Evidence that learning was accelerated in the second environment was seen mainly in
RT as opposed to % correct, and for targets that appeared above or behind the subject’s body. The
transfer, however, occurred in earlier sets and in more sets for the strategy group than for the control. The
strategy-trained subjects who saw Array A first had the most compelling transfer: they were the only ones
to show significant learning trends in both % correct and RT for left/right targets that continued across the
change in environment. We suspect that learning is more easily generalized if strategy training is received
and if the initial training is challenging. This is supported in part by the fact that quality of spatial ability
over retention days depended on the order in which environments were learned on the training day.
In an exit interview, one subject of several suggested that performing the task again after 7 and 30
days was “kind of like riding a bike – once you get back on, you remember how to do it almost
immediately” after about 3 or 4 trials. Interview results suggested that control subjects used a combination
of self-developed declarative mnemonic rules and mental visualization techniques, as was noted in our
first experiment (Oman et al., 2000). We suspect that many subjects in both the strategy and control
groups memorized opposite pairs of objects, rather than relying on the use of mental imagery when
responding to targets above or behind. Self-reported use of mental imagery, however, was statistically
associated with superior performance for the left/right targets and predicted by 2/3D figure rotation
ability.
Our subject groups were matched by performance on conventional tests of field independence
and 2/3D figure rotation, and array presentation order was counterbalanced. Steady-state task
performance measures for the control group were significantly correlated with conventional tests of field
independence and 2/3D figure rotation ability, replicating the results found in our first experiment.
Strategy training particularly helped those subjects who did not report using mental imagery. Mental
imagery probably played an important role in finding left/right targets; i.e. when mental rotation is
required for correct inference of one’s body orientation.
Subjects retained configurational knowledge better when we showed them the correct configuration
as feedback and gave them practice with it. Without feedback or practice, configuration test scores and
completion times degraded over time for the strategy group and were consistently bad for the control.
Strategy-trained subjects maintained configurational knowledge better over time – especially when the
first training environment was Array A. Spatial ability actually improved over the 30 days, while suffering
little after layoffs of up to 21 days. That the effects of the experience and training are measurably retained
after several weeks is important if three dimensional spatial memory training is to be used as a
countermeasure for astronauts, since the training could be generic and would not necessarily have to be
conducted immediately before flight.
Spatial ability for targets located above or behind imagined body orientations is more quickly
acquired and more readily generalized to multiple environments. Strategy training improved subjects' %
correct and RT for locating targets in these more easily accessible directions after they learned in the first
environment. Subjects’ RT for above/behind targets was shorter than for left/right, and more consistently
so in the strategy group. The superior RT of the strategy group on above/behind was probably to be
expected, since they likely used pairs to locate those targets. The appearance of this effect in the control
group on retention days (especially days 7 and 30) as opposed to the training day makes sense since most
of them eventually developed a pairs strategy of their own.
The “strategy training” technique we employed can probably be improved. Although learning
“triads”, the spatial relationships between objects in a corner, is doubtless useful, the particular “right
hand rule” method we taught was apparently cumbersome, and not all subjects were able to employ it
successfully. Furthermore, our “control” training constituted a form of training in itself: the subjects had a
chance to try different strategies for doing the task, and after 24 trials or so, many were successful in
finding strategies that worked – especially for above/behind targets. Our strategy training helped, but the
most important factor was simply providing practice on an unfamiliar task like this, and experimenting
with different strategies. The baseline and pairs strategies, whether taught or self-discovered, were the key
to doing the task. Mental imagery was not necessarily learned, but as noted in sect. 4.7, some subjects
thought visualization got easier over time.
Future experiments should test retention of spatial memory for all training environments equally
often. Completion time for the task of reconstructing the object arrays was slightly confounded with the
time subjects needed to get reacquainted with response buttons. Practice with controls, therefore, should
be given before testing on each retention test day so that this measurement can be reliable. The exit
interview data may not be entirely reliable because, although questions were standardized, they were
intended to allow for unexpected, free-form responses. The investigator simply wrote down whatever
subjects happened to have said. To eliminate these and other variable effects, it is best that questions be
self-contained and provide specific, mutually exclusive, and exhaustive multiple-choice answers for the
subjects to choose from.
Taken together, our two spatial memory experiments demonstrate the ability of humans to
perform both reverse (imagined body orientation) and forward (inferred body orientation) tasks. The
forward task encountered in the present experiment involved reorientation in all three dimensions, which
is analogous to what astronauts face in a space station node. That some subjects in the first experiments
and many members of the control training group in the present experiments discovered the pairs strategy
on their own shows that practice in this sort of task is beneficial, even without formal strategy training.
Observed transfer of learning, greater for strategy-trained subjects in the present experiments, is
encouraging and shows that teaching generic memorization strategies and providing practice with mental
imagery can help subjects learn to orient in virtual environments. Head-mounted virtual reality displays
are practical for spatial orientation training in 1-G and can serve as a practical countermeasure for
problems in 0-G. Our paradigm and findings could also be used to design and evaluate alternative sets of
visual landmarks, such as emergency-route markings.
In order to make this paradigm a reality, we must investigate how much our generic training helps
orientation in 0-G environments, and whether it is more or less effective than environment-specific
training. The most obvious advantage of such generic training is that it is much cheaper than mission-specific training. The latter requires more programming time, higher-resolution displays, and high-end
graphics simulators to produce realistic-looking ISS and Shuttle virtual mockups. In addition to our
generic training, there could be sessions with mission-specific environments designed to our own
specifications, each lasting as little as 30 minutes [e.g., EVA training in the Virtual Reality Development
Lab and at Johnson Space Center (JSC)]. Our current generic virtual spatial orientation procedure requires
only one virtual environment with a set of generic landmarks, and about 2 hours of training time.
Participants come out of the experience feeling that they have learned some useful tricks for orienting in a
three-dimensional environment. As demonstrated in the present experiment, what they learn can be
applied in more than one environment and retained for at least 21 days.
We tested learning and retention in abstract environments with a stereoscopic, head-mounted
display. Several other questions remain: Can subjects learn to do 3D orientation tasks with a desktop
display? Does training with virtual reality actually improve performance when orienting in real 0-G? How
well can subjects learn to orient in more complex arrangements (e.g., multiple nodes)? Does learning in
abstract environments improve performance when orienting in realistic space station interiors where the
physical landmarks are more salient and familiar? How far beyond 30 days do spatial abilities and
configurational knowledge last?
References
Anderson, J.R. (1982). Acquisition of cognitive skill. Psychological Review, 89, 369-406.
Bryant D.J. and Tversky B. (1999). Mental representations of perspective and spatial relations from diagrams and
models. Journal of Experimental Psychology: Learning, Memory and Cognition, 25(1):137-156
Bryant, D.J. and Tversky, B. (1992). Internal and external spatial frameworks for representing described scenes.
Journal of Memory and Language 31, 74-98
Bryant, D.J. and Wright, W.G. (1999). How body asymmetries determine accessibility in spatial frameworks. The
Quarterly Journal of Experimental Psychology, 52A(2), 487-508.
Burrough, B. (1998) Dragonfly: NASA and the Crisis Aboard MIR. New York: Harper Collins.
Conover, W.J. (1999). The Page Test for Ordered Alternatives. Chapter in Practical Nonparametric Statistics, 3rd
Edition, p. 380, Wiley.
Cooper, H.S.F. Jr. (1976). A House In Space. Holt, Rinehart and Winston, 183 pp.
Farrell, M. J., & Robertson, I. H. (1998). Mental rotation and the automatic updating of body-centered spatial
relationships. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 227-233.
Franklin, N. and Tversky, B. (1990). Searching imagined environments. Journal of Experimental Psychology:
General, 119 (5), 63-76
Gazenko, O. (1964). Medical studies on the cosmic spacecraft Vostok and Voskhod. NASA TTF-9207
Howard, I.P. and Childerson, L. (1994). The contribution of motion, the visual frame and visual polarity to
sensations of body tilt. Perception, 23, 753-762.
Ivanenko, Y. P., Grasso, R., Israel, I., & Berthoz, A. (1997). The contribution of otoliths and semicircular canals to
the perception of two-dimensional passive whole-body motion in humans. Journal of Physiology (London), 502, 223-233.
Klatzky, R. L., Loomis, J. M., Beall, A. C., Chance, S. S., & Golledge, R. G. (1998). Spatial updating of self-position and orientation during real, imagined, and virtual locomotion. Psychological Science, 9, 293-298.
Koh, G. (1997). Training spatial knowledge acquisition using virtual environments. Thesis (M. Eng.), Massachusetts
Institute of Technology, Dept. of Electrical Engineering and Computer Sciences.
Lackner, J.R. (1992). Spatial orientation in weightless environments. Perception, 21:803-812.
Loomis, J. M., Klatzky, R. L., Golledge, R. G., Cicinelli, J. G., Pellegrino, J. W., & Fry, P. (1993). Nonvisual
navigation by blind and sighted: Assessment of path integration ability. Journal of Experimental Psychology:
General, 122, 73-91.
Loomis, J. M., Da Silva, J. A., Fujita, N., & Fukusima, S. S. (1992). Visual space perception and visually directed
action. Journal of Experimental Psychology: Human Perception and Performance, 18, 906-922.
Mast, F., Kosslyn, S.M., & Berthoz, A. (1999). Visual mental imagery interferes with allocentric orientation
judgements. NeuroReport, 10, 3549-3553.
Matsnev E.I., Yakovleva I.Y., Tarasov I.K., Alekseev V.N., Kornilova L.N., Mateev A.D., Gorgiladze G.I. (1983).
Space Motion Sickness - Phenomenology, Countermeasures, and Mechanisms. Aviation, Space and
Environmental Medicine, 54: 312-317
McDonald, T. P. and J. W. Pellegrino (1993). Psychological perspectives on spatial cognition. Behavior and
Environment: Psychological and Geographical Approaches. T. Garling and R. Golledge. Amsterdam, Elsevier
Science Publishers: 47-82.
Mittelstaedt, M. L., & Glasauer, S. (1991) Idiothetic navigation in gerbils and humans. Zoologische Jahrbucher
Abteilungun fur algemeine Zoologie und Physiologie der Tiere, 95, 427-435.
Oman C.M., Howard I.P., Carpenter-Smith T., Beall A.C., Natapoff A., Zacher J., and Jenkin H.L. (2000) Neurolab
experiments on the role of visual cues in microgravity spatial orientation. Aviation, Space, and Environmental
Medicine, 71(3): 283
Oman C.M., Lichtenberg B.K., Money K.E., McCoy R.K. (1986). M.I.T./Canadian vestibular experiments on the
Spacelab-1 mission: 4. Space motion sickness: symptoms, stimuli, and predictability. Experimental Brain
Research, 64: 316-334.
Oman C., Shebilske W., Richards J., Tubre T., Beall A., Natapoff A. (submitted, June 2000). Three dimensional
spatial memory and learning in real and virtual environments. Journal of Spatial Cognition and Computation.
Oman C, Skwersky A. (1997). Effect of scene polarity and head orientation on illusions of tumbling in a virtual
environment. Aviation, Space and Environmental Medicine 68:7, 649
Piaget, J and Inhelder, B. (1967). A child’s conception of space. Norton; New York
77
Pick, H. L. Jr, & Rieser, J. J. (1982). Children’s cognitive mapping, in Spatial Orientation: Development and
Physiological Bases. Ed. M. Potegal, New York: Academic Press, 107-128.
Regian, J.W., Shebilske, W.L., & Monk, J.M. (1992). Virtual reality: An instructional medium for visual spatial
tasks. Journal of Communication, 42, 136-149.
Rieser, J. J. (1989). Access to knowledge of spatial structure at novel points of observation. Journal of Experimental
Psychology: Learning, Memory, & Cognition, 15(6), 1157-1165.
Rieser, J. J., Guth, D. A., & Hill, E. W. (1986). Sensitivity to perspective structure while walking without vision.
Perception, 15, 173-188.
Sadalla, E.K., Burroughs, W.J., and Staplin, L.J. (1980) Reference points in spatial cognition. Journal of
Experimental Psychology: Human Learning and Memory, 6 (5), 516-528
Shebilske W. L., Tubre T., Willis T., Hanson A., Oman C., and Richards J. (2000). Simulating Spatial Memory
Challenges Confronting Astronauts. Proceedings of the Annual Meeting of the Human Factors and Ergonomics
Society, July 30, 2000.
Siegel, A. W. and S. H. White (1975). The development of spatial representations of large-scale environments.
Advances in child development and behaviour. H. W. Reese. New York, Academic Press. 10: 9-55.
Simons, D. J., & Wang, R. F. (1998) Percieving real-world viewpoint changes. Psychological Science. 9, 315-320.
Wilson, P.N., Foreman, N., Tlauka, M. (1997). Transfer of spatial information from a virtual to a real environment.
Human Factors 39(4): 526-531.
Witkin, H. A., Oltman, P., Raskin, E., & Karp, S. (1971). Manual for the Embedded Figures Tests. Palo Alto, CA:
Consulting Psychologists Press.
Witmer, B.G., Bailey, J.H., & Knerr, B.W. (1996). Virtual spaces and real world places: Transfer of route
knowledge. International Journal of Human-Computer Studies, 45, 413-428.
Wraga, M., Creem, & S. H., Proffitt, D. R. (1999) The influence of spatial reference frames on imagined object- and
view rotations. Acta Psychologica, 102, 247-264.
78
Appendix A:
Python/VRUT Code
Program code was written in Python (v. 1.5) and VRUT (release 2.2). For the strategy group, only the
code associated with computer-based training is displayed because all other code was identical for both
groups.
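All three scripts follow the same pattern: the chamber geometry and picture textures are loaded as VRUT scene-graph nodes (addchild, addtexture), objects are shown or hidden with curtain() calls, and the trial sequence is driven by timer and keyboard callbacks registered through vrut.callback() and vrut.starttimer().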
A.1 Training Day Script for Control Group
# Choose which icon set to train on first:
# '1' for ANIMALS1 (CROSS) set first
# '2' for ANIMALS2 (SQUARE) set first
ICONORDER = 1
# Choose which stimulus file to use
FILENAME = 'TrainingH.txt'
PRACTICE = 'Practice.txt'
# Enter subject's name in quotes here
SUBJECT = 'foo'
# 1 - HMD/tracker, 2 - CONSOLE/no tracker
HMD = 1
import vrut
import os
from string import *
import win32api
import time
from whrandom import random
from random import choice
if HMD == 1:
    vrut.go(vrut.STEREO | vrut.HMD)
    print 'adding sensor------------------------'
    ms = vrut.addsensor('intersense')
    vrut.tracker()
else:
    vrut.go(vrut.CONSOLE)
vrut.setfov(60, 1.333)
vrut.setipd(0.06)
# Put eyepoint inside node against back surface
vrut.eyeheight(0)
vrut.translate(vrut.HEAD_POS, 0, 0, -.6)
#*********************************************************
# Load geometry
#*********************************************************
# Order in which you want the icon sets to be presented.
if ICONORDER == 1:
    first = vrut.addchild('../models/animals1.wrl')
    second = vrut.addchild('../models/animals2.wrl')
elif ICONORDER == 2:
    first = vrut.addchild('../models/animals2.wrl')
    second = vrut.addchild('../models/animals1.wrl')
#----------------------------------------------------------------
practiceIcons = vrut.addchild('../models/ww_practice.wrl')
frame = vrut.addchild('../models/ww_box.wrl')
mask = vrut.addchild('../models/maskADJ.wrl')
cardBegin = vrut.addchild('../models/ww_trialbegin.wrl')
switchCard = vrut.addchild('../models/switch.wrl')
calibrateBegin = vrut.addchild('../models/calibrateCard.wrl')
calibrateFinish = vrut.addchild('../models/calibrateFinish.wrl')
practiceBegin = vrut.addchild('../models/practiceBegin.wrl')
Arrows = []
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
########################################################################
# target cards for PRACTICEICONS
iPracCard = []
iPracCard.append(vrut.addchild('../models/picture_card.wrl'))
iPracCard.append(vrut.addchild('../models/picture_card.wrl'))
iPracCard.append(vrut.addchild('../models/picture_card.wrl'))
iPracCard.append(vrut.addchild('../models/picture_card.wrl'))
iPracCard.append(vrut.addchild('../models/picture_card.wrl'))
iPracCard.append(vrut.addchild('../models/picture_card.wrl'))
iPracTex = []
iPracTex.append(vrut.addtexture('../practiceIcons/diamonds.jpg'))
iPracTex.append(vrut.addtexture('../practiceIcons/star.jpg'))
iPracTex.append(vrut.addtexture('../practiceIcons/hearts.jpg'))
iPracTex.append(vrut.addtexture('../practiceIcons/pound.jpg'))
iPracTex.append(vrut.addtexture('../practiceIcons/spades.jpg'))
iPracTex.append(vrut.addtexture('../practiceIcons/clubs.jpg'))
########################################################################
# cards for EXPERIMENT ICONS
iCard = []
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard.append(vrut.addchild('../models/picture_card.wrl'))
iCard2 = []
iCard2.append(vrut.addchild('../models/picture_card.wrl'))
iCard2.append(vrut.addchild('../models/picture_card.wrl'))
iCard2.append(vrut.addchild('../models/picture_card.wrl'))
iCard2.append(vrut.addchild('../models/picture_card.wrl'))
iCard2.append(vrut.addchild('../models/picture_card.wrl'))
iCard2.append(vrut.addchild('../models/picture_card.wrl'))
iTex = []
########################################################################
# target cards for ANIMALS1: iTex[0-5]
iTex.append(vrut.addtexture('../nonpolarized/TARGfish.jpg'))
iTex.append(vrut.addtexture('../nonpolarized/TARGturtles.jpg'))
iTex.append(vrut.addtexture('../nonpolarized/TARGparrots.jpg'))
iTex.append(vrut.addtexture('../nonpolarized/TARGlions.jpg'))
iTex.append(vrut.addtexture('../nonpolarized/TARGbutterflies.jpg'))
iTex.append(vrut.addtexture('../nonpolarized/TARGgiraffes.jpg'))
########################################################################
# target cards for ANIMALS2: iTex[6-11]
iTex.append(vrut.addtexture('../nonpolarized/TARGdeer.jpg'))       # layer 0
iTex.append(vrut.addtexture('../nonpolarized/TARGfrogs.jpg'))      # layer 1
iTex.append(vrut.addtexture('../nonpolarized/TARGsnakes.jpg'))     # layer 2
iTex.append(vrut.addtexture('../nonpolarized/TARGbluebirds.jpg'))  # layer 3
iTex.append(vrut.addtexture('../nonpolarized/TARGelephants.jpg'))  # layer 4
iTex.append(vrut.addtexture('../nonpolarized/TARGroosters.jpg'))   # layer 5
########################################################################
# Target, Break, and End cards: iTex[12-16]
iTex.append(vrut.addtexture('../nonpolarized/targbackground.jpg'))
iTex.append(vrut.addtexture('../textures/break.jpg'))
iTex.append(vrut.addtexture('../textures/practiceEnd.jpg'))
iTex.append(vrut.addtexture('../textures/end1.jpg'))
iTex.append(vrut.addtexture('../textures/end2.jpg'))
########################################################################
# The natural ordering of icon sets (i.e. ANIMALS1 first, ANIMALS2 second):
if ICONORDER == 1:
    for i in range(0, 6):
        iCard[i].texture(iTex[i], 'card')
        iCard2[i].texture(iTex[i+6], 'card')
# The REVERSE ordering of icon sets (i.e. ANIMALS2 first, ANIMALS1 second):
elif ICONORDER == 2:
    for i in range(0, 6):
        iCard[i].texture(iTex[i+6], 'card')
        iCard2[i].texture(iTex[i], 'card')
for i in range(0, 6):
    iPracCard[i].texture(iPracTex[i], 'card')
    iCard[i].scale(1.5, 1.5, 0)
    iCard2[i].scale(1.5, 1.5, 0)
    iPracCard[i].scale(1.5, 1.5, 0)
for i in range(6, 11):
    iCard[i].texture(iTex[i+6], 'card')
#######################################################################
#****************************************
# positioning of instructional objects **
#****************************************
for i in range(0, 6):
    Arrows[i].scale(.5, .5, .5)
# surface on right
Arrows[0].rotate(0,1,0, -170)
Arrows[0].translate(.25,0,0)
# surface above
Arrows[1].rotate(0,0,1, -90)
Arrows[1].translate(0,.25,0)
# surface on left
Arrows[2].rotate(0,1,0, -10)
Arrows[2].translate(-.25,0,.1)
# surface below
Arrows[3].rotate(0,0,1, 90)
Arrows[3].translate(0,-.25,0)
# surface straight ahead
Arrows[4].rotate(0,1,0, 80)
Arrows[4].translate(.15,0,.25)
# surface behind
Arrows[5].rotate(0,1,0, -90)
Arrows[5].translate(.1,0,-.25)
cardBegin.translate(0, 0, -.01)
switchCard.translate(0,0,-.01)
calibrateBegin.translate(0,0,-.01)
calibrateFinish.translate(0,0,-.01)
practiceBegin.translate(0,0,-.01)
iCard[6].translate(0, 0, .27)
cardBegin.scale(2,2,1)
switchCard.scale(2,2,1)
calibrateBegin.scale(2,2,1)
calibrateFinish.scale(2,2,1)
practiceBegin.scale(2,2,1)
for i in range(7, 11):
    iCard[i].scale(2, 2, 0)
iCard[6].scale(2.5, 2.5, 0)
#****************************************
# Hide all the geometry and icons
#****************************************
frame.curtain(vrut.CLOSE)
first.curtain(vrut.CLOSE)
second.curtain(vrut.CLOSE)
practiceIcons.curtain(vrut.CLOSE)
cardBegin.curtain(vrut.CLOSE)
switchCard.curtain(vrut.CLOSE)
calibrateBegin.curtain(vrut.CLOSE)
calibrateFinish.curtain(vrut.CLOSE)
practiceBegin.curtain(vrut.CLOSE)
mask.curtain(vrut.CLOSE)
for group in Arrows:
    group.curtain(vrut.CLOSE)
for card in iCard:
    card.curtain(vrut.CLOSE)
for card in iCard2:
    card.translate(0, 0, .25)
    card.curtain(vrut.CLOSE)
for card in iPracCard:
    card.translate(0, 0, .25)
    card.curtain(vrut.CLOSE)
#****************************************
# timer flags & conditional variables  **
#****************************************
BLAST_FACTOR   = 1.0
NO_TASK        = 0
START_EXP      = 1
START_TRIAL    = 2
SHOW_STIMULUS  = 3
SHOW_TARGET    = 4
MEMORY_TASK    = 5
SEARCH_TASK    = 6
END            = 7
TAKE_BREAK     = 8
START_TRAINING = 9
VIEW_NODE      = 10
CALIFORNIA     = 11
CALIBRATION    = 12
LIMBO          = 13
ORIENT_NODE    = 14
SWITCH         = 15
SWITCH2        = 16
SWITCH3        = 17
SWITCH4        = 18
SWITCH5        = 19
SWITCH6        = 20
SWITCH7        = 21
REAL_EXPT      = 22
MOCK_EXPT      = 23
START_PRACTICE = 24
LIMBO2         = 25
LIMBO3         = 26
BEFORE         = 27
TRUE  = 1
FALSE = 0
# Time constants
STIMULUS_TIME = 1.0 * BLAST_FACTOR
TARGET_TIME   = 1.0 * BLAST_FACTOR
MEMORY_TIME   = 1.0 * BLAST_FACTOR
SEARCH_TIME   = 700.0 * BLAST_FACTOR
# Numbers in () correspond to the surface identities (see Appendix B)
# in the baseline orientation defined by the convention created by
# Alan Natapoff and Jason Richards (1999-2000)
R = (0)
C = (1)
L = (2)
F = (3)
O = (4)
E = (5)
Z = (7)  # break card
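# (Added note, inferred from the trial handlers below): a stimulus-file entry of Z (7)
# is not a trial; when it is read, PracticeTimer/ExptTimer call TakeBreak() and wait
# for the subject to resume with the space bar.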
#************************************
# other variables
#************************************
# counter for icon sets
ICONSET = 1
# Calibration array for arrow display
#calibrate = [0, 5, 1, 2]
calibrate = [0, 2, 1, 2, 0, 5, 1, 5, 0, 1, 2, 5, 0, 1, 2, 5, 0, 1, 2, 5]
# Trial-by-trial specifications:
currentTrial = 0
rotate = 0
# Stuff to record reaction times
startMemory = 0
startSearch = 0
endTime  = 0
counter  = 0
clicker  = 0
trialNum = 0
goNextTrial = TRUE
#**********************************************************************
# Read Stimulus File and Open Data File for Experimental Trials
#**********************************************************************
def InitializeExp():
    global file
    global data
    global allEntry
    global allAngle
    global allTarget
    global allFloor
    global allDirection
    global ms
    file = open(FILENAME,'r')        # 'r' for reading
    print 'opened stim file: ', FILENAME
    data = open(SUBJECT,'a')         # 'a' for append
    print 'created output file: ', SUBJECT
    data.write('%CALIFORNIA GROUP' + '\n')
    data.write('%Subject Name:' + SUBJECT +'\n')
    data.write('%Stimulus File:' + FILENAME +'\n'+'\n')
    data.write('%Columns ='+'\n')
    data.write('%S'+'\t'+'SET'+'\t'+'t#'+'\t'+'C'+'\t'+'I'+'\t'+'RT'+'\t')
    data.write('Ent'+'\t'+'Targ'+'\t'+'Rot'+'\t'+'Dir'+'\n'+'\n')
    vrut.watch('opening: '+SUBJECT)
    # Training stimulus file
    all = file.readlines()
    allEntry = []
    allAngle = []
    allTarget = []
    allFloor = []
    allDirection = []
    for i in range(0, len(all)):
        access = all[i]
        s = split(access)
        allEntry.append(eval(s[0]))
        allAngle.append(atoi(s[1]))
        allTarget.append(eval(s[2]))
        allFloor.append(eval(s[3]))
        allDirection.append(atoi(s[4]))
    file.close()
    vrut.watch('Initialize Experiment...')
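# (Added note, inferred from the parsing above rather than from the stimulus files
# themselves): each line of the stimulus file appears to carry five whitespace-separated
# fields,
#     <entry surface>  <roll-angle code>  <target surface>  <floor surface>  <direction>
# where the surface fields use the letter codes defined above (R, C, L, F, O, E, or Z for
# a break), the roll-angle code 3, 9, or 6 is mapped to -90, +90, or 180 degrees in the
# timer routines, and the direction field is recorded with each response.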
#**********************************************************************
# Read Stimulus File for Practice Trials
#**********************************************************************
def InitializePractice():
    global fileP
    global allEntryP
    global allAngleP
    global allTargetP
    global allFloorP
    global allDirectionP
    global ms
    fileP = open(PRACTICE, 'r')      # 'r' for reading
    print 'opened practice file: ', PRACTICE
    # Practice stim file
    allP = fileP.readlines()
    allEntryP = []
    allAngleP = []
    allTargetP = []
    allFloorP = []
    allDirectionP = []
    for i in range(0, len(allP)):
        accessP = allP[i]
        sP = split(accessP)
        allEntryP.append(eval(sP[0]))
        allAngleP.append(atoi(sP[1]))
        allTargetP.append(eval(sP[2]))
        allFloorP.append(eval(sP[3]))
        allDirectionP.append(atoi(sP[4]))
    fileP.close()
    vrut.watch('Initialize Practice...')
#**********************************************************************
# Timer for Training Exercises
#**********************************************************************
def DummyTraining(timer):
    global TASK
    global ms
    global startTime
    global clicker
    global stimulus
    TASK = timer
    vrut.watch('task: %d' %(TASK))
    if timer == START_TRAINING:
        frame.curtain(vrut.OPEN)
        cardBegin.curtain(vrut.OPEN)
    elif timer == VIEW_NODE:
        practiceIcons.curtain(vrut.OPEN)
    elif timer == SWITCH:
        switchCard.curtain(vrut.OPEN)
    elif timer == LIMBO:
        switchCard.curtain(vrut.CLOSE)
        cardBegin.curtain(vrut.OPEN)
    elif timer == ORIENT_NODE:
        vrut.rotate(practiceIcons, vrut.ZAXIS, 90)
        practiceIcons.curtain(vrut.OPEN)
    elif timer == SWITCH2:
        switchCard.curtain(vrut.OPEN)
        vrut.rotate(practiceIcons, vrut.ZAXIS, -90)
    elif timer == LIMBO2:
        switchCard.curtain(vrut.CLOSE)
        calibrateBegin.curtain(vrut.OPEN)
    elif timer == CALIBRATION:
        if len(calibrate) > 0:
            startTime = time.time()
            stimulus = choice(calibrate)
            Arrows[stimulus].curtain(vrut.OPEN)
        else:
            vrut.starttimer(MOCK_EXPT, .1)
    elif timer == MOCK_EXPT:
        calibrateFinish.curtain(vrut.OPEN)
#*********************************************************************
# Subject Controls for Training Exercises
#*********************************************************************
def DummyKey(key):
    global counter
    global clicker
    global direction
    global calibrate
    endTime = time.time()
    if key =='1' and HMD == 1:
        ms.reset()
        win32api.Sleep(200)
        vrut.watch('tracker has been reset')
    if key == ' ':
        if TASK == START_TRAINING:
            cardBegin.curtain(vrut.CLOSE)
            vrut.starttimer(VIEW_NODE, .1)
        elif TASK == VIEW_NODE:
            practiceIcons.curtain(vrut.CLOSE)
            vrut.starttimer(SWITCH, .1)
        elif TASK == SWITCH:
            switchCard.curtain(vrut.CLOSE)
            vrut.starttimer(LIMBO, .1)
        elif TASK == LIMBO:
            cardBegin.curtain(vrut.CLOSE)
            vrut.starttimer(ORIENT_NODE, .1)
        elif TASK == ORIENT_NODE:
            practiceIcons.curtain(vrut.CLOSE)
            vrut.starttimer(SWITCH2, .1)
        elif TASK == SWITCH2:
            switchCard.curtain(vrut.CLOSE)
            vrut.starttimer(LIMBO2, .1)
        elif TASK == LIMBO2:
            calibrateBegin.curtain(vrut.CLOSE)
            vrut.starttimer(CALIBRATION, .1)
        elif TASK == MOCK_EXPT:
            calibrateFinish.curtain(vrut.CLOSE)
            vrut.callback(vrut.TIMER_EVENT, 'PracticeTimer')
            vrut.callback(vrut.KEYBOARD_EVENT, 'PracticeKey')
            vrut.starttimer(START_PRACTICE, .1)
        else:
            return
    if TASK == CALIBRATION:
        if (key == '5' and stimulus == 5) or (key == '4' and stimulus == 2):
            Arrows[stimulus].curtain(vrut.CLOSE)
        elif (key == '6' and stimulus == 0) or (key == '8' and stimulus == 1):
            Arrows[stimulus].curtain(vrut.CLOSE)
        else:
            return
        if endTime - startTime < 2.0:
            calibrate.remove(stimulus)
        vrut.starttimer(CALIBRATION, .1)
    if TASK == START_TRAINING:
        if key == 'a':
            cardBegin.curtain(vrut.CLOSE)
            vrut.starttimer(MOCK_EXPT, .1)
        elif key == 'e':
            cardBegin.curtain(vrut.CLOSE)
            vrut.callback(vrut.KEYBOARD_EVENT, 'ExptKey')
            vrut.callback(vrut.TIMER_EVENT, 'ExptTimer')
            vrut.starttimer(START_TRIAL, .1)
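# (Added note, inferred from the checks above): during the pointing calibration the
# numeric-keypad responses appear to map '6' -> right wall (Arrows[0]), '8' -> wall above
# (Arrows[1]), '4' -> left wall (Arrows[2]), '5' -> wall behind (Arrows[5]); an arrow is
# removed from the calibration list only when it is answered in under 2 seconds.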
#**********************************************************************
# Timer for PRACTICE Trials
#**********************************************************************
def PracticeTimer(timer):
    global TASK
    global currentTrial
    global trialNum
    global goNextTrial
    global counter
    global showTargetP
    global showEntryP
    global showRotateP
    global showFloorP
    global showDirectionP
    global rotRoom
    global rotate
    TASK = timer
    vrut.watch('task: %d' %(TASK))
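    # (Added note summarizing the trial sequence implemented below): START_TRIAL shows the
    # "begin" card and pre-rotates the chamber for the entry surface; SHOW_TARGET flashes the
    # target picture for TARGET_TIME; SHOW_STIMULUS applies the roll and displays the chamber
    # for STIMULUS_TIME; MEMORY_TASK blanks the scene for MEMORY_TIME while the subject
    # responds from memory; SEARCH_TASK re-opens the chamber until the subject confirms with
    # '0' (or SEARCH_TIME elapses); END undoes the rotations and advances to the next trial.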
    if timer == START_PRACTICE:
        InitializePractice()
        switchCard.curtain(vrut.CLOSE)
        practiceBegin.curtain(vrut.OPEN)
    elif timer == START_TRIAL:
        counter = 0
        showTargetP = allTargetP[currentTrial]
        showFloorP = allFloorP[currentTrial]
        showEntryP = allEntryP[currentTrial]
        showDirectionP = allDirectionP[currentTrial]
        if showEntryP == Z:
            TakeBreak()
        else:
            # show begin card
            cardBegin.curtain(vrut.OPEN)
            trialNum = trialNum + 1
            vrut.watch('entry: %d' %(showEntryP))
            vrut.watch('target: %d' %(showTargetP))
            vrut.watch('floor: %d' %(showFloorP))
            vrut.watch('direction: %d' %(showDirectionP))
            if showEntryP == F:
                vrut.rotate(practiceIcons, vrut.XAXIS, 90)
            elif showEntryP == O:
                vrut.rotate(practiceIcons, vrut.YAXIS, 180)
            elif showEntryP == L:
                vrut.rotate(practiceIcons, vrut.YAXIS, -90)
            elif showEntryP == R:
                vrut.rotate(practiceIcons, vrut.YAXIS, 90)
            elif showEntryP == C:
                vrut.rotate(practiceIcons, vrut.XAXIS, -90)
    elif timer == SHOW_TARGET:
        iCard[6].curtain(vrut.OPEN)
        iPracCard[showTargetP].curtain(vrut.OPEN)
        vrut.starttimer(SHOW_STIMULUS, TARGET_TIME)
    elif timer == SHOW_STIMULUS:
        iPracCard[showTargetP].curtain(vrut.CLOSE)
        iCard[6].curtain(vrut.CLOSE)
        showRotateP = allAngleP[rotate]
        if showRotateP == 3:
            showRotateP = -90
        elif showRotateP == 9:
            showRotateP = 90
        elif showRotateP == 6:
            showRotateP = 180
        rotRoom = -showRotateP
        if rotate < len(allAngleP):
            rotate = rotate + 1
        vrut.rotate(practiceIcons, vrut.ZAXIS, rotRoom)
        mask.curtain(vrut.OPEN)
        practiceIcons.curtain(vrut.OPEN)
        Arrows[3].curtain(vrut.OPEN)
        Arrows[4].curtain(vrut.OPEN)
        vrut.starttimer(MEMORY_TASK, STIMULUS_TIME)
    elif timer == MEMORY_TASK:
        Arrows[3].curtain(vrut.CLOSE)
        Arrows[4].curtain(vrut.CLOSE)
        practiceIcons.curtain(vrut.CLOSE)
        mask.curtain(vrut.CLOSE)
        vrut.starttimer(SEARCH_TASK, MEMORY_TIME)
    elif timer == SEARCH_TASK:
        practiceIcons.curtain(vrut.OPEN)
        counter = counter + 1
        if counter < (SEARCH_TIME*SEARCH_TIME):
            vrut.starttimer(SEARCH_TASK, 1/SEARCH_TIME)
        else:
            vrut.starttimer(END, .1)
    elif timer == END:
        vrut.watch('End of a Trial')
        practiceIcons.curtain(vrut.CLOSE)
        vrut.rotate(practiceIcons, vrut.ZAXIS, -rotRoom)
        if showEntryP == F:
            vrut.rotate(practiceIcons, vrut.XAXIS, -90)
        elif showEntryP == O:
            vrut.rotate(practiceIcons, vrut.YAXIS, 180)
        elif showEntryP == L:
            vrut.rotate(practiceIcons, vrut.YAXIS, 90)
        elif showEntryP == R:
            vrut.rotate(practiceIcons, vrut.YAXIS, -90)
        elif showEntryP == C:
            vrut.rotate(practiceIcons, vrut.XAXIS, 90)
        currentTrial = currentTrial + 1
        if currentTrial < len(allTargetP):
            goNextTrial = TRUE
            vrut.starttimer(START_TRIAL, .1)
        else:
            iCard[8].curtain(vrut.OPEN)
            rotate = 0
            currentTrial = 0
            trialNum = 0
            vrut.callback(vrut.TIMER_EVENT, 'ExptTimer')
            vrut.callback(vrut.KEYBOARD_EVENT, 'ExptKey')
            vrut.starttimer(BEFORE, 2.0)
            TASK = NO_TASK
#**********************************************************************
# Subject inputs for PRACTICE Trials
#**********************************************************************
def PracticeKey(key):
    global goNextTrial
    global currentTrial
    global rotate
    global counter
    global ms
    if key =='1' and HMD == 1:
        ms.reset()
        win32api.Sleep(200)
        vrut.watch('tracker has been reset')
    if TASK == START_PRACTICE:
        if key == ' ':
            practiceBegin.curtain(vrut.CLOSE)
            vrut.starttimer(START_TRIAL, .1)
        elif key == 'a':
            practiceBegin.curtain(vrut.CLOSE)
            vrut.callback(vrut.KEYBOARD_EVENT, 'ExptKey')
            vrut.callback(vrut.TIMER_EVENT, 'ExptTimer')
            vrut.starttimer(START_TRIAL, .1)
        elif key == 'b':
            practiceBegin.curtain(vrut.CLOSE)
            currentTrial = 4
            vrut.starttimer(START_TRIAL, .1)
    elif TASK == START_TRIAL:
        if key == ' ':
            cardBegin.curtain(vrut.CLOSE)
            vrut.starttimer(SHOW_TARGET, .1)
    elif (TASK==MEMORY_TASK or TASK==SHOW_STIMULUS):
        if (key=='4' or key=='5' or key=='6' or key=='8'):
            win32api.Beep(1300, 90)
    elif TASK == SEARCH_TASK:
        if key == '0':
            counter = SEARCH_TIME*SEARCH_TIME
            win32api.Beep(1300, 90)
    else:
        return
#**********************************************************************
# Timer for Experimental Trials
#**********************************************************************
def ExptTimer(timer):
    global TASK
    global ms
    global currentTrial
    global showTarget
    global showEntry
    global showRotate
    global showFloor
    global showDirection
    global rotRoom
    global rotate
    global trialNum
    global startMemory
    global startSearch
    global goNextTrial
    global counter
    global ICONSET
    TASK = timer
    if timer == BEFORE:
        if ICONSET == 1:
            iCard[8].curtain(vrut.OPEN)
        else:
            iCard[9].curtain(vrut.OPEN)
    elif timer == START_TRIAL:
        if currentTrial == 0 and ICONSET == 1:
            InitializeExp()
        counter = 0
        vrut.watch('currentTrial = %d' %(currentTrial))
        showTarget = allTarget[currentTrial]
        showFloor = allFloor[currentTrial]
        showEntry = allEntry[currentTrial]
        showDirection = allDirection[currentTrial]
        if showEntry == Z:
            TakeBreak()
        else:
            cardBegin.curtain(vrut.OPEN)
            trialNum = trialNum + 1
            vrut.watch('entry: %d' %(showEntry))
            vrut.watch('target: %d' %(showTarget))
            vrut.watch('floor: %d' %(showFloor))
            vrut.watch('trial: %d' %(trialNum))
            if ICONSET == 1:
                if showEntry == F:
                    vrut.rotate(first, vrut.XAXIS, 90)
                elif showEntry == O:
                    vrut.rotate(first, vrut.YAXIS, 180)
                elif showEntry == L:
                    vrut.rotate(first, vrut.YAXIS, -90)
                elif showEntry == R:
                    vrut.rotate(first, vrut.YAXIS, 90)
                elif showEntry == C:
                    vrut.rotate(first, vrut.XAXIS, -90)
            else:
                if showEntry == F:
                    vrut.rotate(second, vrut.XAXIS, 90)
                elif showEntry == O:
                    vrut.rotate(second, vrut.YAXIS, 180)
                elif showEntry == L:
                    vrut.rotate(second, vrut.YAXIS, -90)
                elif showEntry == R:
                    vrut.rotate(second, vrut.YAXIS, 90)
                elif showEntry == C:
                    vrut.rotate(second, vrut.XAXIS, -90)
    elif timer == SHOW_TARGET:
        iCard[6].curtain(vrut.OPEN)
        if ICONSET == 1:
            iCard[showTarget].translate(0, 0, .25)
            iCard[showTarget].curtain(vrut.OPEN)
        else:
            iCard2[showTarget].curtain(vrut.OPEN)
        vrut.starttimer(SHOW_STIMULUS, TARGET_TIME)
    elif timer == SHOW_STIMULUS:
        iCard[showTarget].curtain(vrut.CLOSE)
        iCard2[showTarget].curtain(vrut.CLOSE)
        iCard[6].curtain(vrut.CLOSE)
        startMemory = time.time()
        showRotate = allAngle[rotate]
        vrut.watch('rotate = %d' %(rotate))
        if showRotate == 3:
            showRotate = -90
        elif showRotate == 9:
            showRotate = 90
        elif showRotate == 6:
            showRotate = 180
        rotRoom = -showRotate
        if rotate < len(allAngle):
            rotate = rotate + 1
        mask.curtain(vrut.OPEN)
        Arrows[3].curtain(vrut.OPEN)
        Arrows[4].curtain(vrut.OPEN)
        if ICONSET == 1:
            vrut.rotate(first, vrut.ZAXIS, rotRoom)
            first.curtain(vrut.OPEN)
        else:
            vrut.rotate(second, vrut.ZAXIS, rotRoom)
            second.curtain(vrut.OPEN)
        vrut.starttimer(MEMORY_TASK, STIMULUS_TIME)
    elif timer == MEMORY_TASK:
        if ICONSET == 1:
            first.curtain(vrut.CLOSE)
        else:
            second.curtain(vrut.CLOSE)
        mask.curtain(vrut.CLOSE)
        Arrows[3].curtain(vrut.CLOSE)
        Arrows[4].curtain(vrut.CLOSE)
        vrut.starttimer(SEARCH_TASK, MEMORY_TIME)
    elif timer == SEARCH_TASK:
        if ICONSET == 1:
            first.curtain(vrut.OPEN)
        else:
            second.curtain(vrut.OPEN)
        if counter == 0:
            startSearch = time.time()
        counter = counter + 1
        if counter < (SEARCH_TIME*SEARCH_TIME):
            vrut.starttimer(SEARCH_TASK, 1/SEARCH_TIME)
        else:
            vrut.starttimer(END)
    elif timer == END:
        vrut.watch('End of a Trial')
        if ICONSET == 1:
            first.curtain(vrut.CLOSE)
            vrut.rotate(first, vrut.ZAXIS, -rotRoom)
            if showEntry == F:
                vrut.rotate(first, vrut.XAXIS, -90)
            elif showEntry == O:
                vrut.rotate(first, vrut.YAXIS, 180)
            elif showEntry == L:
                vrut.rotate(first, vrut.YAXIS, 90)
            elif showEntry == R:
                vrut.rotate(first, vrut.YAXIS, -90)
            elif showEntry == C:
                vrut.rotate(first, vrut.XAXIS, 90)
        else:
            second.curtain(vrut.CLOSE)
            vrut.rotate(second, vrut.ZAXIS, -rotRoom)
            if showEntry == F:
                vrut.rotate(second, vrut.XAXIS, -90)
            elif showEntry == O:
                vrut.rotate(second, vrut.YAXIS, 180)
            elif showEntry == L:
                vrut.rotate(second, vrut.YAXIS, 90)
            elif showEntry == R:
                vrut.rotate(second, vrut.YAXIS, -90)
            elif showEntry == C:
                vrut.rotate(second, vrut.XAXIS, 90)
        currentTrial = currentTrial + 1
        if currentTrial < len(allTarget):
            goNextTrial = TRUE
            vrut.starttimer(START_TRIAL, .1)
        else:
            iCard[9].curtain(vrut.OPEN)
            print 'Its all over folks!'
            if ICONSET == 1:
                ICONSET = ICONSET + 1
                currentTrial = 0
                rotate = 0
                trialNum = 0
                data.write('%End of first session for ' + SUBJECT + '\n' + '\n')
                data.write('%Beginning of second session for ' + SUBJECT + '\n')
                data.flush()
                vrut.callback(vrut.KEYBOARD_EVENT, 'ExptKey')
                vrut.callback(vrut.TIMER_EVENT, 'ExptTimer')
                vrut.starttimer(BEFORE, .1)
            else:
                iCard[10].curtain(vrut.OPEN)
#**********************************************************************
# Subject inputs for Experimental Trials
#**********************************************************************
def ExptKey(key):
    global data
    global goNextTrial
    global currentTrial
    global rotate
    global counter
    global trialNum
    endTime = time.time()
    vrut.watch('task: %d' %(TASK))
    if key =='1' and HMD == 1:
        ms.reset()
        win32api.Sleep(200)
        vrut.watch('tracker has been reset')
    if TASK == BEFORE:
        if key == ' ':
            if ICONSET == 1:
                iCard[8].curtain(vrut.CLOSE)
            else:
                iCard[9].curtain(vrut.CLOSE)
            vrut.starttimer(START_TRIAL, .1)
    elif TASK == START_TRIAL:
        #***********************************************************
        # set trial to beginning of Stage 2 (2nd back surface)
        #***********************************************************
        if key == 'a':
            currentTrial = 21
            rotate = 21
            trialNum = 22
            if ICONSET == 1:
                if showEntry == F:
                    vrut.rotate(first, vrut.XAXIS, -90)
                elif showEntry == O:
                    vrut.rotate(first, vrut.YAXIS, 180)
                elif showEntry == L:
                    vrut.rotate(first, vrut.YAXIS, 90)
                elif showEntry == R:
                    vrut.rotate(first, vrut.YAXIS, -90)
                elif showEntry == C:
                    vrut.rotate(first, vrut.XAXIS, 90)
            else:
                if showEntry == F:
                    vrut.rotate(second, vrut.XAXIS, -90)
                elif showEntry == O:
                    vrut.rotate(second, vrut.YAXIS, 180)
                elif showEntry == L:
                    vrut.rotate(second, vrut.YAXIS, 90)
                elif showEntry == R:
                    vrut.rotate(second, vrut.YAXIS, -90)
                elif showEntry == C:
                    vrut.rotate(second, vrut.XAXIS, 90)
            vrut.starttimer(START_TRIAL, .1)
        elif key == 'b':
            currentTrial = 10
            rotate = 10
            if ICONSET == 1:
                if showEntry == F:
                    vrut.rotate(first, vrut.XAXIS, -90)
                elif showEntry == O:
                    vrut.rotate(first, vrut.YAXIS, 180)
                elif showEntry == L:
                    vrut.rotate(first, vrut.YAXIS, 90)
                elif showEntry == R:
                    vrut.rotate(first, vrut.YAXIS, -90)
                elif showEntry == C:
                    vrut.rotate(first, vrut.XAXIS, 90)
            else:
                if showEntry == F:
                    vrut.rotate(second, vrut.XAXIS, -90)
                elif showEntry == O:
                    vrut.rotate(second, vrut.YAXIS, 180)
                elif showEntry == L:
                    vrut.rotate(second, vrut.YAXIS, 90)
                elif showEntry == R:
                    vrut.rotate(second, vrut.YAXIS, -90)
                elif showEntry == C:
                    vrut.rotate(second, vrut.XAXIS, 90)
            vrut.starttimer(START_TRIAL, .1)
        elif key == ' ':
            cardBegin.curtain(vrut.CLOSE)
            vrut.starttimer(SHOW_TARGET, .1)
    elif TASK == TAKE_BREAK:
        if key == ' ':
            if rotate < len(allAngle):
                rotate = rotate + 1
            if currentTrial < len(allTarget):
                currentTrial = currentTrial + 1
            goNextTrial = TRUE
            # Hide the break card
            iCard[7].curtain(vrut.CLOSE)
            vrut.starttimer(START_TRIAL, .1)
    elif (TASK == SHOW_STIMULUS or TASK == MEMORY_TASK):
        if (key=='4' or key=='5' or key=='6' or key=='8'):
            vrut.watch('response registered')
            # sound a "beep"
            win32api.Beep(1300, 90)
            # write the data to file
            outdata = SUBJECT+'\t'+str(ICONSET)+'\t'+str(trialNum)+'\t'+'0'+'\t'+str(key)+'\t'+str(endTime-startMemory)
            outdata = outdata+'\t'+str(showEntry)+'\t'+str(showTarget)+'\t'+str(showRotate)+'\t'+str(showDirection)+'\n'
            data.write(outdata)
            data.flush()
    elif TASK == SEARCH_TASK:
        if key=='0':
            counter = SEARCH_TIME*SEARCH_TIME
            outdata = SUBJECT+'\t'+str(ICONSET)+'\t'+str(trialNum)+'\t'+'1'+'\t'+str(key)+'\t'+str(endTime-startSearch)
            outdata = outdata+'\t'+str(showEntry)+'\t'+str(showTarget)+'\t'+str(showRotate)+'\t'+str(showDirection)+'\n'
            data.write(outdata)
            data.flush()
    else:
        return
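# (Added note, inferred from the writes above): each response row in the data file holds
# subject, icon set, trial number, a phase flag ('0' for the orientation response, '1' for
# the search confirmation), the key pressed, the response time, and then the trial's entry
# surface, target surface, roll angle, and direction fields.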
#*********************************************************************
# Break for Subject                                                 **
#*********************************************************************
def TakeBreak():
    global TASK
    # show take break card
    iCard[7].curtain(vrut.OPEN)
    TASK = TAKE_BREAK
    vrut.callback(vrut.KEYBOARD_EVENT, 'DummyKey')
    vrut.callback(vrut.TIMER_EVENT, 'DummyTraining')
    vrut.starttimer(START_TRAINING, 0.5)
A.2 Training Day Script for Strategy Group
[code identical to that for the control group]
.
.
.
#**********************************************************************
# Timer for Training Exercises
#**********************************************************************
def TrainingTimer(timer):
    global TASK
    global ms
    global startTime
    global clicker
    global orient
    global stimulus
    TASK = timer
    vrut.watch('task: %d' %(TASK))
if timer == START_TRAINING:
frame.curtain(vrut.OPEN)
trainBegin.curtain(vrut.OPEN)
elif timer == START_BASELINE:
baselineBegin.curtain(vrut.OPEN)
elif timer == BASELINE:
practiceIcons.curtain(vrut.OPEN)
Arrows[2].curtain(vrut.OPEN)
Arrows[3].curtain(vrut.OPEN)
Arrows[4].curtain(vrut.OPEN)
elif timer == SWITCH:
switchCard.curtain(vrut.OPEN)
vrut.starttimer(LIMBO, 4.0)
elif timer == LIMBO:
switchCard.curtain(vrut.CLOSE)
cardBegin.curtain(vrut.OPEN)
elif timer == BASELINE2:
practiceIcons.curtain(vrut.OPEN)
Arrows[2].curtain(vrut.OPEN)
Arrows[3].curtain(vrut.OPEN)
Arrows[4].curtain(vrut.OPEN)
elif timer == ROLL:
vrut.rotate(practiceIcons, vrut.ZAXIS, 90)
practiceIcons.curtain(vrut.OPEN)
Arrows[0].curtain(vrut.OPEN)
Arrows[3].curtain(vrut.OPEN)
Arrows[4].curtain(vrut.OPEN)
elif timer == SWITCH2:
switchCard.curtain(vrut.OPEN)
vrut.starttimer(LIMBO2, 4.0)
elif timer == LIMBO2:
switchCard.curtain(vrut.CLOSE)
cardBegin.curtain(vrut.OPEN)
elif timer == PITCH:
vrut.rotate(practiceIcons, vrut.XAXIS, 90)
practiceIcons.curtain(vrut.OPEN)
Arrows[2].curtain(vrut.OPEN)
Arrows[3].curtain(vrut.OPEN)
Arrows[5].curtain(vrut.OPEN)
elif timer == SWITCH3:
switchCard.curtain(vrut.OPEN)
vrut.starttimer(START_PAIRS, 4.0)
elif timer == START_PAIRS:
switchCard.curtain(vrut.CLOSE)
pairsBegin.curtain(vrut.OPEN)
elif timer == PAIRS:
clicker = clicker + 1
vrut.watch('clicker = %d' %(clicker))
practiceIcons.curtain(vrut.OPEN)
if clicker == 1:
ArrowsRed[0].curtain(vrut.OPEN)
Arrows[2].curtain(vrut.OPEN)
elif clicker == 2:
ArrowsRed[0].curtain(vrut.CLOSE)
Arrows[2].curtain(vrut.CLOSE)
#vrut.rotate(maskOPPO, vrut.ZAXIS, 90)
ArrowsRed[1].curtain(vrut.OPEN)
Arrows[3].curtain(vrut.OPEN)
elif clicker == 3:
ArrowsRed[1].curtain(vrut.CLOSE)
Arrows[3].curtain(vrut.CLOSE)
#vrut.rotate(maskOPPO, vrut.XAXIS, 90)
Arrows[4].curtain(vrut.OPEN)
ArrowsRed[2].curtain(vrut.OPEN)
elif timer == SWITCH4:
switchCard.curtain(vrut.OPEN)
vrut.starttimer(LIMBO4, 4.0)
elif timer == LIMBO4:
switchCard.curtain(vrut.CLOSE)
vrut.rotate(practiceIcons, vrut.ZAXIS, -90)
cardBegin.curtain(vrut.OPEN)
elif timer == LEFT_DOWN:
practiceIcons.curtain(vrut.OPEN)
Arrows[1].curtain(vrut.OPEN)
Arrows[2].curtain(vrut.OPEN)
Arrows[4].curtain(vrut.OPEN)
elif timer == START_TRIADS:
clicker = 0
triadsBegin.curtain(vrut.OPEN)
elif timer == TRIADS:
clicker = clicker + 1
if clicker == 1:
practiceIcons.curtain(vrut.OPEN)
Arrows[2].curtain(vrut.OPEN)
Arrows[3].curtain(vrut.OPEN)
Arrows[4].curtain(vrut.OPEN)
elif clicker == 2:
example.curtain(vrut.OPEN)
elif clicker == 3:
vrut.rotate(practiceIcons, vrut.ZAXIS, 180)
mask.curtain(vrut.OPEN)
practiceIcons.curtain(vrut.OPEN)
Arrows[3].curtain(vrut.OPEN)
Arrows[4].curtain(vrut.OPEN)
elif clicker == 4:
mask.curtain(vrut.CLOSE)
vrut.rotate(maskTRIAD, vrut.ZAXIS, 180)
vrut.rotate(maskTRIAD, vrut.YAXIS, 180)
vrut.rotate(maskTRIAD, vrut.XAXIS, -90)
maskTRIAD.curtain(vrut.OPEN)
Arrows[3].curtain(vrut.CLOSE)
Arrows[4].curtain(vrut.CLOSE)
ArrowsRed[0].curtain(vrut.OPEN)
elif timer == POOP:
calibrateBegin.curtain(vrut.OPEN)
elif timer == CALIBRATION:
if len(calibrate) > 0:
startTime = time.time()
stimulus = choice(calibrate)
Arrows[stimulus].curtain(vrut.OPEN)
else:
vrut.starttimer(MOCK_EXPT, .1)
elif timer == MOCK_EXPT:
calibrateFinish.curtain(vrut.OPEN)
#**********************************************************************
# Subject inputs for Training Exercises
#**********************************************************************
def TrainingKey(key):
    global data
    global counter
    global clicker
    global direction
    global calibrate
endTime = time.time()
if key =='1' and HMD == 1:
ms.reset()
win32api.Sleep(200)
vrut.watch('tracker has been reset')
if key == ' ':
if TASK == START_TRAINING:
trainBegin.curtain(vrut.CLOSE)
vrut.starttimer(START_BASELINE, .1)
elif TASK == START_BASELINE:
baselineBegin.curtain(vrut.CLOSE)
vrut.starttimer(BASELINE, .1)
elif TASK == BASELINE:
practiceIcons.curtain(vrut.CLOSE)
Arrows[2].curtain(vrut.CLOSE)
Arrows[3].curtain(vrut.CLOSE)
Arrows[4].curtain(vrut.CLOSE)
vrut.starttimer(SWITCH, .1)
elif TASK == LIMBO:
cardBegin.curtain(vrut.CLOSE)
vrut.starttimer(ROLL, .1)
elif TASK == ROLL:
practiceIcons.curtain(vrut.CLOSE)
Arrows[0].curtain(vrut.CLOSE)
Arrows[3].curtain(vrut.CLOSE)
Arrows[4].curtain(vrut.CLOSE)
vrut.rotate(practiceIcons, vrut.ZAXIS, -90)
vrut.starttimer(SWITCH2, .1)
elif TASK == LIMBO2:
cardBegin.curtain(vrut.CLOSE)
vrut.starttimer(PITCH, .1)
elif TASK == PITCH:
practiceIcons.curtain(vrut.CLOSE)
Arrows[2].curtain(vrut.CLOSE)
Arrows[3].curtain(vrut.CLOSE)
Arrows[5].curtain(vrut.CLOSE)
vrut.rotate(practiceIcons, vrut.XAXIS, -90)
vrut.starttimer(SWITCH3, .1)
elif TASK == START_PAIRS:
pairsBegin.curtain(vrut.CLOSE)
vrut.starttimer(PAIRS, .1)
elif TASK == PAIRS:
if clicker < 3:
vrut.starttimer(PAIRS, .1)
else:
practiceIcons.curtain(vrut.CLOSE)
Arrows[4].curtain(vrut.CLOSE)
ArrowsRed[2].curtain(vrut.CLOSE)
vrut.starttimer(SWITCH4, .5)
elif TASK == LIMBO4:
cardBegin.curtain(vrut.CLOSE)
vrut.starttimer(LEFT_DOWN, .1)
elif TASK == LEFT_DOWN:
practiceIcons.curtain(vrut.CLOSE)
Arrows[1].curtain(vrut.CLOSE)
Arrows[2].curtain(vrut.CLOSE)
Arrows[4].curtain(vrut.CLOSE)
vrut.rotate(practiceIcons, vrut.ZAXIS, 90)
vrut.starttimer(START_TRIADS, .5)
elif TASK == START_TRIADS:
triadsBegin.curtain(vrut.CLOSE)
vrut.starttimer(TRIADS, .5)
elif TASK == TRIADS:
if clicker < 4:
if clicker == 1:
practiceIcons.curtain(vrut.CLOSE)
Arrows[2].curtain(vrut.CLOSE)
Arrows[3].curtain(vrut.CLOSE)
Arrows[4].curtain(vrut.CLOSE)
elif clicker == 2:
example.curtain(vrut.CLOSE)
vrut.starttimer(TRIADS, .5)
else:
ArrowsRed[0].curtain(vrut.CLOSE)
maskTRIAD.curtain(vrut.CLOSE)
practiceIcons.curtain(vrut.CLOSE)
vrut.rotate(practiceIcons, vrut.ZAXIS, 180)
vrut.starttimer(POOP, .1)
elif TASK == POOP:
calibrateBegin.curtain(vrut.CLOSE)
vrut.starttimer(CALIBRATION, 1.5)
elif TASK == MOCK_EXPT:
calibrateFinish.curtain(vrut.CLOSE)
vrut.callback(vrut.TIMER_EVENT, 'PracticeTimer')
vrut.callback(vrut.KEYBOARD_EVENT, 'PracticeKey')
vrut.starttimer(START_PRACTICE, .1)
else:
return
if key == 'e':
trainBegin.curtain(vrut.CLOSE)
vrut.callback(vrut.KEYBOARD_EVENT, 'ExptKey')
vrut.callback(vrut.TIMER_EVENT, 'ExptTimer')
vrut.starttimer(START_TRIAL, .5)
if TASK == CALIBRATION:
if (key == '5' and stimulus == 5) or (key == '4' and stimulus == 2):
Arrows[stimulus].curtain(vrut.CLOSE)
elif (key == '6' and stimulus == 0) or (key == '8' and stimulus == 1):
Arrows[stimulus].curtain(vrut.CLOSE)
else:
return
if endTime - startTime < 2.0:
calibrate.remove(stimulus)
vrut.starttimer(CALIBRATION, .5)
.
.
.
[remaining code identical to that for the control group]
A.3 Script for the Configurational Knowledge Retention Test
SUBJECT = 'Mnew_sub10_paste_day30'
#SUBJECT = 'foopaste'
FILENAME = 'ANIMALS1'
#FILENAME = 'ANIMALS2'
HMD = 1
# if '1' -- tracker will be used, set to '2' to disable tracker.
ICONSET = 1
# '1' for ANIMALS1
# '2' for ANIMALS2
import vrut
import os
from string import *
import win32api
import time
vrut.setfov(60, 1.333)
vrut.setipd(0.06)
if HMD == 1:
    vrut.go(vrut.STEREO | vrut.HMD)
    print 'adding sensor------------------------'
    ms = vrut.addsensor('intersense')
    vrut.tracker()
    print '*************', HMD
else:
    #vrut.go(vrut.STEREO | vrut.HMD)
    vrut.go(vrut.CONSOLE)
vrut.eyeheight(0)
vrut.translate(vrut.HEAD_POS,0,0,-.8)
#*************************************************************
#Load the pointing arrows, node_frame, chooser, and endCard **
#*************************************************************
frame = vrut.addchild('../models/ww_box.wrl')
chooser = vrut.addchild('../models/ww_chooser.wrl')
Arrows = []
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
Arrows.append(vrut.addchild('../models/arrow.wrl'))
cardBegin  = vrut.addchild('../models/ww_trialbegin.wrl')
endCard    = vrut.addchild('../models/retention_end.wrl')
newlife    = vrut.addchild('../models/ww_newlife.wrl')
backselect = vrut.addchild('../models/bs.wrl')
#*********************************************************************
#Load the six icons to be revealed to the subject
#[ICONS, OBJECTS, and POLAR]
#*********************************************************************
iCard = []
iCard.append(vrut.addchild('../models/picture_card2.wrl'))
iCard.append(vrut.addchild('../models/picture_card2.wrl'))
iCard.append(vrut.addchild('../models/picture_card2.wrl'))
iCard.append(vrut.addchild('../models/picture_card2.wrl'))
iCard.append(vrut.addchild('../models/picture_card2.wrl'))
iCard.append(vrut.addchild('../models/picture_card2.wrl'))
iTex = []
if ICONSET == 1:
    # Activate ANIMALS1
    iTex.append(vrut.addtexture('../nonpolarized/fish.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/turtles.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/parrots.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/lions.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/butterflies.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/giraffes.jpg'))
elif ICONSET == 2:
    # Activate ANIMALS2
    iTex.append(vrut.addtexture('../nonpolarized/deer.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/frogs.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/snakes.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/bluebirds.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/elephants.jpg'))
    iTex.append(vrut.addtexture('../nonpolarized/roosters.jpg'))
#*********************************************
# apply textures in iTex to cards in iCard **
#*********************************************
for i in range(0, 6):
    iCard[i].texture(iTex[i], 'card')
#************************************
# scaling & positioning of objects **
#************************************
for i in range(0, 6):
    Arrows[i].scale(0.5, 0.5, 0.5)
#surface on right
Arrows[4].rotate(0,1,0, 180)
Arrows[4].translate(.2,0,0)
#surface above
Arrows[0].rotate(0,0,1, -90)
Arrows[0].translate(0,.2,0)
#surface on left
Arrows[5].translate(-.2,0,0)
#surface below
Arrows[2].rotate(0,0,1, 90)
Arrows[2].translate(0,-.2,0)
#surface straight ahead
Arrows[3].rotate(0,1,0, 80)
Arrows[3].translate(0,0,.2)
#surface behind
Arrows[1].rotate(0,1,0, -100)
Arrows[1].translate(.1,0,-.2)
cardBegin.translate(0,0,-.2)
newlife.translate(0,-.1,-.4)
newlife.rotate(1,0,0, 10)
endCard.translate(0,0,-.5)
backselect.translate(0,-.15,-.4)
backselect.rotate(1,0,0, 10)
#form a 2x3 palette of icons from which to select
vrut.translate(iCard[0], -.2,.1,-.3)
vrut.translate(iCard[1], 0,.1,-.3)
vrut.translate(iCard[2], .2,.1,-.3)
vrut.translate(iCard[3], -.2,-.1,-.3)
vrut.translate(iCard[4], 0,-.1,-.3)
vrut.translate(iCard[5], .2,-.1,-.3)
chooser.rotate(0,1,0, 180)
chooser.scale(1.35, 1.35, 1.35)
vrut.translate(chooser, -.2,.1,.3)
#**********************************
#Hide all the geometry and icons **
#**********************************
frame.curtain(vrut.CLOSE)
cardBegin.curtain(vrut.CLOSE)
endCard.curtain(vrut.CLOSE)
chooser.curtain(vrut.CLOSE)
newlife.curtain(vrut.CLOSE)
backselect.curtain(vrut.CLOSE)
for group in Arrows:
    group.curtain(vrut.CLOSE)
for card in iCard:
    card.curtain(vrut.CLOSE)
#********************************************************************
#__________________________OPEN DATA FILE______________________
#********************************************************************
def OpenDataFile():
    global data
    data = open(SUBJECT,'a')         # 'a' for append
    print 'created output file: ', SUBJECT
    data.write('%Subject Name: ' + SUBJECT +'\n')
    data.write('%Retention Test: ' + FILENAME +'\n')
    data.write('%Columns ='+'\n')
    data.write('%subject'+'\t'+'set'+'\t'+'t#'+'\t'+'+/-'+'\t'+'icon'+'\t')
    data.write('RT1'+'\t'+'+/'+'\t'+'loc'+'\t'+'RT2'+'\t'+'RTtotal'+'\n'+'\n')
    vrut.watch('opening: '+SUBJECT)
#****************************************
# timer flags & conditional variables  **
#****************************************
START_EXP    = 0
START_TEST   = 1
SHOW_ICONS   = 2
NEXT_ICON    = 3
PLACE_ANSWER = 4
UNDO         = 5
LAST_CHANCE  = 10
END          = 11
TRUE  = 1
FALSE = 0
trialNum = 0
counter  = 0
index    = 0
carrier  = 0
iconCount  = [0, 1, 2, 3, 4, 5]
arrowCount = [0, 1, 2, 3, 4, 5]
iconStorage  = []
arrowStorage = []
iconsLeft  = 0
startTime  = 0
endTime    = 0
startTime2 = 0
endTime2   = 0
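# (Added note summarizing the test procedure implemented below): the six pictures are laid
# out as a 2x3 palette in front of the subject; the space bar cycles the chooser frame over
# the remaining pictures, 'c' selects the highlighted picture, the space bar then cycles a
# pointing arrow over the unfilled wall locations, and a second 'c' places the picture there.
# 'b' takes back the most recent placement, and the last remaining picture is placed
# automatically on the last open wall.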
#******************************************************************
#_________________________PROCEDURE__________________________
#******************************************************************
def mytimer(timer):
    global TASK
    global trialNum
    global goNextIcon
    global counter
    global index
    global MAX
    global startTime
    global startTime2
    TASK = timer
    if timer == START_EXP:
        OpenDataFile()
        #ms.reset()
        vrut.starttimer(START_TEST, .1)
        vrut.watch('Start...')
    elif timer == START_TEST:
        frame.curtain(vrut.OPEN)
        cardBegin.curtain(vrut.OPEN)
    elif timer == SHOW_ICONS:
        trialNum = trialNum + 1
        cardBegin.curtain(vrut.CLOSE)
        for card in iCard:
            card.curtain(vrut.OPEN)
        if trialNum == 1:
            backselect.curtain(vrut.OPEN)
        else:
            backselect.curtain(vrut.CLOSE)
        startTime = time.time()
        if len(iconCount) == 0:
            vrut.starttimer(LAST_CHANCE)
        else:
            chooser.curtain(vrut.OPEN)
            vrut.watch('icons left: %d' %(len(iconCount)))
            vrut.starttimer(NEXT_ICON)
    elif timer == UNDO:
        for card in iCard:
            card.curtain(vrut.OPEN)
        Arrows[arrowCount[index-1]].curtain(vrut.CLOSE)
        startTime = time.time()
        counter = 0
        index = 0
        chooser.curtain(vrut.OPEN)
        vrut.watch('icons left now: %d' %(len(iconCount)))
        vrut.starttimer(NEXT_ICON)
    elif timer == NEXT_ICON:
        MAX = len(iconCount)
        if counter == MAX:
            counter = 0
        if iconCount[counter] == 0:
            chooser.translate(-.2,.1,.3)
        elif iconCount[counter] == 1:
            chooser.translate(0,.1,.3)
        elif iconCount[counter] == 2:
            chooser.translate(.2,.1,.3)
        elif iconCount[counter] == 3:
            chooser.translate(-.2,-.1,.3)
        elif iconCount[counter] == 4:
            chooser.translate(0,-.1,.3)
        elif iconCount[counter] == 5:
            chooser.translate(.2,-.1,.3)
        counter = counter + 1
    elif timer == PLACE_ANSWER:
        startTime2 = time.time()
        if index < MAX:
            if index == 0:
                Arrows[arrowCount[index]].curtain(vrut.OPEN)
            elif index > 0:
                Arrows[arrowCount[index]].curtain(vrut.OPEN)
                Arrows[arrowCount[index-1]].curtain(vrut.CLOSE)
        else:
            Arrows[arrowCount[MAX-1]].curtain(vrut.CLOSE)
            Arrows[arrowCount[0]].curtain(vrut.OPEN)
            index = 0
        index = index + 1
    elif timer == LAST_CHANCE:
        backselect.curtain(vrut.CLOSE)
        newlife.curtain(vrut.OPEN)
    elif timer == END:
        newlife.curtain(vrut.CLOSE)
        #endCard.curtain(vrut.OPEN)
#********************************************************************
#_________________________SUBJECT CONTROLS_______________________
#********************************************************************
def mykey(key):
global data
global counter
global index
global carrier
global choice
global location
global iconCount
global arrowCount
global iconStorage
global arrowStorage
global MAX
global outdata
global endTime
if key == ' ':
if TASK == START_TEST:
vrut.starttimer(SHOW_ICONS, 0.01)
elif TASK == NEXT_ICON:
vrut.starttimer(NEXT_ICON)
elif TASK == PLACE_ANSWER:
vrut.starttimer(PLACE_ANSWER)
elif key == 'c' and TASK == NEXT_ICON:
for card in iconCount:
#select an icon
iCard[card].curtain(vrut.CLOSE) #from the palette
backselect.curtain(vrut.CLOSE)
chooser.curtain(vrut.CLOSE)
choice = iconCount[counter-1] #**This is the icon
# they choose.
iCard[choice].curtain(vrut.OPEN)
iconStorage.append(choice)
#** record the time it took for the subject to choose an icon
#*******************************************************
endTime = time.time()
outdata = SUBJECT+'\t'+str(ICONSET)+'\t'
+str(len(iconStorage))+'\t'+'1'+'\t'
+str(choice)+'\t'+str(endTime-startTime)
#*******************************************************
iconCount.remove(choice)
vrut.watch('icon removed = %d' %(choice))
vrut.starttimer(PLACE_ANSWER, .01)
elif key == 'b' and TASK == PLACE_ANSWER:
iconCount.append(choice)
iconStorage.remove(choice)
#**Write data to file*********************************
endTime2 = time.time()
outdata = outdata+'\t'+'0'+'\t'+'0'+'\t'
+str(endTime2-startTime2)+'\t'
outdata = outdata + str((endTime2-startTime2)
+(endTime-startTime))+'\n'
data.write(outdata)
data.flush()
#****************************************************
vrut.starttimer(UNDO, .01)
#****************************
#put the chosen icon in the
#desired location
#****************************
elif key == 'c' and TASK == PLACE_ANSWER:
carrier = carrier + 1
location = arrowCount[index-1]
Arrows[location].curtain(vrut.CLOSE)
iCard[choice].translate(0,0,0)
if location == 4:
#on right
vrut.rotate(iCard[choice], vrut.YAXIS, 90)
iCard[choice].curtain(vrut.OPEN)
elif location == 0:
#above
vrut.rotate(iCard[choice], vrut.XAXIS, -90)
iCard[choice].curtain(vrut.OPEN)
elif location == 5:
#on left
vrut.rotate(iCard[choice], vrut.YAXIS, -90)
iCard[choice].curtain(vrut.OPEN)
elif location == 2:
#below
vrut.rotate(iCard[choice], vrut.XAXIS, 90)
iCard[choice].curtain(vrut.OPEN)
elif location == 3:
#straight ahead
iCard[choice].curtain(vrut.OPEN)
elif location == 1:
#behind
vrut.rotate(iCard[choice], vrut.YAXIS, 180)
iCard[choice].curtain(vrut.OPEN)
#**Write data to file************************************
endTime2 = time.time()
outdata = outdata+'\t'+'1'+'\t'+str(location)+'\t'
+str(endTime2-startTime2)+'\t'
outdata = outdata + str((endTime2-startTime2)
+(endTime-startTime))+'\n'
data.write(outdata)
data.flush()
#********************************************************
arrowStorage.append(location)
arrowCount.remove(location)
vrut.watch('times placed icon: %d' %(carrier))
vrut.watch('arrow removed: %d' %(location))
iconsLeft = len(iconCount)
location = arrowCount[0]
vrut.watch('iconsLeft = %d' %(iconsLeft))
if iconsLeft == 1:
choice = iconCount[0]
iCard[choice].translate(0,0,0)
if location == 4:
#on right
vrut.rotate(iCard[choice], vrut.YAXIS, 90)
iCard[choice].curtain(vrut.OPEN)
elif location == 0:
#above
vrut.rotate(iCard[choice], vrut.XAXIS, -90)
iCard[choice].curtain(vrut.OPEN)
elif location == 5:
#on left
vrut.rotate(iCard[choice], vrut.YAXIS, -90)
iCard[choice].curtain(vrut.OPEN)
elif location == 2:
#below
vrut.rotate(iCard[choice], vrut.XAXIS, 90)
iCard[choice].curtain(vrut.OPEN)
elif location == 3:
#straight ahead
iCard[choice].curtain(vrut.OPEN)
elif location == 1:
#behind
vrut.rotate(iCard[choice], vrut.YAXIS, 180)
iCard[choice].curtain(vrut.OPEN)
#**Write last choice to file**********************
outdata = SUBJECT+'\t'+str(ICONSET)+'\t'+'6'+'\t'+'1'
+'\t'+str(choice)+'\t'
+str(endTime-startTime)
outdata = outdata+'\t'+'1'+'\t'+str(arrowCount[0])
+'\t'+ str(endTime2-startTime2)+'\t'
outdata = outdata + str((endTime-startTime)
+(endTime2-startTime2))+'\n'
data.write(outdata)
data.flush()
#*********************************************
vrut.watch('times placed icon: 6')
vrut.watch('arrow removed: %d' %(arrowCount[0]))
iconStorage.append(choice)
iconCount.remove(choice)
arrowStorage.append(arrowCount[0])
arrowCount.remove(arrowCount[0])
vrut.watch('arrowStorage has %d elements' %(len(arrowStorage)))
index = 0
counter = 0
vrut.starttimer(SHOW_ICONS)
elif key=='b':
if TASK == NEXT_ICON:
#***********************************
# remove icon from previous location
# and return to placing arrows
#***********************************
arrowCount.append(arrowStorage[carrier-1])
# the following two lines need to be indented inside the "if" block:
vrut.watch('arrow replaced: %d' %(arrowStorage[carrier-1]))
vrut.watch('icon to be moved: %d' %(iconStorage[len(iconStorage)-1]))
moveIcon = iconStorage[len(iconStorage)-1]
if TASK == LAST_CHANCE:
newlife.curtain(vrut.CLOSE)
#on right
if arrowStorage[carrier-1] == 4:
vrut.rotate(iCard[moveIcon], vrut.YAXIS, -90)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
#above
elif arrowStorage[carrier-1] == 0:
vrut.rotate(iCard[moveIcon], vrut.XAXIS, 90)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
#on left
elif arrowStorage[carrier-1] == 5:
vrut.rotate(iCard[moveIcon], vrut.YAXIS, 90)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
#below
elif arrowStorage[carrier-1] == 2:
vrut.rotate(iCard[moveIcon], vrut.XAXIS, -90)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
#straight ahead
elif arrowStorage[carrier-1] == 3:
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
#behind
elif arrowStorage[carrier-1] == 1:
vrut.rotate(iCard[moveIcon], vrut.YAXIS, 180)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
arrowStorage.remove(arrowStorage[carrier-1])
carrier = carrier - 1
iconCount.append(moveIcon)
vrut.watch('icon replaced = %d' %(moveIcon))
#**Write data to file*************************
endTime = time.time()
outdata = SUBJECT+'\t'+str(ICONSET)+'\t'
+str(len(iconStorage))+'\t'+'0'+'\t'
+str(moveIcon)+'\t'+str(endTime-startTime)
outdata = outdata+'\t'+'0'+'\t'+'0'+'\t'+'0'+'\t'
outdata = outdata + str(endTime-startTime)+'\n'
data.write(outdata)
data.flush()
#*******************************************************
iconStorage.remove(moveIcon)
vrut.starttimer(UNDO, .01)
#***********************************
#remove icon from previous location
#and return to placing arrows
#***********************************
elif TASK == LAST_CHANCE:
arrowCount.append(arrowStorage[4])
arrowCount.append(arrowStorage[5])
vrut.watch('arrow replaced: %d' %(arrowStorage[4]))
vrut.watch('icon to be moved: %d' %(iconStorage[4]))
vrut.watch('arrow replaced: %d' %(arrowStorage[5]))
vrut.watch('icon to be moved: %d' %(iconStorage[5]))
moveIcon = iconStorage[4]
moveIcon2 = iconStorage[5]
newlife.curtain(vrut.CLOSE)
#******************************
#move first of last two icons
#******************************
if arrowStorage[4] == 4:
#on right
vrut.rotate(iCard[moveIcon], vrut.YAXIS, -90)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
elif arrowStorage[4] == 0:
#above
vrut.rotate(iCard[moveIcon], vrut.XAXIS, 90)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
elif arrowStorage[4] == 5:
#on left
vrut.rotate(iCard[moveIcon], vrut.YAXIS, 90)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
elif arrowStorage[4] == 2:
#below
vrut.rotate(iCard[moveIcon], vrut.XAXIS, -90)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
elif arrowStorage[4] == 3:
#straight ahead
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
elif arrowStorage[4] == 1:
#behind
vrut.rotate(iCard[moveIcon], vrut.YAXIS, 180)
if moveIcon == 0:
vrut.translate(iCard[moveIcon], -.2,.1,-.3)
elif moveIcon == 1:
vrut.translate(iCard[moveIcon], 0,.1,-.3)
elif moveIcon == 2:
vrut.translate(iCard[moveIcon], .2,.1,-.3)
elif moveIcon == 3:
vrut.translate(iCard[moveIcon], -.2,-.1,-.3)
elif moveIcon == 4:
vrut.translate(iCard[moveIcon], 0,-.1,-.3)
elif moveIcon == 5:
vrut.translate(iCard[moveIcon], .2,-.1,-.3)
#******************************
#move second of last two icons
#******************************
if arrowStorage[5] == 4:
vrut.rotate(iCard[moveIcon2], vrut.YAXIS, -90)
if moveIcon2 == 0:
vrut.translate(iCard[moveIcon2], -.2,.1,-.3)
elif moveIcon2 == 1:
vrut.translate(iCard[moveIcon2], 0,.1,-.3)
elif moveIcon2 == 2:
vrut.translate(iCard[moveIcon2], .2,.1,-.3)
elif moveIcon2 == 3:
vrut.translate(iCard[moveIcon2], -.2,-.1,-.3)
elif moveIcon2 == 4:
vrut.translate(iCard[moveIcon2], 0,-.1,-.3)
elif moveIcon2 == 5:
vrut.translate(iCard[moveIcon2], .2,-.1,-.3)
elif arrowStorage[5] == 0:
vrut.rotate(iCard[moveIcon2], vrut.XAXIS, 90)
if moveIcon2 == 0:
vrut.translate(iCard[moveIcon2], -.2,.1,-.3)
elif moveIcon2 == 1:
vrut.translate(iCard[moveIcon2], 0,.1,-.3)
elif moveIcon2 == 2:
vrut.translate(iCard[moveIcon2], .2,.1,-.3)
elif moveIcon2 == 3:
vrut.translate(iCard[moveIcon2], -.2,-.1,-.3)
elif moveIcon2 == 4:
vrut.translate(iCard[moveIcon2], 0,-.1,-.3)
elif moveIcon2 == 5:
vrut.translate(iCard[moveIcon2], .2,-.1,-.3)
elif arrowStorage[5] == 5:
vrut.rotate(iCard[moveIcon2], vrut.YAXIS, 90)
if moveIcon2 == 0:
vrut.translate(iCard[moveIcon2], -.2,.1,-.3)
elif moveIcon2 == 1:
vrut.translate(iCard[moveIcon2], 0,.1,-.3)
elif moveIcon2 == 2:
vrut.translate(iCard[moveIcon2], .2,.1,-.3)
elif moveIcon2 == 3:
vrut.translate(iCard[moveIcon2], -.2,-.1,-.3)
elif moveIcon2 == 4:
vrut.translate(iCard[moveIcon2], 0,-.1,-.3)
elif moveIcon2 == 5:
vrut.translate(iCard[moveIcon2], .2,-.1,-.3)
elif arrowStorage[5] == 2:
vrut.rotate(iCard[moveIcon2], vrut.XAXIS, -90)
if moveIcon2 == 0:
vrut.translate(iCard[moveIcon2], -.2,.1,-.3)
elif moveIcon2 == 1:
vrut.translate(iCard[moveIcon2], 0,.1,-.3)
elif moveIcon2 == 2:
vrut.translate(iCard[moveIcon2], .2,.1,-.3)
elif moveIcon2 == 3:
vrut.translate(iCard[moveIcon2], -.2,-.1,-.3)
elif moveIcon2 == 4:
vrut.translate(iCard[moveIcon2], 0,-.1,-.3)
elif moveIcon2 == 5:
vrut.translate(iCard[moveIcon2], .2,-.1,-.3)
elif arrowStorage[5] == 3:
if moveIcon2 == 0:
vrut.translate(iCard[moveIcon2], -.2,.1,-.3)
elif moveIcon2 == 1:
vrut.translate(iCard[moveIcon2], 0,.1,-.3)
elif moveIcon2 == 2:
vrut.translate(iCard[moveIcon2], .2,.1,-.3)
elif moveIcon2 == 3:
vrut.translate(iCard[moveIcon2], -.2,-.1,-.3)
elif moveIcon2 == 4:
vrut.translate(iCard[moveIcon2], 0,-.1,-.3)
elif moveIcon2 == 5:
vrut.translate(iCard[moveIcon2], .2,-.1,-.3)
elif arrowStorage[5] == 1:
vrut.rotate(iCard[moveIcon2], vrut.YAXIS, 180)
if moveIcon2 == 0:
vrut.translate(iCard[moveIcon2], -.2,.1,-.3)
elif moveIcon2 == 1:
vrut.translate(iCard[moveIcon2], 0,.1,-.3)
elif moveIcon2 == 2:
vrut.translate(iCard[moveIcon2], .2,.1,-.3)
elif moveIcon2 == 3:
vrut.translate(iCard[moveIcon2], -.2,-.1,-.3)
elif moveIcon2 == 4:
vrut.translate(iCard[moveIcon2], 0,-.1,-.3)
elif moveIcon2 == 5:
vrut.translate(iCard[moveIcon2], .2,-.1,-.3)
arrowStorage.remove(arrowStorage[5])
arrowStorage.remove(arrowStorage[4])
carrier = carrier - 1
iconCount.append(moveIcon)
iconCount.append(moveIcon2)
vrut.watch('icon replaced = %d' %(moveIcon))
vrut.watch('icon replaced = %d' %(moveIcon2))
#**Write data to file*********************************
endTime = time.time()
outdata = SUBJECT+'\t'+str(5)+'\t'+'0'+'\t'+str(moveIcon2)+'\t'+str(endTime-startTime)+'\n'
data.write(outdata)
data.flush()
outdata = SUBJECT+'\t'+str(4)+'\t'+'0'+'\t'+str(moveIcon)+'\t'+str(endTime-startTime)+'\n'
data.write(outdata)
data.flush()
#******************************************************
iconStorage.remove(moveIcon2)
iconStorage.remove(moveIcon)
vrut.starttimer(UNDO, .01)
elif key == 'c' and TASK == LAST_CHANCE:
vrut.starttimer(END, .01)
elif key =='1' and HMD == 1:
ms.reset()
win32api.Sleep(200)
vrut.watch('tracker has been reset')
vrut.watch('task: %d' %(TASK))
if key == 'b':
win32api.Beep(1000, 90)
if key == 'c':
win32api.Beep(1300, 90)
#___________________________________________________________________
vrut.callback(vrut.KEYBOARD_EVENT, 'mykey')
vrut.callback(vrut.TIMER_EVENT, 'mytimer')
print 'all done'
vrut.starttimer(START_EXP, 0.5)
#vrut.go(vrut.CONSOLE | vrut.BETA, __name__)
Appendix B:
Subject History Questionnaire
Do you have medical conditions that would be aggravated if you
became motion sick? (yes,no)
If you said “yes,” you should not be a subject for this experiment and you should
stop right now. Otherwise, please continue...
Have you ever experienced dizzy spells? (yes,no)
If yes, can you please describe these experiences?
or ... motion sickness? (yes,no)
If yes, can you please explain some of your experiences?
What is your dominant eye? (left,right)
To find your dominant eye, hold your index finger up about 10 inches from
your eyes and close each eye one at a time. If you close one eye and your
finger seems to move, the closed eye is dominant.
Do you have normal peripheral vision? (yes,no)
Do you have normal depth perception? (yes,no)
Do you need corrective lenses? (yes,no)
Check all that apply. I have...
astigmatism
dyslexia (type(s): )
nearsightedness
blind spots (where: )
farsightedness
phoria
color-blindness (color(s): )
wall eye
strabismus
Do you have any hearing loss? (yes,no)
If yes, please explain how you lost, or are losing, your hearing.
Do you have any balance problems? (yes,no)
If yes, please describe the nature of your balance problem(s).
Do you have a history of chronic ear infections? (yes,no)
If yes, can you please elaborate?
What is your gender? M F
Previous Experience
Please describe any experience you have had with Virtual Reality
systems.
Appendix C:
Paper and Pencil Tests
Group Embedded Figures Test (Witkin, Oltman, Raskin, & Karp, 1971). This paper-and-pencil
measure of field dependence/independence requires participants to identify perceptually a target
geometric figure embedded within irrelevant stimulus content. The test consisted of 3 sections
containing 7, 9, and 9 items, respectively. The test was timed with participants receiving 3 minutes to
complete each of the 3 sections. Participants’ scores were computed as the total number of figures
correctly identified. The higher the score, the more field-independent the participant; the lower the score,
the more field-dependent the participant. The test manual reports an average split-half reliability of 0.82
for sets 2 and 3 and a test-retest reliability coefficient of 0.89.
Card Rotations Test (Ekstrom, French, Harman, & Derman, 1976). This paper-and-pencil
measure of spatial orientation consists of 2 sets of 10 items, each of which depicts a drawing of a card cut
into an irregular shape. To its right are eight other drawings of the same card, sometimes merely rotated
and sometimes turned over to its other side. Participants must indicate for each drawing whether the card
has been rotated (i.e., it is the same as the original) or turned over (i.e., it is different from the original).
The test is timed, with participants receiving three minutes for each set of items. Reliability coefficients
ranging from .80 to .89 are reported in the test manual.
Cube Comparisons Test (Ekstrom, French, Harman, & Derman, 1976). This paper-and-pencil
measure of spatial orientation consists of 2 sets of 21 items, each of which depicts two drawings of a cube.
Assuming no cube can have two faces that are alike, the participant indicates whether the two drawings
for each item could possibly be of the same cube or could not be of the same cube. The test is timed, with
participants receiving three minutes for each set of items. Reliability coefficients ranging from .47 to .84
are reported in the test manual.
Appendix D:
Instruction Slide Shows
D.1. Strategy Group Instructions
Slide 1-2:
3-Dimensional Spatial Learning
in a Virtual Space Station
NODE:
An Instructional Slideshow
Jason T. Richards
Master’s Thesis Project
MVL / Aero & Astro / MIT
INTRODUCTION
In the course of our normal lives, if we
enter a room, look around a bit, and walk
out again, we can usually easily describe
what the room was like, and where
prominent objects were in the room. But
astronauts living on the MIR space station
say that when they try to do this in three
dimensions, the task is not so easy. For
example, the MIR station consists of six
different modules, all connected together at
right angles by a central hub called a
"node". Astronauts floating inside the node
are surrounded by hatches leading in six
different directions.
[Slide image: MIR schematic with the NODE labeled]
NEXT
Slide 3:
Intro. continued...
Learning to recognize the correct hatch to go through is difficult, they say,
because you can be floating in practically any orientation, and learning the
hatch arrangement in a way that is independent of the orientation of your own
body isn't a task we have experience doing on earth.
This experiment, sponsored by NASA, is designed to study how people learn
to do an analogous task: learning the arrangement of six different objects, or
pictures on the walls of a virtual cubic room in this case. Each picture is
unique and gives the surface on which it is located a distinct spatial identity
relative to the other pictures and their respective surfaces.
Essentially, we will be spinning you around inside this virtual room in all
directions and showing it to you from several different orientations. Your
task will be to learn the arrangement of the pictures well enough so you can
mentally visualize it (i.e. “see it in your mind’s eye” using mental imagery)
and predict where each picture is regardless of your orientation inside the
room.
NEXT
Slide 4:
Intro. continued...
For example, you can probably imagine what this lab room would look like if
you turned around and faced the other direction. People do this all the time. It
gets hard when you have to imagine the room in a tilted or upside down
orientation.
We believe that there are a few tricks that are helpful to use when beginning to
learn the arrangement. Remember, your ultimate goal is to be able to perform
this task by visualization alone! But, when you get stuck, these tricks will be
there to help you out. We will refer to these three tricks by the following
names: (1) Baseline Orientation, (2) Pairs, and (3) Triads.
You are going to get some practice with these tricks in a virtual node whose
pictures are related in some simple way.
Let’s start with Baseline Orientation…
ON TO Baseline Orientation
Slide 5:
Baseline Orientation
Mentally visualizing yourself in different orientations is difficult to do, and
it’s very cumbersome to try to take in information about all 6 pictures at once!
Consequently, it makes sense to group the information into useful “chunks.”
The first chunk will establish your baseline orientation. This will be the first
orientation you master in the experiment.
“The Baseline 3”: Start out by memorizing the locations of 3 pictures. For
example, 1) the one in front of you, 2) the one below you, and 3) the one to
your left. This requires that you first NAME the pictures and remember them
in terms of their locations relative to your body as seen from the baseline
orientation. Make a genuine attempt to establish a picture of the Baseline 3 in
your mind.
Take a look at the practice node, and try to remember the pictures with arrows
pointing to them as an example Baseline 3.
PUT ON THE VIRTUAL REALITY GOGGLES
Slide 6:
Baseline continued...
Let’s try visualizing being in a different orientation relative to the baseline
orientation…
Try to visualize what the Baseline 3 would look like if you’d just been
tilted right-shoulder down (i.e. 90 degrees clockwise) in the node.
Where do you think the pictures in the Baseline 3 will appear relative to
your body?
Take a look and find out. The Baseline 3 will be highlighted with arrows
as before.
PUT ON THE VIRTUAL REALITY HELMET
Slide 7:
Baseline continued...
You might be able to visualize other possible orientations in the node
and how they would affect the appearance of the Baseline 3.
Visualize yourself in the baseline orientation. Suppose you had just
been tilted back 90 degrees (such that you are now lying on your back)
from the baseline orientation. Can you “see in your mind’s eye” where
the Baseline 3 pictures would appear? Let’s see if you’re right…
(Arrows will still be pointing to the Baseline 3.)
PUT ON THE VIRTUAL REALITY GOGGLES
ASIDE: You can probably imagine how the Baseline 3 would appear if you’d
been turned to the left or right inside the node.
Next, we’ll show you how to deal with the other 3
pictures in the node...
Slide 8:
PAIRS
A Pair is defined to be two pictures that are located across from each other
inside the node.
As you may have already noticed, each pair in the practice node is
characterized by a different color: Red, Blue, or Black.
This was only done to emphasize the Pairs concept. Colors may not be used in the real experiment!
The trick to learning Pairs is to NAME each of the pictures in a way that
suggests which picture is across from it in the node. For example, in the
practice node, “Hearts (left) is across from Diamonds (right)” is an obvious
pairing due to the natural oppositeness of the pictures. Finding antonymic
properties between paired pictures makes remembering and visualizing them
much easier. In order to get good with Pairs, you must really try to visualize
them. This is yet another way to chunk information about the pictures into
meaningful groups.
PUT ON THE VIRTUAL GOGGLES TO VIEW PAIRS
Slide 9:
Putting Baseline and Pairs
together...
Baseline Orientation and Pairs are the only two tricks you really need
to visualize the picture arrangement from different orientations.
Let’s try it…
Imagine that you’ve been tilted left-shoulder down (relative to the
baseline orientation) inside the practice node. Using the Baseline 3
and Pairs tricks, can you visualize where all of the pictures will
appear? Take a look…
PUT ON THE VIRTUAL REALITY GOGGLES
Slide 10:
Triads
Although the Baseline Orientation and Pairs strategies are all you need
to mentally visualize the picture arrangement, it is helpful to have the
Triad trick to fall back on if need be.
A Triad is defined as the triplet formed by three adjacent surfaces
joined at any of the 8 corners of the node.
You have already memorized one triad -- the Baseline 3. Again, this is
the only triad you really need to memorize. We hope that you will be
able to mentally visualize this Triad from different orientations without
the help of any tricks.
However, the Triad trick provides a rule by which you may locate
pictures around you…
NEXT
Slide 11:
Remember the “right-hand rule” having to do with 3D coordinate axes?
(It can also be a left-hand rule.) We propose that you use some
variation of this rule to find pictures when your ability to visualize is not
yet adequately developed. Here is an example:
• Extend all of your fingers, take note of the surface to
which your fingers are pointing
• “Sweep” fingers around to point to an adjacent surface
• Point your thumb to a third surface
• Remember the order in which these pictures are pointed to
in your rule
This is a way for you to quickly locate pictures around you when
visualization fails. You should only try to use it when you are struggling
to visualize.
Try establishing one of the Hand Rules now with the Baseline 3.
PUT ON THE VIRTUAL REALITY GOGGLES
NEXT
Slide 12:
Triads continued...
As before, you can imagine how the locations of the pictures in the node
will change relative to you given a change in your viewpoint.
When you put the HMD on this time, you will be in a new orientation in
the node. You will see two of the pictures in the practice node -- the one
in front of you and the one below you -- from this new orientation.
First, try to mentally visualize where the “hearts” picture is using baseline
orientation and Pairs;
Then, try to verify the location of this picture with the Hand Rule.
PUT ON THE VIRTUAL REALITY HELMET
Eventually, you may be able to recognize triads other than the Baseline 3 and
use them to your advantage. However, this usually does not happen without
a lot of practice. In general, it is to your advantage to pay attention to triads!
Slide 13:
Experimental Setup
The trials you experience during the experiment will proceed in the
following manner:
1. Before each trial, you will see a card that says “Begin”,
indicating that a new trial will begin when you press the spacebar. Once
you press the spacebar, you’ll see one of the pictures appear on a green
background with the word “TARGET” written on it. It will disappear
after 2 SECONDS. You need to remember this picture for it is the target
that you will attempt to locate…
2. Next, only 2 of the surfaces in the node will be uncovered –
namely, the one in front of you and the one below you - with arrows
pointing to them. They will be visible for only 3 SECONDS. During this
time, you should try to imagine what your orientation is inside the room
and try to visualize where all of the pictures are around you. You can
make a response during this time if you know where the target is.
NEXT
Slide 14:
Experimental Setup continued...
3. For the next 7 SECONDS, all of the pictures will be covered up.
If you haven’t made a response as to where you think the target is located,
you should try to get that in during these 7 seconds. In any case, try to
respond as fast as possible without sacrificing accuracy.
There are only 4 possible target locations (right, left, above, and behind).
Target responses are made relative to your body by using the numeric
keypad portion of a computer keyboard (see below).
[Keypad diagram: the gray keys are the response keys (“above,” “behind,” “to left,” “to right”), the white keys are not used, and the “0” key terminates the trial.]
NEXT
Slide 15:
Experimental Setup continued...
Finally…
4. For the last 7 SECONDS of the trial, all
of the pictures will be uncovered for you
to see where the target picture was. You
should use as much of this time as possible to study the entire spatial arrangement of the pictures.
*Once you become very confident in
your spatial memory, you have the
option to move on to the next trial
before the 7 seconds is up by pressing
the ‘0’ key as shown in the diagram.
Please, do not do this unless you really
are confident.
[Keypad diagram: the “0” key terminates the trial.]
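For reference, the trial phases described in Slides 13-15 can be summarized as a small timing table. This is only a descriptive sketch of the protocol as explained to subjects, not code from the experiment program, and the phase names are ours.

# One experimental trial, as described to subjects (durations in seconds).
TRIAL_PHASES = [
    ("target picture shown on a green background", 2),
    ("front and below surfaces uncovered, with arrows", 3),
    ("all surfaces covered; respond if you have not already", 7),
    ("all surfaces uncovered for study ('0' skips ahead)", 7),
]
print(sum(seconds for _, seconds in TRIAL_PHASES))  # 19 seconds of timed phases per trial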
Slide 16:
Experimental Setup continued...
The response keys have velcro on them (the “5” has a knob on it) so you can
feel where they are. Your index finger, middle finger, and ring finger will
naturally rest over the “4,” “5,” and “6” keys. Your thumb naturally rests on
the “0” key. You can move your middle finger up to push the “8” key.
Calibration Exercise: To get familiar with the keys, you will try to match
arrows that appear in the node by pressing the corresponding response keys as
fast as possible. This will continue until you respond fast enough 5 times for
each surface.
Practice Trials: Then, you will do 5 practice trials in the practice node you saw
earlier. The trials will proceed automatically as described in Experimental
Setup. This is mainly so you can get used to the timing of the trials. Don’t
worry too much about trying to memorize this configuration.
PUT ON THE VIRTUAL REALITY GOGGLES
ON TO Trials
Slide 17:
Trials
In the real experiment, you will complete the following number
of trials in each of 2 virtual nodes:
Trials 1 - 4: Your orientation inside the node will be the same in each of
these trials. This is when you should try to establish your Baseline
Orientation (and your “Baseline 3”).
Trials 5 - 12: We will randomly vary your simulated tilt orientation
while you face a NEW direction in the virtual node.
Trials 13 - 36: You could be in any possible orientation within the node;
i.e. facing any direction while tilted by any amount. A difficult task, yes, but one of which we are confident you are capable!
So, 36 trials in one node, then 36 trials in a second node.
NEXT
Slide 18:
THE END
The investigator will answer any other questions you have before beginning the
real thing.
While doing the real experiment, remember the following things:
1. Your ultimate goal is to be able to mentally visualize the node from any
orientation during the last 24 trials.
2. Concentrate on mentally visualizing the node between trials.
3. Rehearse rules (Baseline Orientation, Pairs, Triads) between trials and
during the study time in preparation for the last 24 trials.
4. You can make multiple guesses in each trial - we’ll take your last guess
as long as you get it in before the pictures appear at the end of the trial.
GOOD LUCK!
D.2. Control Group Instructions
.
.
.
[Slides 1–3 for the Control Group = slides 1-3 for the Strategy Group]
.
.
.
Slide 4:
Intro. continued...
Before you actually try to do this, we’ll first allow you to observe a practice
virtual room similar to the one you’ll encounter in the experiment. You will
NOT see these pictures in the real experiment. This is only so you can get used
to wearing the Head Mounted Display and looking around in a virtual
environment.
Make sure you look around in all directions, and try to imagine that you’re
actually in this virtual room.
PUT ON THE VIRTUAL GOGGLES
NEXT
Slide 5:
Intro. continued...
Now, imagine that your simulated body orientation has just been tilted
right-shoulder down (i.e. 90 degrees clockwise) in the node.
Where do you think the pictures will appear relative to your body?
Take a look and find out. Notice how the pictures in the virtual node have
changed location relative to your body.
PUT ON THE VIRTUAL GOGGLES
ASIDE: You may also be able to imagine how the pictures would appear if your
simulated orientation had been rotated in other directions. Which pictures would
be affected?
ON TO Experimental Setup
.
.
.
[Slides 6-9 for the Control Group = 13-16 for the Strategy Group]
.
.
.
Slide 10:
Trials
In the real experiment, you will complete the following number
of trials in each of 2 virtual nodes:
Trials 1 - 4: Your orientation inside the node will be the same in each of
these trials. That’s our way of taking it easy on you in the very beginning.
Trials 5 - 12: We will randomly vary your simulated tilt orientation
while you face a NEW direction in the virtual node.
Trials 13 - 36: You could be in any possible orientation within the node;
i.e. facing any direction while tilted by any amount. A difficult task, yes, but one of which we are confident you are capable!
So, 36 trials in one node, then 36 trials in a second node.
NEXT
Slide 11:
GAME TIME!
That’s it for instructions.
Now, the investigator will answer any other questions you may have
before beginning the real thing. Remember:
1. Your ultimate goal is to be able to mentally visualize the node from any
orientation during the last 24 trials.
2. Concentrate on mentally visualizing the node between trials.
3. You may use the ‘0’ key to end the 7-second study phase. However, in
the early trials, you should use this time to prepare yourself for the last 24
trials in which you will be in any possible orientation.
4. You can make multiple guesses in each trial - we’ll take your last
guess as long as you get it in before the pictures appear at the end of the
trial.
GOOD LUCK!
Appendix E:
Trial Sequence and Counterbalancing
On the training day, subjects completed blocks of 4, 8, and 24 trials in each virtual environment
one at a time. In the first 4 trials, subjects saw each of the four possible targets in successive trials from
the same simulated orientation. In the next 8 trials, subjects’ simulated viewpoint was switched such that they
faced a new direction in the virtual room (i.e. a new surface behind them). Half of the 16 possible roll-orientation/response-direction combinations were presented in a pseudo-random fashion at this new back
surface. These 8 trials were balanced by roll orientation and relative target direction within sub-blocks of
4 trials. In the last 24 trials, subjects were presented with any possible orientation (all back-surface/roll-orientation combinations). There are 96 back-surface/roll-orientation/relative-target-direction
combinations possible (6 x 4 x 4 = 96) in a virtual cubic room. Due to resource limitations and time
restrictions, subjects saw a representative subset of 24 of these combinations. One relative target
direction was presented per orientation in this subset. These 24 trials were balanced for back surface and
perceived difficulty over the entire stage. Sub-blocks of 4 trials were balanced for relative target direction
(see Figure E.1).
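As a rough sketch of this counterbalancing logic (ours, under assumed labels; it reproduces neither the actual trial lists nor the sub-block and perceived-difficulty constraints), the 96 combinations and one balanced 24-trial Stage 3 subset can be generated as follows:

import random

BACK_SURFACES = ['A', 'B', 'C', 'D', 'E', 'F']            # illustrative names for the 6 surfaces
ROLL_ORIENTATIONS = [0, 90, 180, 270]                      # the 4 roll orientations
TARGET_DIRECTIONS = ['left', 'right', 'above', 'behind']   # the 4 relative target directions

# All 6 x 4 x 4 = 96 back-surface/roll-orientation/relative-target-direction combinations.
all_combos = [(b, r, t) for b in BACK_SURFACES
                        for r in ROLL_ORIENTATIONS
                        for t in TARGET_DIRECTIONS]
assert len(all_combos) == 96

def stage3_subset(seed=0):
    # One relative target direction per back-surface/roll-orientation pair (24 trials),
    # redrawn until the four directions are equally represented (6 trials each).
    rng = random.Random(seed)
    while True:
        trials = [(b, r, rng.choice(TARGET_DIRECTIONS))
                  for b in BACK_SURFACES for r in ROLL_ORIENTATIONS]
        if all(sum(t == d for _, _, t in trials) == 6 for d in TARGET_DIRECTIONS):
            return trials

print(len(stage3_subset()))   # 24 trials, balanced for back surface and target direction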
[Figure E.1. Combinations, Order, and Notation Convention Used for Trials. Columns: Surface Ahead, Roll Orientation, Target Surface, Surface Below, and Relative Target Direction. Rows are grouped into Stage 1 (baseline orientation), Stage 2 (second viewing direction, all roll orientations), and Stage 3 (all 6 viewing directions, all roll orientations); individual row entries omitted.]
Appendix F:
Scoring Convention for Configuration Test
We thought it possible that when tested for retention of configurational knowledge, our subjects
would reproduce the object configurations correctly, but from a different (i.e. rotated) point of view,
while maintaining their spatial relations. Therefore, we developed a scoring procedure that could detect a
rotated response, and classified other responses in terms of the complexity of geometric paths from the
correct response.
We scored a response as “0” if it reproduced the positions perfectly as seen from the baseline
orientation. Every other spatially consistent response was coded by the rotation needed to produce the
pattern the subject reported. For example, the cube surfaces were numbered 0, 1, 2, and 3, reading
counterclockwise from the right as shown in Figure F.1. If the subject gave, instead of the intended 0, 1,
2, 3 surfaces, the surfaces 1, 2, 3, 0, we scored this permutation as “1”, as though he had rotated the
original configuration by 90 degrees about his roll axis. If rotated by 180 degrees (i.e. if his response was
“2301” instead of “0123”), we scored his permutation (“error”) as “2”. Similarly, the permutation (0123)
→ (3012) was scored “3”. The choice of error code was arbitrary, but since we took roll-errors (Z-axis
rotations) to be the simplest possible, they were given the lowest error-code numbers.
[Figure F.1. Object-position code in the baseline orientation (‘5’ = behind).]
The next lowest error-codes were assigned to yaws. A yaw (Y-axis rotation) of 90 degrees would
permute the faces as follows: [012345] → [415320], i.e. the permutation (0425), which we scored “4”. A
yaw of 180 degrees (Y2), i.e. the permutation (02)(45), was scored “5”; Y3 = (0524) was scored “6”.
Knitting these together, we scored rolls (Z, Z2, Z3) → (1, 2, 3), yaws (Y, Y2, Y3) → (4, 5, 6),
and pitches (X, X2, X3) → (7, 8, 9). Again, these codes were to be used principally to classify rather than
order the errors. To the degree that the ordering of rolls, yaws, and pitches reflects their relative probability as errors, the
coding has some heuristic potential, but the coding was motivated by a speculative projection of the actual results rather than by an
underlying theory. In an analogous way, we coded pure inversions about the z, y,
and x axes as 10, 11, and 12, respectively.
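As an illustrative check of this coding (a minimal sketch of ours, not the analysis code used for the thesis), each transform can be written as a permutation of the six surface codes and composed; the helper names below are hypothetical, but the printed results reproduce the examples quoted above.

def from_cycle(cycle, n=6):
    # Build a permutation p from one cycle; p[i] is the surface the subject
    # would report at the position where surface i belongs.
    p = list(range(n))
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        p[a] = b
    return p

def compose(p, q):
    # Apply q first, then p.
    return [p[q[i]] for i in range(len(p))]

Z = from_cycle([0, 1, 2, 3])   # 90-degree roll: response 1,2,3,0 for intended 0,1,2,3 (code 1)
Y = from_cycle([0, 4, 2, 5])   # 90-degree yaw: the permutation (0425) (code 4)

print(Y)              # [4, 1, 5, 3, 2, 0], i.e. [012345] -> [415320]
print(compose(Z, Z))  # [2, 3, 0, 1, 4, 5]: the 180-degree roll, "2301" for "0123" (code 2)
print(compose(Y, Y))  # [2, 1, 0, 3, 5, 4]: the 180-degree yaw, i.e. (02)(45) (code 5)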
There is 1 correct answer (rank 0); there are 9 simple-rotation errors (ranks 1-9), 3 inversion errors (ranks
10-12), 156 double-rotation, rotation-inversion, or double-inversion permutations thereof (ranks 13-168),
and the triple-inversion (rank 169). Out of these 170 possible scores, 40 are unique with a range from 0 to
129 (see Table F-1).
Simple rotations were judged less costly because they reflect better visual memory for the spatial
relationships among the objects in the array and are more frequent than inversions. Errors about the x-axis
(pitch or left-right reversal) were judged less costly than the others because rotation errors most
frequently occur in roll and humans have been shown to have worse spatial memory for body axes with more
symmetry (Bryant and Wright, 1999). Each error was represented by a matrix whose row and column
numbers corresponded to target and response locations, respectively. The matrices for the double
combinations were produced via matrix multiplication (see Table F-2).
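The matrix bookkeeping can be sketched the same way (again ours, using NumPy purely for illustration): each single error is a 6 x 6 permutation matrix with a 1 at (target location, response location), and a double combination of the kind indexed in Table F-2 is the product of two such matrices.

import numpy as np

def perm_matrix(p):
    # 6x6 permutation matrix for p: entry (i, p[i]) = 1, with rows indexed by
    # target location and columns by response location (our convention for this sketch).
    m = np.zeros((6, 6), dtype=int)
    for t, r in enumerate(p):
        m[t, r] = 1
    return m

Y  = perm_matrix([4, 1, 5, 3, 2, 0])   # yaw of 90 degrees (error code 4)
Y2 = perm_matrix([2, 1, 0, 3, 5, 4])   # yaw of 180 degrees, the permutation (02)(45) (error code 5)

# Composing two 90-degree yaws by matrix multiplication gives the 180-degree yaw,
# the kind of product tabulated in Table F-2.
print(np.array_equal(Y @ Y, Y2))   # True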
Table F-1. Unique Rank-Order Scores
The minimum rank-order of any set (seen in bold in the far left column) of equivalent responses served as
the unique representative for that set.
[Table F-1 body: the 40 unique rank-order scores (bold, far-left column) are 0-12, 17-25, 30, 32, 33, 35, 36, 43, 45, 46, 48, 49, 62-64, 89, 101-103, and 129; the single transforms are labeled Identity, Z, Z2, Z3, Y, Y2, Y3, X, X2, X3, z, y, x, and the remaining columns list the equivalent responses in each set. The full numeric layout did not survive text extraction.]
Table F-2. Double Combinations
The number in the first column is the rank-order index for the corresponding [(second column) ∗ (third
column)] matrix product (0 = the identity matrix).
[Table F-2 body: each row pairs a rank-order index with the codes of the two single transforms (0-12, in the order listed in Appendix F) whose matrix product it represents. The full numeric layout did not survive text extraction.]
Appendix G:
Subject Consent Form
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
MAN VEHICLE LABORATORY
DEPARTMENT OF AERONAUTICS AND ASTRONAUTICS
STUDY OF
Visual Orientation in Unfamiliar Gravito-Inertial Environments
I have been asked to participate in a study on the training of 3D spatial learning in virtual environments. I
understand my participation is voluntary and that I am free to withdraw my consent and to discontinue
participation in this study at any time without prejudice. I have completed a questionnaire related to my
medical history, which might influence the test results. I understand that the investigator requests me to
come in on 4 different days. On Day 1, the experiment session runs up to 2 hours long and involves a
succession of experimental trials, each lasting a maximum of 24 sec. I understand that I will be invited
back for 3 more experiment sessions 1 day later (Day 2), 7 days later (Day 3) and 30 days later (Day 4),
each ½-hour in length. I understand that I will be paid for my participation in a pro-rated manner at
$10/hour. In each trial, I will be asked to make responses to simulated scenes based on spatial memory
using a helmet mounted computer display and keyboard. I will also be asked to complete some pencil
and paper tests during the course of the experiment. All tests will be conducted sitting upright in a chair.
I understand that all data developed from my participation in this study will be coded and kept
confidential and that my identity will remain anonymous.
I understand that some of the visual scenes are potentially disorienting, and so there is a possibility I may
experience some malaise, headache, nausea or other symptoms of motion sickness. If I experience
unacceptable symptoms, I am free to close my eyes, ask for a break, or withdraw entirely from the
experiment at any time without prejudice. If I experience any significant aftereffects, I will report them to
the experimenter, and should I experience any difficulties with orientation, I will not operate an
automobile.
In the unlikely event of physical injury resulting from participation in this research, I understand that
medical treatment will be available from the MIT Medical Department, including first aid emergency
treatment and follow-up care as needed, and that my insurance carrier may be billed for the cost of such
treatment. However, no compensation can be provided for medical care apart from the foregoing. I also
understand that making such medical treatment available, or providing it, does not imply that such injury
is the investigator's fault. I also understand that by participating in this study I am not waiving any of my
legal rights.
I understand that I may also contact the Chairman of the Committee on the Use of Humans as
Experimental Subjects, MIT 253-6787, if I feel I have been treated unfairly as a subject. Further
information may be obtained by calling the Institute's Insurance and Legal Affairs Office at 253-2822.
I volunteer to participate in the experiment described above.
Subject _____________________________________________Date________________
Experimenter_________________________________________Date________________