revised 11/20/05
THE EFFECT OF NAVIGATION MAPS ON PROBLEM SOLVING TASKS
INSTANTIATED IN A COMPUTER-BASED VIDEO GAME
by
Richard Wainess
14009 Barner Ave., Sylmar, CA 91342
PHONE/FAX 818-364-9419
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(EDUCATION)
May 2005
Copyright 2005
Richard Wainess
ACKNOWLEDGEMENTS
I am eternally grateful to many people whose generous guidance, support,
and encouragement helped me to achieve the enormous accomplishment of
completing my dissertation and, ultimately, earning the degree of Doctor of
Philosophy in Education. I would like to thank Dr. Richard Clark and Dr. Janice
Schafrik, for their roles as teachers and members of the committee of five. I
would like to thank Dr. Yanis Yortsos for his support as my outside committee
member. My thanks go to Dr. Ed Kazlauskas, not only for his roles as teacher
and committee of three member but, more importantly, as my master's advisor
and for his support throughout my graduate career. My final thanks to faculty
goes to my advisor, Dr. Harold O’Neil, for exemplifying the roles of master and
mentor, for opening his arms and his heart to me. Harry was my teacher, my
advisor, and my guide, and I will value our friendship as it continues to grow.
Special thanks go to my colleagues, Dr. Claire Chen and Dr. Danny Shen,
for their friendship, assistance, and support. I am also grateful to my son and
granddaughter for their pride in me, giving me my most cherished motivation for
being a role model. I owe an immeasurable debt of gratitude to my parents who
have encouraged every endeavor I have undertaken, every goal I aspired to, and
every dream I followed. They allowed me to reach far beyond what might have
been reasonable, a freedom that has led to now. And my final thanks and endless
love go to my wife, Janet, for being there when I needed her most, giving me
more than I deserved, and making my dream our dream.
Table of Contents
ACKNOWLEDGEMENTS
List of Tables
List of Figures
Abstract
CHAPTER I: INTRODUCTION
Background of the Problem
Statement of the Problem
Purpose of the Study
Significance of the Study
Research Questions and Hypotheses
Overview of the Methodology
Organization of the Report
CHAPTER II: LITERATURE REVIEW
Cognitive Load Theory
Types of Cognitive Load
Working Memory
Long Term Memory
Schema Development
Automation
Mental Models
Elaboration and Reflection
Metacognition
Meaningful Learning
Mental Effort
Mental Effort and Motivation
Goals and Mental Effort
Goal Setting Theory
Goal Orientation Theory
Self-Efficacy
Self-Efficacy Theory
Expectancy-Value Theory
Task Value
Problem Solving
O’Neil’s Problem Solving Model
Learner Control
Summary of Cognitive Load Literature
Games and Simulations
Games
Simulations
Simulation-Games
Games, Simulations, and Simulation-Games
Video Games
Motivational Aspects of Games
Fantasy
Control and Manipulation
Challenge and Complexity
Curiosity
Competition
Feedback
Fun
Play
Flow
Engagement
Learning and Other Outcomes from Games and Simulations
Positive Outcomes from Games and Simulations
Relationship of Motivation to Negative or Null Outcomes
from Games and Simulations
Relationship of Instruction Design to Learning
from Games and Simulations
Reflection and Debriefing
Summary of Games and Simulations Literature
Assessment of Problem Solving
Measurement of Content Understanding
Measurement of Problem Solving Strategies
Measurement of Self-Regulation
Summary of Problem Solving Assessment Literature
Scaffolding
Graphical Scaffolding
Navigation Maps
Contiguity Effect
Split Attention Effect
Summary of Scaffolding Literature
Summary of the Literature Review
CHAPTER III: METHODOLOGY
Research Questions and Hypotheses
Research Design
Study Sample
Pilot Study Sample
Main Study Sample
Solicitation of Participants for the Main Study
Randomized Assignment for the Main Study
Number of Participants Whose Data Were Analyzed
Hardware
Instruments
Demographic, Gameplay, and Game Preference Questionnaire
Task Completion Form
Self-Regulation Questionnaire
SafeCracker®
Navigation Map
Knowledge Map
Content Understanding Measure
Scoring of Knowledge Map
Domain-Specific Problem Solving Strategies Measure
Scoring of Problem Solving Strategies Retention and Transfer Responses
Procedure for the Pilot Study
Administration of Demographic and Self-Regulation Questionnaires
Introduction to Using the Knowledge Mapping Software
Introduction to the Game SafeCracker
SafeCracker Training Script
Introduction to Using the Navigation Map
Training Map Script
Script for the Control Group on How to Navigate the Mansion
First Game
Creating the Knowledge Map (Occasion 1)
First Problem Solving Strategies Questionnaire (Occasion 1)
Second Game
Knowledge Map and Problem Solving Strategies Questionnaires (Occasion 2)
Debriefing and Extra Play Time
Timing Chart for the Pilot Study
Results of the Pilot Study
Adjustments to the Knowledge Mapping Instructions
Adjustments to the SafeCracker Instructions
Adjustments to the Problem Solving Strategies Instructions
Adjustments to the Task Completion Form
Procedure for the Main Study
Demographic and Self-Regulation Questionnaires
Introduction to Using the Knowledge Mapping Software
Introduction to the Game SafeCracker
SafeCracker Training Script
Introduction to Using the Navigation Map
Training Map Script
Script for the Control Group on How to Navigate the Mansion
First Game
Creating the Knowledge Map (Occasion 1)
Problem Solving Strategies Questionnaire (Occasion 1)
Second Game
Knowledge Map and Problem Solving Strategies Questionnaires (Occasion 2)
Debriefing and Extra Play Time
Timing Chart for the Main Study
CHAPTER IV: ANALYSIS AND RESULTS
Research Hypotheses
Content Understanding Measurement
Interrater Reliability of the Problem Solving Strategy Measure
Problem Solving Strategy Measure
Retention Question
Transfer Question
Trait Self-Regulation Measure
Safe Cracking Performance
Continuing Motivation Measure
Tests of the Research Hypotheses
CHAPTER V: SUMMARY OF RESULTS AND DISCUSSION
Summary of Results
Discussion
Possible Effects from the Contiguity Effect and Extraneous Load
Possible Effects from Strategy Training
Strategy Priming During Knowledge Map Training
Strategy Priming During SafeCracker Training
Strategy Priming During Navigation Map and Basic
Navigation Training
Strategy Priming at the Start of Each Game
Summary of the Discussion
CHAPTER VI: SUMMARY, CONCLUSIONS, AND IMPLICATIONS
Summary
Conclusions
Implications
REFERENCES
APPENDICES
Appendix A: Self-Regulation Questionnaire
Appendix B: Knowledge Map Specifications
List of Tables
1. Characteristics of Games and Simulations
2. Non-Empirical Studies: Media, Measures, and Participants
3. Empirical Studies: Media, Measures, and Participants
4. Characteristics of Games, Simulations, and SafeCracker
5. An Example of Participant Knowledge Map Scoring
6. Problem Solving Strategy Retention and Transfer Questions
7. Idea Units for the Problem Solving Strategy Retention Question
8. Idea Units for the Problem Solving Strategy Transfer Question
9. Time Chart for the Pilot Study
10. Time Chart for the Main Study
11. Descriptive Statistics of Knowledge Map Occasion 1 and Occasion 2 Scores for the Control Group, Navigation Map Group, and Both Groups Combined
12. Descriptive Statistics of the Percentage of Knowledge Map Occasion 1 and Occasion 2 Scores for the Control Group, Navigation Map Group, and Both Groups Combined
13. Knowledge Map Means by Group by Occasion
14. Matrix of the Number of Participant Responses Assigned to Each Idea Unit in the Problem Solving Retention Measure Based on Two Raters’ Scoring
15. Matrix of the Number of Participant Responses Assigned to Each Idea Unit in the Problem Solving Transfer Measure Based on Two Raters’ Scoring
16. Descriptive Statistics of Problem Solving Strategy Retention Occasion 1 and Occasion 2 Scores for the Control Group, Navigation Map Group, and Both Groups Combined
17. Descriptive Statistics of the Percentage of Problem Solving Strategy Retention Occasion 1 and Occasion 2 Scores for the Control Group, Navigation Map Group, and Both Groups Combined
18. Means for Problem Solving Strategy Retention by Group by Occasion
19. Descriptive Statistics of Problem Solving Strategy Transfer Occasion 1 and Occasion 2 Scores for the Control and Navigation Map Groups
20. Descriptive Statistics of the Percentage of Problem Solving Strategy Transfer Occasion 1 and Occasion 2 Scores for the Control Group, Navigation Map Group, and Both Groups Combined
21. Means for Problem Solving Strategy Transfer by Group by Occasion
22. Descriptive Statistics of Trait Self-Regulation Scores for the Control Group, Navigation Map Group, and Both Groups Combined
23. Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and Improvement for Knowledge Maps for the Control Group
24. Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and Improvement for Problem Solving Retention Responses by the Control Group
25. Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and Improvement for Problem Solving Transfer Responses by the Control Group
26. Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and Improvement for Knowledge Maps for the Navigation Map Group
27. Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and Improvement for Problem Solving Retention Responses by the Navigation Map Group
28. Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and Improvement for Problem Solving Transfer Responses by the Navigation Map Group
29. Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and Improvement for Knowledge Maps for Both Groups Combined
30. Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and Improvement for Problem Solving Retention Responses for Both Groups Combined
31. Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and Improvement for Problem Solving Transfer Responses for Both Groups Combined
32. Descriptive Statistics of the Number of Safes Opened During Occasion 1 and Occasion 2, and the Total Number of Safes Opened by the Control Group, Navigation Map Group, and Both Groups Combined
33. Means for the Number of Safes Opened by Group by Occasion
34. Correlation Between Self-Regulation Components and Number of Safes Opened by the Control Group
35. Correlation Between Self-Regulation Components and Number of Safes Opened by the Navigation Map Group
36. Correlation Between Self-Regulation Components and Number of Safes Opened for Both Groups Combined
37. Descriptive Statistics of the Continuing Motivation Scores of the Control Group, Navigation Map Group, and Both Groups Combined
List of Figures
1. O’Neil’s Problem Solving Model
2. Knowledge Map User Interface Displaying 3 Concepts and 2 Links
3. Adding Concepts to the Knowledge Map
4. Participant Solicitation Flyer
5. Sample Navigation Map
6. Task Completion Form 1 for Pilot Study
7. Task Completion Form 2 for Pilot Study
8. Navigation Map for Game 1
9. Navigation Map for Game 2
10. Expert SafeCracker Knowledge Map 1
11. Expert SafeCracker Knowledge Map 2
12. Expert SafeCracker Knowledge Map 3
13. Sample Participant Map for the Game SafeCracker
14. Training Map
15. Task Completion Form 1 for Main Study
16. Task Completion Form 2 for Main Study
ABSTRACT
Cognitive load theory defines a limited capacity working memory with
associated auditory and visual/spatial channels. Navigation in computer-based
hypermedia and video game environments is believed to place a heavy cognitive
load on working memory. Current 3-dimensional (3-D) computer-based video games
often include complex, occluded environments (conditions where vision is blocked
by objects in the environment, such as internal walls, trees, hills, or buildings)
preventing players from plotting a direct visual course from a start to finish location.
Navigation maps may provide the support needed to effectively navigate in these
environments. Navigation maps are a type of graphical scaffolding, and scaffolding,
including graphical scaffolding, helps learners by reducing the amount of cognitive
load placed on working memory, thereby leaving more working memory available
for learning.
Navigation maps have been shown to be effective in 3-D, occluded, video
game environments requiring complex navigation with simple problem solving tasks.
Navigation maps have also been shown to be effective in 2-dimensional
environments involving complex problem solving tasks. This study extended the
research by combining these two topics—navigation maps for navigation in 3-D,
occluded, computer-based video games and navigation maps in 2-dimensional
environments with complex problem solving tasks—by examining the effect of a
navigation map on a 3-D, occluded, computer-based video game with a complex
problem solving task. In addition to the effect of a navigation map on a problem
solving task, the effect of a navigation map on continuing motivation was examined.
Results of the study were unexpected; of the five hypotheses (four addressing
problem solving outcomes and one addressing continuing motivation), only one
hypothesis was partially supported, with the other four unsupported.
Two explanations were examined. It is suspected that the game environment
may not have been complex enough for the treatment group to have benefited from
use of a navigation map. Rather, the navigation map may have resulted in added,
unnecessary cognitive load on the treatment group, offsetting any cognitive benefits
the navigation map was expected to offer, thereby lowering the performance of the
treatment group. The second explanation involved strategy priming. Both the
navigation map group and the control group received a considerable and equivalent
amount of problem solving strategy priming. It is believed that this priming may
have resulted in improving the performance of both groups enough to counter any
differences that might have been observed from the treatment (the navigation map)
had the priming not occurred.
Results of this study suggest that, while navigation maps have been found to
be effective for both navigation and problem solving, not all situations may require
or benefit from a navigation map. Additionally, other forms of scaffolding, such as
strategy priming, may provide enough support to offset any gains that might be
observed from navigation map usage.
CHAPTER I
INTRODUCTION
With the current power of computers and the state-of-the-art of video
games, it is likely that future versions of educational video games will include
immersive environments in the form of three-dimensional (3-D), computer-based
games requiring navigation through occluded paths in order to perform complex
problem solving tasks. Cutmore, Hines, Maberly, Langford, and Hawgood (2000)
define immersion as a ‘view-centered perspective’ which results in “the sensation of
being situated within an environment as opposed to viewing it on a map or other
such abstract representation” (p. 223). According to Cutmore et al., occlusion refers
to conditions where vision is blocked by objects in the environment, such as internal
walls or large environmental features like trees, hills, or buildings. Occluded paths
prevent the ability to plot a “direct visual course from the start to finish locations.
Rather, knowledge of the layout is required” (p. 224). This study examines the use of
navigation maps to support navigation through a 3-D, occluded computer-based
video game involving a complex problem solving task.
Chapter one begins with an examination of the background of the problem.
Next the purpose of the study is discussed, followed by why the study is
significant—how it will inform the literature—and the hypotheses that will be
addressed. The next sections in chapter one include an overview of the methodology
that will be utilized and a brief explanation of the organization of this dissertation.
Background of the Problem
Educators and trainers began to take notice of the power and potential of
computer games for education and training back in the 1970s and 1980s (Donchin,
1989; Malone, 1981; Malone & Lepper, 1987; Ramsberger, Hopwood, Hargan, &
Underfull, 1983; Ruben, 1999; Thomas & Macredie, 1994). Computer games were
hypothesized to be potentially useful for instructional purposes and were also
hypothesized to provide multiple benefits: (a) complex and diverse approaches to
learning processes and outcomes; (b) interactivity; (c) ability to address cognitive as
well as affective learning issues; and perhaps most importantly, (d) motivation for
learning (O’Neil, Baker, & Fisher, 2002).
Despite early expectations, research into the effectiveness of games and
simulations as educational media has been met with mixed reviews (de Jong & van
Joolingen, 1998; Garris, Ahlers, & Driskell, 2002). It has been suggested that the
lack of consensus can be attributed to weaknesses in instructional strategies
embedded in the media and to other issues related to cognitive load (Chalmers, 2003;
Cutmore et al., 2000; Lee, 1999; Thiagarajan, 1998; Wolfe, 1997). Cognitive load
refers to the amount of mental activity imposed on working memory at an instance in
time (Chalmers, 2003; Cooper, 1998; Sweller & Chandler, 1994; Yeung, 1999).
Researchers have proposed that working memory limitations can have an adverse
effect on learning (Sweller & Chandler, 1994; Yeung, 1999). Further, cognitive load
theory suggests that learning involves the development of schemas (Atkinson, Derry,
Renkl, & Wortham, 2000), a process constrained by limited working memory and
separate channels for auditory and visual/spatial stimuli (Brunken, Plass, & Leutner,
2003). Cognitive load theory also describes an unlimited capacity, long-term
memory that can store vast numbers of schemas (Mousavi, Low, & Sweller, 1995).
The inclusion of scaffolding, which provides support during schema
development by reducing the load in working memory, is a form of instructional
design; more specifically, it is an instructional strategy (Allen, 1997; Clark, 2001).
For example, graphical scaffolding, which involves the use of imagery-based aids,
has been shown to provide effective support for graphically-based learning
environments, including video games (Benbasat & Todd, 1993; Farrell & Moore,
2000-2001; Mayer, Mautone, & Prothero, 2002). Navigation maps, a particular form
of graphical scaffolding, have been shown to be an effective scaffold for navigation
of a three-dimensional (3-D) virtual environment (Cutmore et al., 2000). Navigation
maps have also been shown to be an effective support for navigating and problem
solving in a two-dimensional (2-D) hypermedia environment (Baylor, 2001; Chou,
Lin, & Sun, 2000), which is comprised of nodes of information and links between the
various nodes (Bowdish & Lawless, 1997). What has not been examined, and is the
purpose of this study, is the effect of navigation maps utilized for navigation in a 3-D, occluded computer-based video game on outcomes of a complex problem solving
task.
Statement of the Problem
A major instructional issue in learning by doing within simulated
environments concerns the proper type of guidance, that is, how best to create
cognitive apprenticeship (Mayer et al., 2002). A virtual environment creates a
number of issues with regards to learning. Problem solving within a virtual
environment involves not only the cognitive load associated with the to-be-learned
material, referred to as intrinsic cognitive load (Paas, Tuovinen, Tabbers, & Van
Gerven, 2003), it also includes cognitive load related to the visual nature of the
environment, referred to as extraneous cognitive load (Brunken et al., 2003; Harp &
Mayer, 1998), as well as navigation within the environment—either germane
cognitive load or extraneous cognitive load, depending on the relationship of the
navigation to the learning task (Renkl & Atkinson, 2003). It is germane cognitive
load if navigation is a necessary component for learning; that is, it is an instructional
strategy. It is extraneous cognitive load if navigation does not, of itself, support the
learning process; that is, it is included as a feature extraneous to content
understanding and learning. An important goal of instructional design within these
immersive environments involves determining methods for reducing the extraneous
cognitive load and/or germane cognitive load, thereby providing more working
memory capacity for intrinsic cognitive load (Brunken et al., 2003). This study will
examine the reduction of cognitive load through the use of graphical scaffolding in
the form of a navigation map, to determine if this instructional strategy can result in
better performance outcomes as reflected in retention and transfer (Paas et al., 2003)
in a game environment. Retention refers to the storage and retrieval of knowledge
and facts (Day, Arthur, & Gettman, 2001). Transfer refers to the application of
acquired knowledge and skills to new situations (Brunken et al., 2003).
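The classification above can be summarized in a minimal illustrative sketch; the categories and the rule for navigation load come from the text, while the function name and string labels are ours, chosen only for illustration.

```python
# Three types of cognitive load, per the summary above. Navigation load is
# germane when navigation is itself an instructional strategy, and extraneous
# when it is merely a feature unrelated to content understanding.
def classify_load(source, navigation_supports_learning=False):
    if source == "to-be-learned material":
        return "intrinsic"
    if source == "visual nature of the environment":
        return "extraneous"
    if source == "navigation":
        return "germane" if navigation_supports_learning else "extraneous"
    raise ValueError(f"unknown load source: {source}")
```

The conditional on navigation captures the point that the same activity can impose different kinds of load depending on its relationship to the learning task.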
Purpose of the Study
The purpose of this study is to examine the effect of a navigation map on a
complex problem solving task in a 3-D, occluded computer-based video game. The
environment for this study is the interior of a mansion as instantiated in the video
game SafeCracker® (Daydream Interactive, Inc., 1995/2001). The navigation map is
a printed version of the floor plan of the first floor of the mansion, with relevant
room information, such as the name of the room and the location of doors. The
problem solving task involves navigating through the environment to locate specific
rooms, to find and acquire items and information necessary to open safes located
within the prescribed rooms, and ultimately, to open the safes. With one group
playing the game while using the navigation map and the other group playing the
game without aid of a navigation map, this study will examine differences in
problem solving outcomes informed by the problem solving model defined by
O’Neil (1999); see Figure 1.
Significance of the Study
Research has examined the use of navigation maps, a particular form of
graphical scaffolding, as navigational support for complex problem solving tasks
within a hypermedia environment, where the navigation map provided an overview
of the 2-D, textual-based world which had been segmented into chunks of
information, or nodes (Chou et al., 2000). Research has also examined the use of
navigation maps as a navigational tool in 3-D virtual environments. Studies
involving 3-D environments have examined either the effect of a navigation map on
navigation within an occluded environment with the singular goal of getting from
point A to point B (Cutmore et al., 2000) or on navigation in a maze-like
environment (hallways) that included a simple problem solving task: finding a key
along the path in order to open a door at the end of the path (Galimberti, Ignazi,
Vercesi, & Riva, 2001). Research has not combined these two research topics; it has
not assessed the use of navigation maps in relationship to a ‘complex’ problem
solving task in a ‘complex,’ ‘occluded three-dimensional’ virtual environment.
While a number of studies on hypermedia environments have examined the
issue of 2-D maps (i.e., a site map) to aid in navigation of the various nodes for
complex problem solving tasks (e.g., Chou & Lin, 1998), no study has looked at the
effect of the use of 2-D topological maps (i.e., a floor plan) on navigation within a 3-D video game environment in relationship to a complex problem solving task. It is
argued here that the two navigation map types (2-D site map and 2-D
topological floor plan) serve the same purpose in terms of cognitive load: they
reduce cognitive load by distributing some of the load normally placed in working
memory to an external aid, the navigation map. In other words, information (the
structure of the environment) that would normally have been held in working
memory is offloaded to an accessible, external map of the environment. However, it
is also argued here that the spatial aspects of the two learning environments differ
substantially. A larger cognitive load is placed on the visual/spatial channel of
working memory with a 3-D video game environment as compared to a 2-D
hypermedia environment, due to the more complex visual requirements of working
within a 3-D world as opposed to a 2-D world, thereby leaving less working memory
capacity in the 3-D video game for additional visual stimuli such as the navigation map. Therefore, the
cognitive load benefits of map usage in a 3-D environment may not be as great as the
cognitive load benefits of map usage in a 2-D environment, particularly if, as in this
experiment, the map is spatially separated from the main environment—the video
game—a condition which adds cognitive load (Mayer & Moreno, 1998).
As immersive 3-D video games become more widespread as commercial
entertainment, it is likely that interest will also grow for the utilization of 3-D video
games as educational media, particularly because of the perceived motivational
aspects of video games for engaging students. According to Pintrich and Schunk
(2002), motivation is “the process whereby goal-directed activity is instigated and
sustained” (p. 405). As Tennyson and Breuer (2002) contended, motivation
influences both attention and maintenance processes, generating the mental effort
that drives us to apply our knowledge and skills. Salomon (1983) described mental
effort as the depth or thoughtfulness a learner invests in processing material.
Therefore, the role of navigation maps to reduce the load induced by navigation and,
thereby, reduce burdens on working memory, is an important issue for enhancing the
effectiveness of video games as educational environments.
Research Questions and Hypotheses
Research Question 1: Will the problem solving performance of participants
who use a navigation map (the treatment group) in a 3-D, occluded computer-based
video game (i.e., SafeCracker®) be better than the problem solving performance of
those who do not use the map (the control group)?
Hypothesis 1: Participants who use a navigation map (the treatment group)
will exhibit significantly greater content understanding than participants who do not
use a navigation map (the control group).
Hypothesis 2: Participants who use a navigation map (the treatment group)
will exhibit greater problem solving strategy retention than participants who do not
use a navigation map (the control group).
Hypothesis 3: Participants who use a navigation map (the treatment group)
will exhibit greater problem solving strategy transfer than participants who do not
use a navigation map (the control group).
Hypothesis 4: There will be no significant difference in self-regulation
between the navigation map group (the treatment group) and the control group.
However, it is expected that higher levels of self-regulation will be associated with
better performance.
Research Question 2: Will the continuing motivation of participants who
use a navigation map in a 3-D, occluded computer-based video game (i.e.,
SafeCracker®) be greater than the continuing motivation of those who do not use the
map (the control group)?
Hypothesis 5: Participants who use a navigation map (the treatment group)
will exhibit a greater amount of continuing motivation, as indicated by continued
optional game play, than participants who do not use a navigation map (the control
group).
Overview of the Methodology
This study utilized an experimental, posttest only 2x2 repeated measures
design. The first factor had 2 levels (one treatment group, and one control group).
The second factor had 2 levels (occasion 1 and occasion 2). Participants were
randomly assigned to either the treatment or the control group. Group sessions
involved only one group type: either all treatment participants or all control
participants. The experimental design involved administration of pretest
questionnaires, the treatment, the occasion 1 instruments, the treatment, the occasion
2 instruments, and debriefing. After debriefing, participants were offered up to 30
minutes of additional playing time (to examine continuing motivation).
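The 2x2 mixed design described above (group as a between-subjects factor, occasion as a within-subjects factor) can be sketched with a short Python fragment. The scores and the `cell_mean` helper below are hypothetical and purely illustrative; they are not the study's data, and serve only to show how observations are organized into group-by-occasion cells for a repeated-measures analysis.

```python
# Hypothetical scores: each participant belongs to one group (between-subjects
# factor) and contributes one score per occasion (within-subjects factor).
data = {
    ("treatment", 1): [12, 15, 11, 14],
    ("treatment", 2): [16, 18, 15, 17],
    ("control",   1): [13, 14, 12, 15],
    ("control",   2): [15, 16, 14, 16],
}

def cell_mean(group, occasion):
    """Mean score for one group-by-occasion cell."""
    scores = data[(group, occasion)]
    return sum(scores) / len(scores)

# The four cell means feed the mixed ANOVA: a group main effect, an
# occasion main effect, and the group-by-occasion interaction.
for group in ("treatment", "control"):
    for occasion in (1, 2):
        print(group, occasion, cell_mean(group, occasion))
```

Each participant appears in both occasion cells of their own group, which is what makes occasion a repeated measure rather than a second independent sample.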
Organization of the Dissertation
Chapter one provides an overview of the study with a brief introduction and
background of the topic, the problem being addressed, the significance of the study,
the hypotheses that will be tested, and an overview of the methodology of the
experiment. Chapter two is the literature review of the domains that inform the
current research: cognitive load theory, games and simulations, assessment of
problem solving, and scaffolding. Chapter three describes the study’s methodology,
with discussions of the sample, the study, the instruments, the procedures, and the
data analysis methods. Chapter four presents the results of the experiment, and
includes both descriptive and inferential statistics. Chapter five summarizes the
results and includes a discussion of the findings. Chapter six includes a summary of
the study, conclusions of the study, implications of the findings, and limitations that
may have affected the results of the study.
CHAPTER II
LITERATURE REVIEW
The literature review includes information on four areas relevant to the
research topic: cognitive load theory, games and simulations, assessment of problem
solving, and scaffolding. The cognitive load section is comprised of an introduction
to cognitive load theory, including three types of cognitive load, followed by
discussions of working and long-term memory, schema development, automation,
mental models, the roles of reflection and elaboration, and metacognition. Next,
under cognitive load theory, is a discussion of meaningful learning, including the
role of mental effort, the relationships of mental effort to motivation and to goals,
theories related to goal setting and goal orientation, self-efficacy and related
theories, and problem solving, with a discussion of the O’Neil Problem Solving
model (O’Neil, 1999). Next is a discussion of learner control as informed by
cognitive load theory. The discussion of cognitive load theory ends with a
summary of the topic.
Following cognitive load theory is a discussion of games and simulations,
beginning with the defining of games, simulations, simulation-games, and video
games. Next the motivational aspects of games are introduced with a discussion of
the major characteristics of motivation: fantasy, control and manipulation, challenge
and complexity, curiosity, competition, feedback, and fun. The final section under
games and simulations is a discussion of learning and other outcomes attributed to
games and simulations. This section includes discussion of positive outcomes from
games and simulations, the relationship of motivation to negative or null outcomes
for games and simulations, the relationship of instructional design to learning from
games and simulations, and the roles of reflection and debriefing. The topic ends
with a summary of the games and simulations discussion.
The third section of this chapter is the assessment of problem solving, focused on the three constructs established in the O’Neil Problem Solving model
(O’Neil, 1999): measurement of content understanding, measurement of problem
solving strategies, and measurement of self-regulation. The section ends with a
summary of problem solving assessment.
The fourth and final section, scaffolding, begins with a general discussion of
scaffolding, followed by a review of the literature on a type of scaffolding relevant to
this study, graphical scaffolding. Within graphical scaffolding, research on the use of navigation maps is examined, along with the relationship of the contiguity effect and the split-attention effect to potential benefits of a navigation map. The section ends with a summary of scaffolding. Chapter two ends with a summary of the literature
review.
Cognitive Load Theory
Cognition is the mental faculty or process by which knowledge is acquired (Berube et al., 2001). Cognitive load theory, which began in the 1980s and
underwent substantial development and expansion in the 1990s (Paas et al., 2003), is
concerned with the development of instructional methods aligned with the learners’
limited cognitive processing capacity, to stimulate their ability to apply acquired
knowledge and skills to new situations (i.e., transfer). Cognitive load theory is based
on several assumptions regarding human cognitive architecture: the assumption of a
virtually unlimited capacity of long-term memory, schema theory of mental
representations of knowledge, and limited-processing capacity assumptions of
working memory with partly independent processing units for visual-spatial and
auditory-verbal information (Brunken et al., 2003; Mayer & Moreno, 2003; Mousavi
et al., 1995). Researchers have proposed that working memory limitations can have
an adverse effect on learning (Sweller & Chandler, 1994; Yeung, 1999).
Cognitive load is the total amount of mental activity imposed on working memory at a given instant in time (Chalmers, 2003; Cooper, 1998; Sweller & Chandler, 1994; Yeung, 1999). According to Brunken et al. (2003), cognitive load is a
theoretical construct describing the internal processes of information processing that
cannot be observed directly. Paas et al. (2003) defined cognitive load in terms of two dimensions: an assessment dimension and a causal dimension. The above definitions of cognitive load fit within Paas et al.’s description of the assessment dimension,
which reflects the measurable concepts of cognitive load, mental effort, and
performance. The causal dimension reflects the interaction between task and learner
characteristics (Paas et al., 2003). This literature review will focus on the assessment
dimension and only indirectly discuss the causal dimension.
Types of Cognitive Load
Cognitive load researchers have identified up to three types of cognitive
load. All agree on intrinsic cognitive load (Brunken et al., 2003; Paas et al., 2003;
Renkl & Atkinson, 2003), which is the load involved in the process of learning: the load required by metacognition, working memory, and long-term memory. Working
memory is discussed in the next section and is followed by a discussion of long-term
memory. Metacognition is discussed later under the topic of Meaningful Learning.
Another type of cognitive load agreed upon by researchers is extraneous
cognitive load. However, it is the scope of this load that is in dispute. To some
researchers, any cognitive load that is not intrinsic cognitive load is extraneous
cognitive load. To other researchers, non-intrinsic cognitive load is divided into
germane cognitive load and extraneous cognitive load. Germane cognitive load is the cognitive load required to process the intrinsic cognitive load (Renkl & Atkinson, 2003). From a non-computer-based perspective, this could include
searching a book or organizing notes, in order to process the to-be-learned
information. From a computer-based perspective, this could include the interface and
controls a learner must interact with in order to be exposed to, and process, the to-be-learned material. In contrast to germane cognitive load, these researchers see
extraneous cognitive load as the load caused by any unnecessary stimuli, such as
fancy interface designs or extraneous sounds (Brunken et al., 2003).
For each of the two working memory subsystems (visual/spatial and
auditory/verbal; see the next section “Working Memory” for further discussion of
these two subsystems), the total amount of cognitive load for a particular individual
under particular conditions can be defined as the sum of intrinsic, extraneous, and
germane cognitive loads induced by the instructional materials. Therefore, a high
cognitive load can be a result of a high intrinsic cognitive load (i.e., the nature of the
instructional content itself). It can, however, also be a result of a high germane
cognitive load (i.e., a result of activities performed on the materials that result in a
high memory load) or high extraneous cognitive load (i.e., a result of the inclusion of
unnecessary information or stimuli that result in a high memory load; Brunken et al.,
2003).
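The additive relationship described above can be summarized in compact notation (a restatement of the prose for clarity, not an equation taken from the cited sources):

```latex
% Total cognitive load imposed on a given working memory subsystem,
% expressed as the sum of the three load types discussed above.
CL_{\text{total}} = CL_{\text{intrinsic}} + CL_{\text{extraneous}} + CL_{\text{germane}}
```

On this account, learning is impaired when the total exceeds the capacity of the relevant working memory subsystem, which is why reducing extraneous load is a central aim of cognitive load theory.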
Each type of cognitive load (intrinsic, germane, and extraneous) is affected
by differing characteristics of the learning environment. By addressing each of these
environmental conditions, the various cognitive load types can be controlled or even
reduced. For example, the interdependence of the elements of the to-be-learned
material affects intrinsic cognitive load. According to Paas et al. (2003), low-element
interactivity refers to environments where each element can be learned
independently of the other elements, and there is little direct interaction between the
elements. High-element interactivity refers to environments where there is so much
interaction between elements that they cannot be understood until all the elements
and their interactions are processed simultaneously. As a consequence, high-element
interactivity material is difficult to understand. Element interactivity is the driver of
intrinsic cognitive load, because the demands on working memory capacity imposed
by element interactivity are intrinsic to the material being learned. Reduction in
intrinsic load can occur only by dividing the material into small learning modules
(Paas et al., 2003).
Germane cognitive load is influenced by the instructional design. The
manner in which information is presented to learners and the learning activities
required of learners are factors relevant to levels of germane cognitive load (Renkl & Atkinson, 2003). Renkl and Atkinson (2003) commented that, unlike extraneous cognitive load, which interferes with learning, germane cognitive load enhances
learning.
Extraneous cognitive load (Renkl & Atkinson, 2003) is the most controllable
load, since it is caused by materials that are unnecessary to instruction. However,
those same materials may be important for motivation. Unnecessary items are
globally referred to as extraneous. However, one category of extraneous items,
seductive details (Mayer, Heiser, & Lonn, 2001), refers to highly interesting but
unimportant elements or instructional segments. Schraw (1998) stated that these
segments usually contain information that is tangential to the main themes of a story,
but are memorable because they deal with controversial or sensational topics. The
seductive detail effect is the reduction of retention caused by the inclusion of
extraneous details (Harp & Mayer, 1998) and affects both retention and transfer
(Moreno & Mayer, 2000). By contrast, some research has proposed that learning
might benefit from the inclusion of extraneous information. Arousal theory suggests
that adding entertaining auditory adjuncts will make a learning task more interesting,
because it creates a greater level of attention so that more material is processed by
the learner (Moreno & Mayer, 2000). A possible solution to the conflict between the seductive detail effect, which proposes that extraneous details are detrimental, and
arousal theory, which proposes that seductive details in the form of interesting
auditory adjuncts may be beneficial, is to include the seductive details, but guide the
learner away from them and to the relevant information (Harp & Mayer, 1998).
A construct related to seductive details is auditory adjuncts. According to
Banbury, Macken, Tremblay, and Jones (2001), while attempting to focus on a
mental activity, most of us, at one time or another, have had our attention drawn to
extraneous sounds. Banbury et al. argued that, on the surface, seductive details and
auditory adjuncts (such as sound effects or music) seem similar, but the underlying
cognitive mechanisms are different. While seductive details seem to prime
inappropriate schemas into which incoming information is assimilated, auditory
adjuncts seem to overload auditory working memory (Moreno & Mayer, 2000). For
a definition of schema, see the discussion below on schema, under Long-Term
Memory. Whether discussing intrinsic cognitive load, germane cognitive load, or
extraneous cognitive load, a major concern of research and instruction is the limits
imposed by working memory.
Working Memory
Working memory refers to the limited capacity for holding information in
mind for several seconds in the context of cognitive activity (Gevins et al., 1998). In
his seminal article, Miller (1956) described a working memory capacity of between
five and nine chunks of information. Bruning et al. (1999) defined a chunk as any
stimulus that is used, such as a letter, number, or word. Recent research has
suggested that, depending on the type of information being processed, the limited
capacity of working memory may be much lower than Miller’s (1956) findings of
five to nine chunks of information. According to Paas et al. (2003), working
memory, in which all conscious cognitive processing occurs, can handle only a very
limited number of novel interacting elements; possibly no more than two or three.
According to Baddeley (1986), working memory is comprised of three components: a central executive that coordinates two slave systems—a visuospatial sketchpad for visuospatial (visual/spatial) information such as written text or pictures, and a phonological loop for phonological (auditory/verbal) information such as spoken text or music (Baddeley, 1986; Baddeley & Logie, 1999). All three
systems are limited in capacity and independent from one another. Load placed on
one system does not affect the load placed on the other two systems (Brunken et al.,
2003). When information enters a slave system, the information is decoded and a
mental model is constructed (see the discussion later on “Mental Models”). The
central executive system controls when the information is moved to working
memory for integration with other information, including information retrieved from
long term memory (Bruning et al., 1999). The functions of the central executive
include selecting, organizing, and integrating (Mayer, 2001). Selecting involves
attending to relevant stimuli. Organizing involves building internal connections
among the stimuli to form a coherent mental model. Integrating involves building
connections between the information (the stimuli) and prior knowledge (Mayer,
2001).
Long-Term Memory
In contrast to working memory, long-term memory has an unlimited,
permanent capacity (Tennyson & Breuer, 2002) and can contain vast numbers of
schemas (discussed below). Noyes and Garland (2003) commented that information
not held in working memory will need to be retained by the long-term memory
system. Storing more knowledge in long-term memory reduces the load on working
memory, which results in a greater capacity being made available for active
processing.
Schema development. Schemas are cognitive constructs that incorporate multiple elements of information into a single element with a specific function (Paas et al., 2003). Similarly, a schema is a cognitive construct that permits people to treat multiple sub-elements of information as a single element, categorized according to the manner in which it will be used (Kalyuga, Chandler, & Sweller, 1998). Schemas are
generally thought of as ways of viewing the world and, in a more specific sense,
ways of incorporating instruction into our cognition. Schema acquisition is a primary
learning mechanism (Chalmers, 2003). Schemas have the functions of storing
information in long-term memory and of reducing working memory load by
permitting people to treat multiple elements of information as a single element
(Kalyuga et al., 1998; Mousavi et al., 1995). According to cognitive load theory, multiple elements of information can be chunked as single elements in cognitive schemas (Chalmers, 2003). With schema use, a single element in working memory
might consist of a large number of lower level, interacting elements which, if
processed individually, might have exceeded the capacity of working memory (Paas
et al., 2003).
Automation. If a schema can be brought into working memory in automated
form, it will place limited demands on working memory resources, leaving more
resources available for cognitive activities such as searching for a possible problem
solution (Kalyuga et al., 1998). Controlled use of schemas requires conscious effort,
and therefore, working memory resources. By contrast, after being sufficiently practiced over hundreds of hours, schemas can operate under automatic, rather than controlled, processing (Clark, 1999; Mousavi et al., 1995), requiring minimal working memory resources and allowing problem solving to proceed with minimal effort (Kalyuga, Ayres, Chandler, & Sweller, 2003; Kalyuga et al., 1998;
Paas et al., 2003). Because of their cognitive benefits, the primary goals of
instruction are the construction (chunking) and automation of schemas (Paas et al.,
2003).
Mental Models
Mental models explain human understanding of external reality, translating
reality into internal representations and utilizing them in problem solving (Park &
Gittelman, 1995). According to Allen (1997), mental models are usually considered
the way in which people model processes. This emphasis on process distinguishes
mental models from other types of cognitive organizers such as schemas. A mental
model synthesizes several steps of a process and organizes them as a unit. A mental
model does not have to represent all of the steps which compose the actual process.
Mental models may be incomplete and may even be internally inconsistent (Allen,
1997). Models of mental models are termed conceptual models. Conceptual models include metaphors, surrogates, mappings, task-action grammars, and plans. Mental
model formation depends heavily on the conceptualizations that individuals bring to
a task (Park & Gittelman, 1995).
Elaboration and Reflection
Elaboration and reflection are processes involved in the development of
schemas and mental models. Elaborations are used to develop schemas whereby
nonarbitrary relationships are established between new information elements and the
learner’s prior knowledge (van Merrienboer, Kirschner, & Kester, 2003). According
to Kee and Davies (1990), elaboration consists of the creation of a semantic event
that includes the to-be-learned items in an interaction. For example, if the to-be-learned items were the nouns ‘boat’ and ‘ocean,’ the elaboration might consist of the creation of the semantic event “the boat crossed the ocean.” With reflection, learners
are encouraged to consider their problem solving process and to try to identify ways
of improving it (Atkinson, Renkl, & Merrill, 2003). Reflection is reasoned and
conceptual, allowing the thinker to consider various alternatives (Howland, Laffey,
& Espinosa, 1997). According to Chi (2000), the self-explanation effect (also known
as reflection or elaboration) is a dual process that involves generating inferences and
correcting the learner’s own mental model.
Metacognition
Metacognition, or the management of cognitive processes, involves goal-setting, strategy selection, attention, and goal checking (Jones, Farquhar, & Surry,
1995). According to Harp and Mayer (1998), many cognitive models include the
executive processes of selecting, organizing, and integrating. Selecting involves
paying attention to the relevant pieces of information. Organizing involves building
internal connections among the selected pieces of information, such as causal chains.
Integrating involves building external connections between the incoming information
and prior knowledge existing in the learner’s long-term memory (Harp & Mayer,
1998). According to Jones et al. (1995), cognitive strategies are cognitive events that
describe the way in which we process information. Metacognition is a cognitive
strategy that has executive control over other cognitive strategies. Prior experience in
solving similar tasks and using various strategies will affect the selection of a
cognitive strategy, such as rehearsal strategies, elaboration strategies, or organization
strategies (Jones et al., 1995).
Meaningful Learning
Meaningful learning is defined as deep understanding of the material, which
includes attending to important aspects of the presented material, mentally
organizing it into a coherent cognitive structure, and integrating it with relevant
existing knowledge (Mayer & Moreno, 2003). Meaningful learning is reflected in the
ability to apply what was taught to new situations. Meaningful learning results in an
understanding of the basic concepts of the new material through its integration with
knowledge already in long-term memory, known as the assimilation context (Davis
& Wiedenbeck, 2001).
According to assimilation theory (Ausubel, 1963, 1968), there are two kinds of
learning: rote learning and meaningful learning. Rote learning occurs through
repetition and memorization. It can lead to successful performance in situations
identical or very similar to those in which a skill was initially learned. However,
skills gained through rote learning are not easily extensible to other situations,
because they are not based on deep understanding of the material learned.
Meaningful learning, on the other hand, equips the learner for problem solving and
extension of learned concepts to situations different from the context in which the
skill was initially learned (Davis & Wiedenbeck, 2001; Mayer, 1981).
Mental Effort
Meaningful learning requires mental effort (Davis & Wiedenbeck, 2001;
Mayer, 1981). Salomon (1983) described mental effort as the depth or thoughtfulness
a learner invests in processing material. Mental effort is the aspect of cognitive load
that refers to the cognitive capacity that is actually allocated to accommodate the
demands imposed by a task. According to Salomon (1983), mental effort, relevant to
the task and material, appears to be the feature that distinguishes mindless or shallow processing from mindful or deep processing. Little effort is expended
when processing is carried out automatically or mindlessly (Salomon, 1983).
According to Clark (2003b), mental effort requires instructional messages (feedback)
that point out the novel elements of the to-be-learned material and emphasize the
need to work hard. Clark also commented that instructional messages must present
concrete and challenging, yet achievable, learning and performance goals.
Mental effort and motivation. According to Pintrich and Schunk (2002),
motivation is “the process whereby goal-directed activity is instigated and sustained”
(p. 405). Pintrich and Schunk further commented that motivation is a process that
cannot be observed directly; instead, it is “inferred from such behaviors as choice of
tasks, persistence, and verbalizations (e.g., ‘I really want to work on this’)” (p. 5).
According to Clark (2003d), “Without motivation, even the most capable person will
not work hard” (p. 21). However, mental effort investment and motivation should not
be equated. Motivation is the driving force, but for learning to actually take place,
some specific relevant mental activity needs to be activated. This activity is assumed
to be the employment of non-automatic effortful elaborations (Salomon, 1983).
A number of variables affect motivation and mental effort. In an extensive
review of motivation theories, Eccles and Wigfield (2002) discussed Borkowski and colleagues’ motivation model (Borkowski, Carr, Rellinger, & Pressley, 1990), which highlights the interaction of the following cognitive, motivational, and self-processes: domain-specific knowledge; strategy knowledge; personal-motivational
states (including attributional beliefs, self-efficacy, and intrinsic motivation); and
knowledge of oneself (including goals and self perceptions). Each of these variables
has been examined through numerous studies. For example, in a study of college
freshmen, Livengood (1992) found that psychological variables (e.g., effort/ability,
reasoning, goal choice, and confidence) are strongly associated with academic
participation and satisfaction. Similarly, Corno and Mandinach (1983) commented that students in classrooms actively engage in a variety of cognitive interpretations of
their environments and themselves which, in turn, influence the amount and kind of
effort they will expend on classroom tasks.
Several factors affecting motivation and mental effort will be discussed. First
will be a discussion of goals and mental effort, along with related theories. Next will
be a discussion of self-efficacy and related theories. Domain-specific knowledge will
be discussed later under the heading of Problem Solving. Self-processes and strategy
knowledge were discussed previously under the section entitled Metacognition.
Goals and mental effort. According to Clark (1999), the more novel the goal
is perceived to be, the more effort we will invest until we believe we might fail.
Clark also contended that a task should not be too easy or too hard, because in either
case, the learner will lose interest (Clark, 1999; Malone & Lepper, 1987). At the
point where a goal is perceived as too easy to be worth investment of effort, effort is
reduced as we “unchoose” the goal. At the point where failure expectations begin,
effort is reduced as we unchoose the goal to avoid a loss of control. This inverted U
relationship suggests that mental effort problems include two broad forms: overconfidence and underconfidence (Clark, 1999). Therefore, the level of mental effort
necessary to achieve goals can be influenced by adjusting perceptions of goal
novelty and goal attainment, and the effectiveness of the strategies people use to
achieve goals (Clark, 1999).
Motivation influences both attention and maintenance processes (Tennyson
& Breuer, 2002), and generates the mental effort that drives us to apply our
knowledge and skills. As mentioned above, mental effort is goal-directed (Pintrich &
Schunk, 2002). But not all goals are motivating. For example, easy goals are not
motivating (Clark, 2003d). Further, vague goals are not as motivating as specific
goals. It has been shown that individuals given more general goals (such as “do your
best”) do not work as long as those given more specific goals, such as “list 70
contemporary authors” (Thompson et al., 2002; Locke & Latham, 2003).
Goal setting theory. According to Thompson et al. (2002), goal setting theory
is based on the simple premise that people exert effort toward accomplishing goals.
Goals may increase performance as long as a few factors are taken into account, such
as acceptance of the goal, feedback on progress toward the goal, a goal that is
appropriately challenging, and a goal that is specific (Thompson et al., 2002). Goal
setting guides the cognitive strategies in a certain direction. Goal checking comprises the monitoring processes that check to see if the goal has been accomplished, or if the
selected strategy is working as expected. The monitoring process is active
throughout an activity and constantly evaluates the success of other processes. If a
cognitive strategy appears not to be working, an alternative may then be selected
(Jones et al., 1995).
Goal orientation theory. Goal orientation theory is concerned with the prediction that those with high performance goals and a perception of high ability will exert
great effort, and those with low ability perceptions will avoid effort (Miller et al.,
1996). Once we are committed to a goal, we must make a plan to achieve the goal. A
key element of all goal-directed planning is our personal assessment of the necessary
skills and knowledge required to achieve a goal. Related to this assessment is the
self-belief in one’s ability to achieve the goal.
Self-Efficacy
Self-efficacy is a judgment of one’s ability to perform a task within a specific
domain (Bandura, 1997). A key aspect of self-efficacy assessment is our perception
of how novel and difficult the goal is to achieve. The ongoing results of this analysis
are hypothesized to determine how much effort we will invest in a goal (Clark,
1999). Perceived self-efficacy refers to subjective judgments of how well one can
execute a particular course of action, handle a situation, learn a new skill or unit of
knowledge, and the like (Salomon, 1983). Perceived self-efficacy has much to do
with how a class of stimuli is perceived. The more demanding the stimuli are perceived to be, the less efficacious the perceiver would feel about them. Conversely, the more familiar, easy, or shallow they are perceived to be, the more efficacious the perceiver would feel about handling them. It follows that perceived self-efficacy should be related
to the perception of demand characteristics (the latter includes the perceived
worthwhileness of expending effort), and that both should affect effort investment
jointly (Salomon, 1983).
Self-efficacy theory. According to Mayer (1998), self-efficacy theory predicts
that students work harder on a learning task when they judge themselves as capable
versus when they lack confidence in their ability to learn. Self-efficacy theory also
predicts that students understand the material better when they have high self-efficacy than when they have low self-efficacy (Mayer, 1998). Effort is primarily influenced by specific and detailed self-efficacy assessments of the knowledge
required to achieve tasks (Clark, 1999). A person’s belief about whether he or she
has the skills required to succeed at a task is possibly the most important factor in the
quality and quantity of mental effort that a person will invest (Clark, 2003d).
Expectancy-value theory. Related to self-efficacy theory, expectancy-value
theories propose that the probability of behavior depends on the value of the goal
and the expectancy of obtaining that goal (Coffin & MacIntyre, 1999). Expectancies
refer to beliefs about how we will do on different tasks or activities, and values have
to do with incentives or reasons for doing the activity (Eccles & Wigfield, 2002).
From the perspective of expectancy-value theory, goal hierarchies (the importance
and the order of goals) could be organized around aspects of task value. Different
goals may be perceived as more or less useful, or more or less interesting. Eccles and
Wigfield (2002) suggested that the relative value attached to the goal should
influence its placement in a goal hierarchy, and the likelihood a person will try to
attain the goal and, therefore, exert mental effort. Clark (2003b) commented that the
more instruction supports a student’s interest and utility value for instructional goals,
as well as the student’s self-efficacy for a course, the more likely the student will
become actively engaged in the instruction and persist when faced with distractions.
Task value. Task value refers to an individual’s perceptions of how
interesting, important, and useful a task is (Coffin & MacIntyre, 1999). Interest in,
and perceived importance and usefulness of, a task comprise important dimensions
of task value (Bong, 2001). Citing Eccles’ expectancy-value model, Townsend and
Hicks (1997) stated that the perception of task value is affected by a number of
factors, including the intrinsic value of a task, its perceived utility value, and its
attainment value. Thus, engagement in an academic task may occur because of
interest in the task or because the task is required for advancement in some other area
(Townsend & Hicks, 1997). According to Corno and Mandinach (1983), a task linked
to one’s aspirations (a “self-relevant” task) is a key condition for task value.
Problem Solving
Problem solving is the intellectual skill to propose solutions to previously
unencountered problem situations (Tennyson & Breuer, 2002). A problem exists
when a problem solver has a goal but does not know how to reach it, so problem
solving is mental activity aimed at finding a solution to a problem (Baker & Mayer,
1999). Similarly, Tennyson and Breuer (2002) stated that problem solving is
associated with situations dealing with previously unencountered problems, requiring
the integration of new information with existing knowledge to form new knowledge.
These descriptions encompass Mayer and Moreno’s (2003) definition of transfer as
the ability to apply what was taught to new situations.
According to Tennyson and Breuer (2002), a first condition of problem
solving involves the differentiation process of selecting knowledge that is currently
in storage using known criteria. Concurrently, this selected knowledge is integrated to form new knowledge. Cognitive complexity within this condition focuses on
elaborating the existing knowledge base. Problem solving may also involve
situations requiring the construction of knowledge by employing the entire cognitive
system. Therefore, the sophistication of a proposed solution is a function of the
person’s knowledge base, level of cognitive complexity, higher-order thinking
strategies, and intelligence (Tennyson & Breuer, 2002). According to Mayer (1998),
successful problem solving depends on three components—skill, metaskill, and
will—and each of these components can be influenced by instruction.
Metacognition—in the form of metaskill—is central in problem solving because it
manages and coordinates the other components (Mayer, 1998).
O’Neil Problem Solving model. The O’Neil Problem Solving model (O’Neil, 1999; see Figure 1 below) is based on Mayer and Wittrock’s (1996)
conceptualization: “Problem solving is cognitive processing directed at achieving a
goal when no solution method is obvious to the problem solver” (p. 47). This
definition is further analyzed into components suggested by the expertise literature:
content understanding or domain knowledge, domain-specific problem solving
strategies, and self-regulation (see, e.g., O’Neil, 1999, 2002). Self-regulation is
composed of metacognition (planning and self-monitoring) and motivation (effort
and self-efficacy). Thus, in the specifications for the construct of problem solving, to
be a successful problem solver, “one must know something (content knowledge),
possess intellectual tricks (problem solving strategies), be able to plan and monitor
one’s progress towards solving the problem (metacognition), and be motivated to
perform” (effort and self-efficacy; O’Neil, 1999, pp. 255-256).
Figure 1. O’Neil Problem Solving Model. [Diagram: Problem Solving branches into
Content Understanding, Problem Solving Strategies (Domain Specific and Domain
Independent), and Self-Regulation; Self-Regulation comprises Metacognition
(Planning and Self-Monitoring) and Motivation (Effort and Self-Efficacy).]
In problem solving, the skeletal structures are instantiated in content domains,
so that a set of structurally similar models for thinking about problem solving is
applied to science, mathematics, and social studies. These models may vary in the
explicitness of problem representations, the guidance about strategy (if any), the
demands of prior knowledge, the focus on correct procedures, the focus on
convergent or divergent responses, and so on (Baker & Mayer, 1999). Domain-specific
aspects of problem solving (e.g., the part that is unique to geometry,
geology, or genealogy) involve the specific content knowledge, the specific
procedural knowledge in the domain, any domain-specific cognitive strategies (e.g.,
geometric proof, test, and fix), and domain specific discourse (O’Neil, 1998, as cited
in Baker & Mayer, 1999). Both domain-independent and domain-dependent
knowledge are usually essential for problem solving. Domain-dependent analyses
focus on the subject matter as the source of all needed information (Baker & O’Neil,
2002).
Learner Control
In contrast to more traditional technologies that only deliver information,
computerized learning environments offer greater opportunities for interactivity and
learner control. These environments can offer simple sequencing and pace control or
they can allow the learner to decide which, and in what order, information will be
accessed (Barab, Young, & Wang, 1999). The term navigation refers to a process of
tracking one’s position in an environment, whether physical or virtual, to arrive at a
desired destination (Cutmore et al., 2000).
According to Cutmore et al. (2000), the route through the environment
consists of either a series of locations or a continuous movement along a path.
Effective navigation of a familiar environment depends upon a number of cognitive
factors. These include working memory for recent information, attention to
important cues for location, bearing and motion, and finally, a cognitive
representation of the environment, which becomes part of long-term memory: a
cognitive map (Cutmore et al., 2000). In this study, the control group will be subject
to the cognitive loads described by Cutmore et al. In contrast, the navigation map
provided to the treatment group will help reduce the load imposed on working
memory by aiding those participants in developing a cognitive representation of the
environment.
Hypermedia environments divide information into a network of multimedia
nodes, or chunks of information, connected by various links (Barab, Bowdish, &
Lawless, 1997). According to Chalmers (2003), how easily learners become
disoriented in a hypermedia environment may be a function of the user interface.
One area where disorientation can be a problem is in the use of links. Although links
create the advantage of exploration, there is always the chance learners may become
lost, not knowing where they were, where they are going, or where they have been
(Chalmers, 2003).
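The node-and-link structure Barab et al. (1997) describe, and the disorientation Chalmers (2003) warns about, can be sketched as a small graph with an explicit navigation history. This is an illustrative sketch only; the node names and the `navigate` helper are hypothetical, not drawn from any cited study.

```python
# A toy hypermedia network: nodes are chunks of information, and the lists
# are the links leading out of each node. All names here are hypothetical.
hypermedia = {
    "home":     ["history", "gallery"],
    "history":  ["home", "timeline"],
    "gallery":  ["home", "timeline"],
    "timeline": ["history"],
}

def navigate(start, choices):
    """Follow a sequence of link choices, recording every node visited."""
    history = [start]
    current = start
    for target in choices:
        if target not in hypermedia[current]:
            raise ValueError(f"no link from {current} to {target}")
        current = target
        history.append(current)
    return history

# An explicit trail answers "where was I, where am I, where have I been" --
# one interface remedy for getting lost among links.
trail = navigate("home", ["history", "timeline"])
print(trail)  # ['home', 'history', 'timeline']
```

Keeping such a history visible to the learner is one design response to the disorientation problem; without it, the burden of tracking the path falls entirely on working memory.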
With regards to virtual 3-D environments, Cutmore et al. (2000) argued that
navigation becomes problematic when the whole path cannot be viewed at once and
is largely occluded by objects in the environment. Under these conditions, one
cannot simply plot a direct visual course from the start to finish locations. Rather,
knowledge of the layout of the space is required (Cutmore et al., 2000).
Daniels and Moore (2000) commented that message complexity, stimulus
features, and additional cognitive demands inherent in hypermedia, such as learner
control, may combine to exceed the cognitive resources of some learners. Dillon and
Gabbard (1998) found that novice and lower aptitude students have the greatest
difficulty with hypermedia. Children are particularly affected by the cognitive
demands of interactive computer environments. According to Howland, Laffey, and
Espinosa (1997), many educators believe that young children do not have the
cognitive capacity to interact and make sense of the symbolic representations of
computer environments.
In spite of the intuitive and theoretical appeal of hypertext environments,
empirical findings yield mixed results with respect to the learning benefits of learner
control over program control of instruction (Niemiec, Sikorski, & Wallberg, 1996;
Steinberg, 1989). Six extensive meta-analyses of distance and media learning studies
in the past decade have found the same negative or weak results (Bernard et al.,
2003). In reference to distance learning environments, Clark (2003c) argued that
when sequencing, contingencies, and learning strategies permit only minimal learner
control over pacing, then “except for the most advanced expert learners, learning will
be increased” (p. 14).
Summary of Cognitive Load Literature
Cognitive Load Theory is based on the assumptions of a limited working
memory with separate channels for auditory and visual/spatial stimuli, and a virtually
unlimited capacity long-term memory that stores schemas of varying complexity and
levels of automation (Brunken et al., 2003). According to Paas et al. (2003),
cognitive load refers to the amount of load placed on working memory. Miller
(1956) found that working memory limits range from five to nine chunks of
information. Bruning et al. (1999) defined a chunk as any stimulus that is used, such
as a letter, number, or word. Recent research has suggested that working memory
may be even more limited when working with novel elements, limiting its capacity to
as little as two or three novel elements (Paas et al., 2003). Cognitive load can be
reduced through effective use of the auditory and visual/spatial channels, as well as
schemas stored in long-term memory.
There are three types of cognitive load that can be defined in relationship to a
learning or problem solving task: intrinsic cognitive load, germane cognitive load,
and extraneous cognitive load. Intrinsic cognitive load refers to the cognitive load
placed on working memory by the to-be-learned material (Paas et al., 2003).
Germane cognitive load refers to the cognitive load required to access and process
the intrinsic cognitive load; for example, the problem solving processes that are
instantiated in the learning process so that learning can occur (Renkl & Atkinson,
2003). Extraneous cognitive load refers to the cognitive load imposed by stimuli that
neither support the learning process (i.e., germane cognitive load) nor are part of the
to-be-learned material (i.e., intrinsic cognitive load). Seductive details, a particular
type of extraneous cognitive load, are highly interesting but unimportant elements or
instructional segments that are often used to provide memorable or engaging
experiences (Mayer et al., 2001; Schraw, 1998).
An important goal of instructional design is to balance intrinsic, germane, and
extraneous cognitive loads to support learning outcomes and to recognize that the
specific balance is dependent on a number of factors (Brunken et al., 2003),
including the amount of prior knowledge and the need for motivation. Another major
factor affecting learning is element interactivity (Paas et al., 2003). Low-element
interactivity refers to environments where each element can be learned
independently of the other elements. High-element interactivity refers to
environments where there is so much interaction between elements that they cannot be
understood until all the elements and their interactions are processed simultaneously.
Element interactivity drives intrinsic cognitive load, because the demands of working
memory increase as element interactivity increases. Cognitive load can be reduced
by dividing the to-be-learned materials into small learning modules, thereby reducing
germane load (Paas et al., 2003).
Working memory refers to the limited capacity for holding and processing
chunks of information. According to Miller (1956), working memory capacity varies
from five to nine chunks of information. More recently, Paas et al. (2003) argued that
working memory can only handle two or three “novel” chunks of information.
Working memory is comprised of a central executive that coordinates two slave
systems: a visuospatial sketchpad for visual information and a phonological loop for
auditory information (Baddeley, 1986). All three systems are limited in capacity and
independent of one another (Brunken et al., 2003).
Long-term memory has an unlimited permanent capacity (Tennyson &
Breuer, 2002), and can contain vast numbers of schemas. Schemas are cognitive
constructs that incorporate multiple elements of information into a single element
with a specific function (Paas et al., 2003). Schemas have the functions of storing
information in long-term memory and of reducing working memory load by
permitting people to treat multiple elements of information as a single element or
chunk (Kalyuga et al., 1998; Mousavi et al., 1995). After being sufficiently
practiced over hundreds of hours, schemas can operate under automatic, rather than
controlled, processing (Clark, 1999; Mousavi et al., 1995), requiring minimal
working memory resources and allowing for problem solving to proceed with
minimal effort (Kalyuga et al., 2003; Kalyuga et al., 1998; Paas et al., 2003).
Because of their cognitive benefits, the primary goals of instruction are the
construction (chunking) and automation of schemas (Paas et al., 2003).
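The way a schema lets multiple elements of information be treated as a single chunk can be illustrated with a toy example. The grouping below is a hypothetical illustration of chunking, not a model taken from the cited literature.

```python
# Ten individual digits exceed Miller's five-to-nine element span, but a
# familiar schema (the shape of a phone number) groups them into three
# chunks that fit comfortably in working memory. The number is made up.
digits = "5551234567"  # ten individual elements

def chunk_phone(number):
    """Group a 10-digit string into area code, prefix, and line number."""
    return [number[:3], number[3:6], number[6:]]

chunks = chunk_phone(digits)
print(len(digits), "elements become", len(chunks), "chunks:", chunks)
# 10 elements become 3 chunks: ['555', '123', '4567']
```

The point of the sketch is only that the schema, once learned, changes what counts as "one element": the load on working memory drops even though the raw information is unchanged.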
Mental models, also termed conceptual models, are internal representations
of our understanding of external reality. Mental models include metaphors,
surrogates, mappings, task-action grammars, and plans. Because mental models
usually model processes, they differ from schemas (Allen, 1997).
Elaboration and reflection are processes involved in the development of
schemas and mental models. Elaboration consists of the creation of a semantic event
that includes the to-be-learned items in an interaction (Kees & Davies, 1990).
Reflection encourages learners to consider their problem solving process and to try
to identify ways of improving it (Atkinson et al., 2003).
Metacognition (i.e., the central executive function of working memory) is the
management of cognitive processes (Jones et al., 1995), as well as awareness of one’s
own mental processes (Anderson, Krathwohl, Airasian, Cruikshank, et al., 2001).
According to Harp and Mayer (1998), many cognitive models include the executive
processes of selecting, organizing, and integrating. Selecting involves paying
attention to the relevant pieces of information. Organizing involves building internal
connections among the selected pieces of information, such as causal chains.
Integrating involves building external connections between the incoming information
and prior knowledge existing in the learner’s long-term memory (Harp & Mayer,
1998).
Meaningful learning is defined as deep understanding of the material and is
reflected in the ability to apply what was taught to new situations; i.e., problem
solving transfer (Mayer & Moreno, 2003). Meaningful learning requires effective
metacognitive skills: the management of cognitive processes (Jones et al., 1995),
including selecting relevant information, organizing connections among the pieces of
information, and integrating (i.e., building) external connections between incoming
information and prior knowledge that exists in long-term memory (Harp & Mayer,
1998).
Mental effort refers to the cognitive capacity allocated to a task. Mental effort
is affected by motivation, and motivation cannot exist without goals (Clark, 2003d).
Goals are further affected by a combination of self-efficacy, the belief in one’s
ability to successfully carry out a particular behavior (Davis & Wiedenbeck, 2001)
and values, which are related to the incentives or reasons for doing an activity
(Eccles & Wigfield, 2002).
Motivation and mental effort are related, yet one does not necessarily lead to
the other. According to Clark (2003d), “Without motivation, even the most capable
person will not work hard” (p. 21). But motivation does not guarantee effort.
According to Salomon (1983), while motivation is a driving force, learning will only
occur if some specific mental activity is activated in the form of non-automatic
effortful elaborations.
A number of factors affect motivation, including domain-specific knowledge,
strategy knowledge, personal-motivational states (e.g., self-efficacy and intrinsic
motivation), and knowledge of oneself (e.g., goals and self-perceptions; Browkowski
et al., 1990). Perceptions of a goal will also affect motivation. According to Clark
(1999), goals must be neither too hard nor too easy. Otherwise, motivation and
mental effort will drop. Goal setting theory suggests that not only must a goal be
appropriately challenging, it must be specific (Thompson et al., 2002). Goal
orientation theory proposes that those with high performance goals and high ability
will exert great effort, while those with low ability perceptions will avoid effort
(Miller et al., 1996). Related to these perceptions is self-belief in one’s ability to
achieve a goal (i.e., self-efficacy; Bandura, 1997). Self-efficacy theory predicts that
students will work harder if they judge themselves as capable of achieving a goal
versus when they lack confidence in their abilities to achieve the goal (Mayer, 1998).
Expectancy-value theory, which is related to self-efficacy theory, proposes
that the probability of behavior depends on the value of a goal and the expectancy of
attaining the goal (Coffin & MacIntyre, 1999). Different goals can be perceived as
more or less useful, or more or less interesting (Eccles & Wigfield, 2002). Task
value, which refers to how interesting, important, or useful a task is (Coffin &
MacIntyre, 1999), is affected by a number of factors, including the intrinsic value of
a task, its perceived utility value, and its attainment value. A task linked to one’s
aspirations (a “self-relevant” task) is a key condition for task value (Corno &
Mandinah, 1983).
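One common formalization of expectancy-value theory treats the two factors multiplicatively, so that predicted motivation vanishes when either expectancy or value is zero. The function and the 0-1 scales below are an illustrative assumption, not a model taken from Coffin and MacIntyre (1999).

```python
# Multiplicative sketch of expectancy-value theory: the probability of a
# behavior depends on both the expectancy of attaining the goal and the
# value of the goal. Scales (0-1) and weights are hypothetical.
def motivation(expectancy, value):
    """Both factors on a 0-1 scale; motivation requires both to be nonzero."""
    return expectancy * value

# A goal one values highly but feels unable to attain predicts no effort,
# and so does an easily attained goal with no value.
print(motivation(0.0, 0.9))
print(motivation(0.8, 0.9))
```

The multiplicative form captures the claim in the text that motivation cannot exist without goals: with no valued goal (value = 0), the product, and hence the predicted behavior, is zero regardless of expectancy.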
Many tasks involve problem solving. Problem solving is “cognitive
processing directed at transforming a given situation into a desired situation when no
obvious method of solution is available to the problem solver” (Baker & Mayer,
1999, p. 272). According to Mayer (1998), successful problem solving depends on
three components—skill, metaskill, and will—and each of these components can be
influenced by instruction. Further, metacognition—in the form of metaskill—is
central to problem solving because it manages and coordinates the other components
(Mayer, 1998). The O’Neil Problem Solving model (O’Neil, 1999) defines three core
constructs of problem solving: content understanding, problem solving strategies,
and self-regulation. Content understanding refers to domain knowledge. Problem-solving
strategies refer to both domain-specific and domain-independent strategies.
Self-regulation is comprised of metacognition (planning and self-monitoring) and
motivation (effort and self-efficacy; O’Neil 1999, 2002).
Learner control, which is inherent in interactive computer-based media,
allows for control of pacing and sequencing (Barab, Young, & Wang, 1999). It also
provides a potential for cognitive overload in the form of disorientation, or loss of place
(Chalmers, 2003). Further, Daniels and Moore (2000) argued that message
complexity, stimulus features, and additional cognitive demands inherent in
hypermedia (such as learner control) may combine to exceed the cognitive resources
of some learners. In addition, learner control is a potential source of extraneous
cognitive load. Ultimately, these issues may be the cause of the mixed reviews of
learner control (Bernard et al., 2003; Niemiec, Sikorski, & Wallberg, 1996;
Steinberg, 1989).
Games and Simulations
According to Ricci, Salas, and Cannon-Bowers (1996), “computer-based
educational games generally fall into one of two categories: simulation games and
video games. Simulation games model a process or mechanism relating task-relevant
input changes to outcomes in a simplified reality that may not have a definite
endpoint” (p. 296). Ricci et al. further comment that simulation games “often depend
on learners reaching conclusions through exploration of the relation between input
changes and subsequent outcomes” (p. 296). Video games, on the other hand, are
competitive interactions bound by rules to achieve specified goals that are dependent
on skill or knowledge and often involve chance and imaginary settings (Randel,
Morris, Wetzel, & Whitehill, 1992).
One of the first problem areas in research on games and simulations is the
misuse of terminology. Many studies that claim to have examined the use of games
did not use a game (e.g., Santos, 2002). At best, Santos (2002) used an interactive
multimedia program that exhibited some of the features of a game, but not enough to
actually be called a game. A similar problem occurs with simulations. A large
number of research studies use simulations but call them games (e.g., Mayer et al.,
2002). Because the goals and features of games and simulations differ, it is important
when examining the potential effects of the two media to be clear about which one is
being examined. However, there is little consensus in the education and training
literature on how games and simulations are defined.
Games
According to Garris et al. (2002), early work in defining games suggested that
there are no properties that are common to all games and that games belong to the
same semantic category only because they bear a family resemblance to one another.
Betz (1995-1996) argued that a game is being played when the actions of individuals
are determined by both their own actions and the actions of one or more actors.
A number of researchers agree that games have rules (Crookall, Oxford, &
Saunders, 1987; Dempsey, Haynes, Lucassen, & Casey, 2002; Garris et al., 2002;
Ricci, 1994). Researchers also agree that games have goals and strategies to achieve
those goals (Crookall & Arai, 1995; Crookall et al., 1987; Garris et al., 2002; Ricci,
1994). Many researchers also agree that games have competition (e.g., Dempsey et
al., 2002) and consequences such as winning or losing (Crookall et al., 1987;
Dempsey et al., 2002).
Betz (1995-1996) further argued that games simulate whole systems, not
parts, forcing players to organize and integrate many skills. Students learn about
whole systems through their individual actions, an individual action being a student’s
game move. Crookall et al. (1987) also noted that a game does not intend to represent any
real-world system; it is a “real” system in its own right. According to Duke (1995),
games are situation specific. If well designed for a specific situation or condition, the
same game should not be expected to perform well in a different environment.
Simulations
In contrast to games, Crookall and Saunders (1989) viewed a simulation as a
representation of some real-world system that can also take on some aspects of
reality. Similarly, Garris et al. (2002) wrote that a key feature of simulations is that
they represent real-world systems. However, Henderson, Klemes, and Eshet (2000)
commented that a simulation attempts to faithfully mimic an imaginary or real
environment that cannot be experienced directly, for such reasons as cost, danger,
accessibility, or time. Berson (1996) also argued that simulations allow access to
activities that would otherwise be too expensive, dangerous, or impractical for a
classroom. Lee (1999) added that a simulation is defined as a computer program that
relates elements together through cause and effect relationships.
Thiagarajan (1998) argued that simulations do not reflect reality; they reflect
someone’s model of reality. According to Thiagarajan, a simulation is a
representation of the features and behaviors of one system through the use of
another. At the risk of introducing a bit more ambiguity, Garris et al. (2002)
proposed that simulations can contain game features, which leads to the final
definition: simulation-games.
Simulation-Games
Combining the features of the two media, games and simulations, Rosenorn
and Kofoed (1998) described simulation/gaming as a learning environment where
participants are actively involved in experiments, for example, in the form of role-plays,
or simulations of daily work situations, or developmental scenarios. This
dissertation will use the definitions of games, simulations, and simulation-games as
defined by Gredler (1996), which combine the most common features cited by the
various researchers, and yet provide clear distinctions between the three media.
Games, Simulations, and Simulation-Games
According to Gredler (1996),
Games consist of rules that describe allowable player moves, game
constraints and privileges (such as ways of earning extra turns), and
penalties for illegal (nonpermissable) actions. Further, the rules may be
imaginative in that they need not relate to real-world events (p. 523).
This definition is in contrast to a simulation, which Gredler (1996) defines as
“a dynamic set of relationships among several variables that (1) change over time
and (2) reflect authentic causal processes” (p. 523). In addition, Gredler describes
games as linear and simulations as non-linear, and games as having a goal of
winning while simulations have a goal of discovering causal relationships. Gredler
also defines a mixed metaphor referred to as simulation games or gaming
simulations, which is any blend of the features of the two interactive media: games
and simulations. Table 1 summarizes the characteristics of games and simulations,
including two characteristics proposed by this researcher: linear goal structure and
linear intervention.
When Gredler describes games as linear and simulations as non-linear, and
describes games as having a goal of winning while simulations have a goal of
discovering causal relationships, she is linking those two characteristics. In other
words, she is stating that games have linear goal structures and simulations have
non-linear goal structures. For example, the goal of a game might be to destroy a
cannon. Then, once the cannon is destroyed, the next goal will be to storm the
fortress. Then, once the fortress is secured, the next goal might be to locate the
enemy’s plans for invasion. This linear structure is a typical format for games—do
this, then do that, then do something else. And if the player wanted to try a different
approach to, for example, destroying the cannon, he or she would have to restart the
game, or restart the level, or load a previously saved game state. Games are not
designed to allow goals to be repeated. With simulations, however, the typical goal is
to examine causal relationships.
The order in which that discovery occurs in a simulation is typically up to the
user. And once the goal is reached, the experimenter may continue by examining other
possibilities for achieving the same goal, or the experimenter can begin working
toward a new goal. Unlike players in a game, if users of a simulation wish to
examine an alternative approach to achieving a goal, they do not have to restart the
simulation or a level. They simply alter the input variables and observe the outcome.
Therefore, as stated, games have linear goal structures and simulations have non-linear
goal structures.
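The contrast between a linear goal structure and non-linear goal access can be sketched in code. The goal names and the toy causal model below are hypothetical, chosen only to mirror the cannon-and-fortress example above.

```python
# Linear goal structure: a game advances through a fixed, ordered list of
# goals, and revisiting an earlier goal means restarting. Goal names are
# hypothetical.
GAME_GOALS = ["destroy cannon", "storm fortress", "locate invasion plans"]

def game_progress(completed):
    """The next goal is fully determined by how many goals are already done."""
    if completed >= len(GAME_GOALS):
        return "game over"
    return GAME_GOALS[completed]

# Non-linear goal structure: a simulation simply takes new input values at
# any time and reports the outcome, so any relationship can be re-examined
# without restarting. The causal model here is a made-up weighted sum.
def simulation_step(variables):
    """Alter any variable and observe the outcome of the toy causal model."""
    return 2 * variables["supply"] - variables["demand"]

print(game_progress(1))                             # 'storm fortress'
print(simulation_step({"supply": 5, "demand": 3}))  # 7
```

The sketch makes the asymmetry concrete: in the game, state only moves forward along the goal list, while in the simulation the user is free to vary the inputs in any order and as often as desired.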
This researcher contends there is another characteristic of games and
simulations that involves either linearity or non-linearity, and that is the medium’s
intervention structure. For both games and simulations, this intervention structure is
non-linear. Intervention refers to the actions a player or user is allowed to take at any
given moment in the game or simulation. In almost all instances of intervention,
both media give at least two choices. In a game, that might be to save or quit, pick up
a gun or not, open a door or back away from the door, turn left or turn right. In a
simulation, the user might have the choice to save or quit, increase or decrease a
variable’s value, introduce another variable or remove one, and so on.
Therefore, for both games and simulations, the intervention structure is non-linear. In
Table 1, the characteristics of simulation-games are not included, since simulation-games
combine any mix of game and simulation features and, therefore, are implied
in the table.
Table 1
Characteristics of Games and Simulations

Characteristic                                Game                           Simulation
Combination of one's actions plus at          Yes (via human or computer)    Yes
  least one other's actions
Rules                                         Defined by game                Defined by system being
                                                designer/developer             replicated
Goals                                         To win                         To discover cause-effect
                                                                               relationships
Requires strategies to achieve goals          Yes                            Yes
Includes competition                          Against computer or            No
                                                other players
Includes chance                               Yes                            Yes
Has consequences                              Yes (e.g., win/lose)           Yes
System size                                   Whole                          Whole or Part
Reality or Fantasy                            Both                           Both
Situation Specific                            Yes                            Yes
Represents a prohibitive environment          Yes                            Yes
  (due to cost, danger, or logistics)
Represents authentic cause-effect             No                             Yes
  relationships
Requires user to reach own conclusion         Yes                            Yes
May not have definite end point               No                             Yes
Contains constraints, privileges, and         Yes                            No
  penalties (e.g., earn extra moves,
  lose turn)
Linear goal structure                         Yes                            No
Linear intervention                           No                             No
Is intended to be playful                     Yes                            No
Video Games
Just as there are disagreements over the terms game and simulation, there is
disagreement over the term video game. According to Novak (2005), the term video
game came from the arcade business and gravitated to the home console business;
consoles include the Microsoft Xbox and the Sony Playstation II. Novak contended
that games played on personal computers are computer games, not video games.
However, Soanes (2003) defined a video game as “a game played by electronically
manipulating images produced by a computer program.” This would classify
computer-based games as video games. Similarly, the American Heritage Dictionary
of the English Language, fourth edition (2000), defined a video game as “an
electronic or computerized game played by manipulating images on a video display
or television screen.”
While many researchers have referred to computer-based games as video
games (e.g., Greenfield et al., 1994, 1996; Okagaki & Frensch, 1994), others have
referred to computer-based games as computer games or computer-based games
(e.g., Baker et al., 1993; Gopher et al., 1994; Hong & Liu, 2003; Williams &
Clippinger, 2002). According to Kirriemuir (2002b), the terms video game and
computer game are often used interchangeably. In this document we will use the
terms video game, computer game, and computer-based game interchangeably.
Motivational Aspects of Games
According to Garris et al. (2002), motivated learners are easy to describe;
they are enthusiastic, focused and engaged, interested in and enjoy what they are
doing, and they try hard and persist over time. Furthermore, they are self-determined
and driven by their own volition rather than external forces (Garris et al., 2002).
Ricci et al. (1996) defined motivation as “the direction, intensity, and persistence of
attentional effort invested by the trainee toward training” (p. 297). According to
Malouf (1987-1988), continuing motivation is defined as returning to a task or a
behavior without apparent external pressure to do so when other appealing behaviors
are available. Similarly, Story and Sullivan (1986) commented that the most
common measure of continuing motivation is whether a student returns to the same
task at a later time. A construct similar to Story and Sullivan’s definition of
continuing motivation is persistence, which is defined by Pintrich and Schunk (2002)
as “…the continuation of behavior until the goal is obtained and the need is reduced”
(p. 30).
With regard to video games, Asakawa and Gilbert (2003) argued that, without
sources of motivation, players often lose interest and drop out of a game. However,
there seems to be little agreement among researchers as to what those sources are: the
specific set of elements or characteristics that lead to motivation in any learning
environment, and particularly with educational games. According to Rieber (1996)
and McGrenere (1996), motivational researchers have offered the following
characteristics as common to all intrinsically motivating learning environments:
challenge, curiosity, fantasy, and control (Davis & Wiedenbeck, 2001; Lepper &
Malone, 1987; Malone, 1981; Malone & Lepper, 1987). Malone (1981) and others
also included fun as a criterion for motivation.
Stewart (1997) added the motivational importance of goals and outcomes.
Locke and Latham (1990) also commented on the robust findings with regards to
goals and performance outcomes. Locke and Latham argued that clear, specific goals
allow the individual to perceive goal-feedback discrepancies, which are seen as
crucial in triggering greater attention and motivation. Clark (2001) further argued
that motivation cannot exist without goals. Feedback is the final construct cited in
this dissertation as affecting attitudes (Ricci et al., 1996). Feedback is also related to
goals; Clark (2003) commented that, for feedback to be effective, it must be based on
clearly understood, concrete goals.
The following sections will focus on fantasy, control and manipulation,
challenge and complexity, curiosity, competition, feedback, and fun. The role of
goals in fostering effort and motivation was discussed earlier in this dissertation.
Fantasy
Research suggests that material may be learned more readily when presented
in an imagined context that interests the learner than when presented in a generic or
decontextualized form (Garris et al., 2002). Malone and Lepper (1987) defined
fantasy as an environment that evokes “mental images of physical or social situations
that do not exist” (p. 250). Rieber (1996) commented that fantasy is used to
encourage learners to imagine they are completing an activity in a context in which
they are really not present. However, Rieber described two types of fantasies:
endogenous and exogenous. Endogenous fantasy weaves relevant fantasy into a
game, while exogenous fantasy simply sugar coats a learning environment with
fantasy. An example of an endogenous fantasy would be the use of a laboratory
environment to learn chemistry, since this environment is consistent with the
domain. An example of an exogenous fantasy would be to use a hangman game
to learn spelling, because hanging a person has nothing to do with spelling. Rieber
(1996) noted that endogenous fantasy, not exogenous fantasy, is important to
intrinsic motivation, yet exogenous fantasies are a common and popular element of
many educational games. Intrinsic motivation is defined by Pintrich and Schunk
(2002) as “…motivation to engage in an activity for its own sake” (p. 245).
According to Malone and Lepper (1987), fantasies can offer analogies or
metaphors for real-world processes that allow the user to experience phenomena
from varied perspectives. A number of researchers (Anderson & Pickett, 1978;
Ausubal, 1963; Malone & Lepper, 1978, 1987; Singer, 1973) argued that fantasies in
the form of metaphors and analogies provide learners with better understanding by
allowing them to relate new information to existing knowledge. According to Davis
and Wiedenbeck (2001), metaphor also helps learners to feel directly involved with
objects in the domain, so that the computer and interface become invisible.
Control and Manipulation
Hannafin and Sullivan (1996) defined control as the exercise of authority or the
ability to regulate, direct, or command something. Control, or self-determination,
promotes intrinsic motivation because learners are given a sense of control over the
choices of actions they may take (deCharms, 1986; Deci, 1975; Lepper & Greene,
1978). Furthermore, control implies that outcomes depend on learners’ choices and,
therefore, learners should be able to produce significant effects through their own
actions (Davis & Wiedenbeck, 2001). According to Garris et al. (2002), games evoke
a sense of personal control when users are allowed to select strategies, manage the
direction of activities, and make decisions that directly affect outcomes, even if those
actions are not instructionally relevant.
However, Hannafin and Sullivan (1996) warned that research comparing the
effects on learning achievement of instructional programs that control all elements
of the instruction (program control) with instructional programs in which the
learner has control over elements of the instruction (learner control) has yielded
mixed results. Dillon and Gabbard (1998) commented that novice and lower aptitude
students have greater difficulty when given control, compared to experts and higher
aptitude students; Niemiec et al. (1996) argued that control does not appear to offer
any special benefits for any type of learning or under any type of condition.
Challenge and Complexity
Challenge is defined as “to arouse or stimulate especially by presenting with
difficulties” (retrieved from Webster’s Online Dictionary, June 8, 2005,
http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=challenge). Berube
(2001) defined challenge as a “requirement for full use of one’s abilities or
resources” (p. 185). These two definitions embody the idea that challenge is related
to intrinsic motivation and occurs when there is a match between a task and the
learner’s skills. The task should be neither too easy nor too hard, because in either
case the learner will lose interest (Clark, 1999; Malone & Lepper, 1987). Clark
(1999) described this effect as an inverted U-shaped relationship, with lack of effort
on either side of a difficulty range from too easy to too hard. Stewart (1997)
similarly commented that games that are too easy will be dismissed quickly.
According to Garris et al. (2002), there are several ways in which an optimal
level of challenge can be obtained. Goals should be clearly specified, yet the
probability of obtaining that goal should be uncertain, and goals must also be
meaningful to the individual. Garris and colleagues argued that linking activities to
valued personal competencies, embedding activities within absorbing fantasy
scenarios, or engaging competitive or cooperative motivations could serve to make
goals meaningful. This relationship among meaningful goals, belief in the
probability of goal attainment, and valued personal competencies comprises the
components of expectancy-value theory, which suggests that the probability of
behavior, in this case motivated behavior, depends on the value of the goal and the
expectancy of obtaining that goal (Coffin & MacIntyre, 1999).
Curiosity
According to Rieber (1996), challenge and curiosity are intertwined.
Curiosity arises from situations in which there is complexity, incongruity, and
discrepancy (Davis & Wiedenbeck, 2001). Sensory curiosity is the interest evoked by
novel situations, and cognitive curiosity is evoked by the desire for knowledge
(Garris et al., 2002). Cognitive curiosity motivates the learner to attempt to resolve
the inconsistency through exploration (Davis & Wiedenbeck, 2001). Curiosity is
identified in games by unusual visual or auditory effects and by paradoxes,
incompleteness, and potential simplifications (Westbrook & Braithwaite, 2002).
Curiosity is the desire to acquire more information, which is a primary component of
the players’ motivation to learn how to operate a game (Westbrook & Braithwaite,
2001).
Malone and Lepper (1987) noted that curiosity is one of the primary factors
that drive learning and is related to the concept of mystery. Garris et al. (2002)
commented that curiosity is internal, residing in the individual, and mystery is an
external feature of the game itself. Thus, mystery evokes curiosity in the individual,
and this leads to the question of what constitutes mystery (Garris et al., 2002).
Research suggests that mystery is enhanced by incongruity of information,
complexity, novelty, surprise, and violation of expectations (Berlyne, 1960),
incompatibility between ideas and inability to predict the future (Kagan, 1972), and
information that is incomplete and inconsistent (Malone & Lepper, 1987).
Competition
Studies on competition with games and simulations have yielded mixed results,
owing to learner preferences and reward structures. A study by Porter, Bird, and Wunder (1990-1991)
examining competition and reward structures found that the greatest effects of
reward structure were seen in the performance of those with the most pronounced
attitudes toward either competition or cooperation. The results also suggested that
performance was better when the reward structure matched the individual’s
preference. According to the authors, implications of their study are that emphasis on
competition will enhance the performance of some learners but will inhibit the
performance of others (Porter et al., 1990-1991).
Yu (2001) investigated the relative effectiveness of cooperation with and
without inter-group competition in promoting student performance, attitudes, and
perceptions toward subject matter studied, computers, and interpersonal context.
With fifth-graders as participants, Yu found that cooperation without inter-group
competition resulted in better attitudes toward the subject matter studied and
promoted more positive interpersonal relationships both within and among the
learning groups, as compared to competition (Yu, 2001). The exchange of ideas and
information both within and among the learning groups also tended to be more
effective and efficient when cooperation did not take place in the context of inter-group competition (Yu, 2001).
Feedback
Feedback within games allows learners to quickly evaluate their
progress against the established game goal. This feedback can take many forms, such
as textual, visual, and aural (Rieber, 1996). According to Ricci et al. (1996), within
the computer-based game environment, feedback is provided in various forms
including audio cues, score, and remediation immediately following performance
(e.g., after-action review). The researchers argued that these feedback attributes can
produce significant differences in learner attitudes, resulting in increased attention to
the learning environment. Clark (2003) argued that, for feedback to be effective, it
must be based on “concrete learning goals that are clearly understood” (p. 18) and
that it should describe the gap between the learner’s current performance and the
goal. Additionally, the feedback must not be focused on the failure to achieve the
goal (Clark, 2003).
Fun
Quinn (1994, 1997) argued that for games to benefit educational practice and
learning, they need to combine fun elements with aspects of instructional design and
system design that include motivational, learning, and interactive components.
According to Malone (1981), three elements (fantasy, curiosity, and challenge)
contribute to the fun in games. While fun has been cited as important for motivation
and, ultimately, for learning, there is little empirical evidence supporting the concept
of fun. It is possible that fun is not a construct but, rather, represents an amalgam of
other concepts or constructs. Relevant alternative concepts or constructs include
play, engagement, and flow.
Play. Resnick and Sherer (1994) defined play as entertainment without fear of
present or future consequences; it is fun. According to Rieber, Smith, and Noah
(1998), serious play describes an intense learning experience in which both adults
and children voluntarily devote enormous amounts of time, energy, and commitment
and, at the same time, derive great enjoyment from the experience. Webster et al.
(1993) found that labeling software training as play led to improved motivation
and performance.
Flow. Csikszentmihalyi (1975; 1990) defines flow or a flow experience as an
optimal experience in which a person is so involved in an activity that nothing else
seems to matter. When completely absorbed in an activity, a person is ‘carried by the
flow,’ hence the origin of the theory’s name (Rieber & Matzko, 2001). Rieber and
Matzko (2001) offered a broader definition of flow, commenting that a person may
be considered in flow during an activity when experiencing one or more of the
following characteristics: Hours pass with little notice; challenge is optimized;
feelings of self-consciousness disappear; the activity’s goals and feedback are clear;
attention is completely absorbed in the activity; one feels in control; and one feels
freed from other worries (Rieber & Matzko, 2001). According to Davis and
Wiedenbeck (2001), an activity that is highly intrinsically motivating can become
all-encompassing to the extent that the individual experiences a sense of total
involvement, losing track of time, space, and other events. Davis and Wiedenbeck
also argued that the interaction style of a software package is expected to have a
significant effect on the intensity of flow. It should be noted that Rieber and Matzko
commented that play and flow differ in one respect: learning is an expressed
outcome of serious play but not of flow.
Engagement. Davis and Wiedenbeck (2001) defined engagement as a feeling
of directly working on the objects of interest in the virtual world rather than on
surrogates. According to Davis and Wiedenbeck, this interaction or engagement can
be used along with the components of Malone and Lepper’s (1987) intrinsic
motivation model to explain the effect of an interaction style on intrinsic motivation,
or flow. Garris et al. (2002) commented that training professionals are interested in
the intensity of involvement and engagement that computer games can evoke,
hoping to harness the motivational properties of computer games to enhance
learning and accomplish instructional objectives.
Learning and Other Outcomes for Games and Simulations
Results from studies reporting on the performance and learning outcomes
from games are mixed. This section is subdivided into four discussions. First will be
a discussion of studies indicating positive results regarding performance and learning
outcomes attributed to games and simulations; both empirical and non-empirical
studies will be discussed. Second will be a discussion of studies indicating a link
between motivation and negative or null results regarding performance and learning
outcomes attributed to games and simulations. Third will be a discussion of the
relationship of instructional design to effectiveness of educational games and
simulations as an explanation of the mixed findings among game and simulation
studies. Last will be a discussion of reflection and debriefing as a necessary
component to learning, with specific references to the learning instantiated in games
and simulations.
Positive Outcomes from Games and Simulations
Simulations and games have been cited as beneficial for a number of
disciplines and for a number of educational and training situations, including
aviation training (Salas, Bowers, & Rhodenizer, 1998), aviation crew resource
management (Baker, Prince, Shrestha, Oser, & Salas, 1993), laboratory simulation
(Betz, 1995-1996), chemistry and physics education (Khoo & Koh, 1998), urban
geography and planning (Adams, 1998; Betz, 1995-1996), farm and ranch
management (Cross, 1993), language training (Hubbard, 1991), disaster management
(Stolk, Alexandrian, Gros, & Paggio, 2001), and medicine and health care
(Westbrook & Braithwaite, 2001; Yair, Mintz, & Litvak, 2001). For business, games
and simulations have been cited as useful for teaching strategic planning (Washburn
& Gosen, 2001; Wolfe & Roge, 1997), finance (Santos, 2002), portfolio
management (Brozik & Zapalska, 2002), marketing (Washburn & Gosen, 2001),
knowledge management (Leemkuil, de Jong, de Hoog, & Christoph, 2003), and
media buying (King & Morrison, 1998).
In addition to teaching domain-specific skills, games have been used to
impart more generalizable skills. Since the mid-1980s, a number of researchers have
used the game Space Fortress, a simple 2-D arcade-style game with a hexagonal
“fortress” in the center of the screen surrounded by two concentric hexagons and a
space ship, to improve spatial and motor skills that transfer far beyond gameplay,
such as significantly improving the results of fighter pilot training (Day et al., 2001).
Also, in a series of five experiments, Green and Bavelier (2003) showed the potential
of video games to significantly alter visual selective attention. Similarly, Greenfield,
DeWinstanley, Kilpatrick, and Kaye (1994) found, in experiments involving college
students, that video game practice could significantly alter participants’ strategies
of spatial attentional deployment, that is, the speed with which a participant would
find and respond to a visual stimulus on a target display.
Of the various articles discussed in the last two paragraphs, all studies in the
first paragraph were non-empirical. All studies in the second paragraph were
empirically based—the studies by Day et al. (2001), Green and Bavelier (2003), and
Greenfield et al. (1994). Table 2 shows the medium, the measure, and the participant
age for all the articles referenced in the first paragraph and only the Greenfield et al.
article referenced in the second paragraph (the other two appear in Table 3, which is
discussed immediately after Table 2). With the exception of the Greenfield et al.
study, all articles in Table 2 are non-empirical studies and contain three primary
shortcomings. First, they did not include a control group. Second, the primary source
for data was self-report. And third, they did not assess learning outcomes, except
through self-report of perceived learning. For example, Santos (2002) commented
that the survey in his study did not capture the degree to which students actually
learned as a result of participating in the game. He further commented that students
may have enjoyed participating in the computer-based exercises and may report
learning, but that demonstrating that media actually facilitate learning is difficult
for researchers (Santos, 2002).
Table 2
Non-empirical Studies: Media, Measures, and Participants

Study | Media | Measures | Participant Age
Adams (1998) | SimCity 2000 (b) | Survey on media preference, perceptions, SimCity prior experience, results of experiments. No control group. | Adult
Baker, Prince, Shrestha, Oser, & Salas (1993) | Microsoft Flight Simulator (b) | Observation. Reaction survey. No control group. | Adult
Betz (1995-1996) | SimCity 2000 (b) | Content understanding exam. Perception and attitude survey. No control group. | Adult
Brozik & Zapalska (2002) | The Portfolio Game (b) | Simulation performance; not knowledge gains. No control group. | Adult
Cross (1993) | AgVenture (c) | Reaction and self-assessment survey. No control group. | Adult
Greenfield, DeWinstanley, Kilpatrick, & Kaye (1994) | Robot Battle (a) | Performance after game on visual attention task. Treatment/control based on prior gaming experience. | Adult
Hubbard (1991) | Hangman (c) | Discussion of games and language learning. | Not applicable
Khoo & Koh (1998) | Cerius2 (b) | Self-assessment questionnaire on content understanding. No control group. | Adult
King & Morrison (1998) | Media Buying Simulation (b) | Perceived learning outcomes and value of simulation. | Adult
Leemkuil, de Jong, de Hoog, & Christoph (2003) | KM Quest (web-based) (c) | Formative evaluation of application functionality. | Adult
Salas, Bowers, & Rhodenizer (1998) | Various simulations (b) | Discussion of various studies. | N/A
Santos (2002) | Financial system simulator (b) | Self-assessment survey on perceived content understanding and motivation. No control group. | Adult
Stolk, Alexandrian, Gros, & Paggio (2001) | Web-based disaster management multimedia (c) | Formative evaluation of medium. | Adult
Washburn & Gosen (2001) | Micromatic (c) | Pre-post knowledge gains. Attitude survey. No control group. |
Westbrook & Braithwaite (2001) | Health Care Game (c) | Pre- and post-self-assessment questionnaires: domain knowledge; attitude toward working in groups. Exam to assess factual knowledge gains. No control group. | Adult
Wolfe & Roge (1997) | Various simulations and simulation games (b, c) | Discussion of games that teach strategic management. | Adult
Yair, Mintz, & Litvak (2001) | Touch the Sky, Touch the Universe (b) | Discussion of the simulation. | N/A

Note: Letters in parentheses indicate type of media: a = game; b = simulation; c = simulation game. Participants below college are defined as child; participants college age or higher are defined as adult.
A recent review of empirically based studies on the use of games and
simulations for teaching or training adults over the last 15 years was conducted by
O’Neil and Wainess (in press). Of the thousands of journal articles published on
games and simulations over the past 15 years (including those listed in Table 2), a
search using the terms games, computer game, PC game, computer video game,
video game, cooperation game, and multi-player game found only 18 empirically
based journal articles with either qualitative or quantitative information on the
effectiveness of games with adults as participants. Research based on dissertations or
technical reports was not examined. A hand search of journals for 2004/2005 found
one additional journal article that met the search criteria, for a total of 19 journal
articles. Table 3 shows the medium, the measure, and the participant age for all the
empirical studies found by the search.
Table 3
Empirical Studies: Media, Measures, and Participants

Study | Media | Measures | Participant Age
Arthur et al. (1995) | Space Fortress (a) | Performance on game, visual attention | Adult
Carr & Groves (1998) | Business Simulation in Manufacturing Management (c) | Survey | Adult
Day, Arthur, & Gettman (2001) | Space Fortress (a) | Performance on game and knowledge map | Adult
Galimberti, Ignazi, Vercesi, & Riva (2001) | 3D-Maze (a) | Observation and time to complete game | Adult
Gopher, Weil, & Baraket (1994) | Space Fortress II (a) | Performance on game and flight performance | Adult
Green & Bavelier (2003) | Medal of Honor (c) | Visual attention | Adult
Green & Flowers (2003) | Video catching task (c) | Performance on game, exit questionnaire | Adult
Mayer, Mautone, & Prothero (2002) | Profile Game (c) | Performance on retention and transfer tests | Adult
Moreno & Mayer (2002) | Design-a-Plant (b) | Performance on retention and transfer tests, plus survey | Adult
Morris, Hancock, & Shirkey (2004) | Delta Force (c) | Performance on game, stress questionnaire, observation of military tactics used | Adult
Parchman, Ellis, Christinaz, & Vogel (2002) | Adventure Game (a) | Retention test, transfer test, motivation questionnaire | Adult
Porter, Bird, & Wunder (1990-1991) | Whale Game (a) | Performance on game, satisfaction survey | Adult
Prislin, Jordan, Worchel, Senmmer, & Shebilske (1996) | Space Fortress (a) | Performance on game, observation, discussion behavior | Adult
Rhodenizer, Bowers, & Bergondy (1998) | AIRTANDEM (b) | Performance on game, retention tests | Adult
Ricci, Salas, & Cannon-Bowers (1996) | QuizShell (b) | Performance on pre-, post-, and retention tests, and trainee reaction questionnaire | Adult
Rosenorn & Kofoed (1998) | Experiment Atrium (b) | Observation | Adult
Shebilske, Regian, Arthur, & Jordan (1992) | Space Fortress (a) | Performance on game | Adult
Shewokis (2003) | Winter Challenge | Performance on game | Adult
Tkacz (1998) | Maze game (c) | Performance on game, transfer test of position location | Adult

Note: Letters in parentheses indicate type of media: a = game; b = simulation; c = simulation game. Participants below college are defined as child; participants college age or higher are defined as adult.
Many studies claiming positive outcomes appear to be making unsupported
claims for the media. This was particularly true for the non-empirical studies listed in
Table 2. Although less common among the empirical studies listed in Table 3, there
were instances in which outcome claims appeared to be unsubstantiated. For example, Mayer,
Mautone, and Prothero (2002) examined performance outcomes using retention and
transfer tests, and Carr and Groves (1998) examined performance outcomes using
self-report surveys. The Mayer et al. (2002) study offered strong statistical support
for their findings, using retention and transfer tests, whereas Carr and Groves used
only participants’ self-reports as evidence of learning effectiveness. In Carr and
Groves’ study, participants reported their belief that they learned something from the
experience. No cognitive performance was actually measured, yet Carr and Groves
suggested that their simulation game was a useful educational tool, and that use of
the tool provided a valuable learning experience. It should also be noted that, while
the Mayer et al. study included both treatment and control groups, the Carr and
Groves study involved only treatment groups. As exemplified by the unsubstantiated
claims of Carr and Groves (1998), much of the work on the evaluation of games, as
Leemkuil et al. (2003) commented, has been anecdotal, descriptive, or judgmental.
A further complication, as discussed earlier in this document, is the issue of
mislabeling media. For example, in their study involving three forms of Chemical,
Biological, and Radiological Defense (CBRD) training for Naval recruits, including
use of a game, Ricci et al. (1996) claimed that results of their study provided
evidence that computer-based gaming can enhance learning and retention of
knowledge. However, the medium used in their study met the criteria for a
simulation game, not a game. Therefore, their claim should have promoted the
benefits of a simulation game, not a game.
Relationship of Motivation to Negative or Null Outcomes from Games and
Simulations
A number of researchers have addressed the issue of the motivational aspects
of games, arguing that the motivation attributed to enjoyment of educational games
may not necessarily indicate learning and, possibly, might indicate less learning.
Garris et al. (2002) noted that, although students generally seem to prefer games over
other, more traditional, classroom training media, reviews have reported mixed
results regarding the training effectiveness of games.
Druckman (1995) concluded that games seem to be effective in enhancing
motivation and increasing student interest in subject matter, yet the extent to which
that translates into more effective learning is less clear. As a note of caution,
Brougere (1999) commented that anything that contributes to the increase of emotion
(such as the quality of the design of video games) reinforces the attraction of the
game but not necessarily its educational effectiveness. Similarly, Salas et al. (1998)
commented that liking a simulation does not necessarily transfer to learning.
Salomon (1984) went even further, by commenting that a more positive attitude can
actually indicate less learning. And in an early meta-analysis of the effectiveness of
simulation games, Dekkers and Donatti (1981) found a negative relationship between
duration of training and training effectiveness. Simulation games became less
effective the longer the game was played (suggesting that perhaps trainees became
bored over time). Clark and Sugrue (2001) described a novelty effect in which
student effort and attention are high when a medium is novel but tend to diminish
over time as students become more familiar with the medium.
Relationship of Instructional Design to Learning from Games and Simulations
de Jong and van Joolingen (1998), after reviewing a large number of studies
on learning from simulations, concluded, “There is no clear and univocal outcome in
favor of simulations. An explanation of why simulation based learning does not
improve learning results can be found in the intrinsic problems that learners may
have with discovery learning” (p. 181). These problems are related to processes such
as hypothesis generation, design of experiments, interpretation of data, and
regulation of learning. After analyzing a large number of studies, de Jong and van
Joolingen (1998) concluded that adding instructional support to simulations might
help to improve the situation.
The hypothesis is that games themselves are not sufficient for learning but
there are elements in games that can be activated within an instructional context that
may enhance the learning process (Garris et al., 2002). In other words, outcomes are
affected by the instructional strategies employed in the games, not by the games
themselves (Wolfe, 1997). Leemkuil et al. (2003), too, commented that there is
general consensus that learning with interactive environments such as games,
simulations, and adventures is not effective when no instructional measure or support
is added.
According to Thiagarajan (1998), if not embedded with sound instructional
design, games and simulations often end up as truncated exercises mislabeled
as simulations. Gredler (1996) further commented that poorly developed exercises
are not effective in achieving the objectives for which simulations are most
appropriate—that of developing students’ problem solving skills. Lee (1999)
commented that instructional prescription requires information dealing with
instructional variables, such as instructional mode, instructional sequence,
knowledge domain, and learner characteristics.
Reflection and Debriefing
Instructional strategies that researchers have suggested as beneficial to
learning from games and simulations are reflection and debriefing. Brougere (1999)
argued that a game cannot be designed to directly provide learning; reflexivity is
required to make transfer and learning possible. Games require reflection, which
enables the shift from play to learning. Therefore, debriefing (or after-action review),
which includes reflection, appears to be an essential contribution to research on play
and gaming in education (Brougere, 1999; Leemkuil et al., 2003; Thiagarajan, 1998).
According to Garris et al. (2002), debriefing is the review and analysis of events that
occurred in the game. Debriefing provides a link between what is represented in the
simulation or gaming experience and the real world. It allows the learners to draw
parallels between game events and real-world events. Debriefing allows learners to
transform game events into learning experiences. Debriefing may include a
description of events that occurred in the game, analysis of why they occurred, and
the discussion of mistakes and corrective actions. Garris et al. (2002) argued that
learning by doing must be coupled with the opportunity to reflect and abstract
relevant information for effective learning to occur.
Summary of Games and Simulation Literature
Computer-based educational games fall into three categories: games,
simulations, and simulation games. While there is debate as to the specific
characteristics of each of these three media (e.g., Betz, 1995-1996; Crookall & Arai,
1995; Crookall et al., 1987; Dempsey et al., 2002; Duke, 1995; Garris et al., 2002;
Randel et al., 1992; Ricci et al., 1996), Gredler (1996) provides definitions for the
three media that draw some clear delineations between them. According to
Gredler, games consist of rules, can contain imaginative contexts, are primarily
linear, and include goals as well as competition, either against other players or
against a computer (Gredler, 1996). Simulations display the dynamic relationship
among variables which change over time and reflect authentic causal processes.
Simulations are non-linear and have a goal of discovering causal relationships
through manipulation of independent variables. Simulation games are a blend of
games and simulations (Gredler, 1996). The author of this study offers one
important modification to Gredler’s definitions.
When Gredler (1996) describes games as linear and simulations as non-linear,
she is referring to their goal structures: games have linear goal structures and
simulations have non-linear goal structures. According to this author, there is another
aspect of game or simulation interaction that can be described as either linear or
non-linear—the intervention structure of the media. Intervention refers to the actions
players or users are allowed to take at any given moment of the game or simulation.
In almost all instances of intervention, both media give at least two choices (e.g., quit
or continue, turn left or turn right, fight or run, increase something or decrease it).
Therefore, for both games and simulations, the intervention structure is non-linear.
While there is also debate as to the definition of video games and, more
importantly, whether a computer-based game is a video game, the definitions of
computer-based game and video game are beginning to coincide, and the two terms
are beginning to be used interchangeably (e.g., Greenfield et al., 1994, 1996;
Kirriemuir, 2002b; Okagaki & Frensch, 1994).
Beginning with the work of Malone (1981), a number of constructs have been
described as providing the motivational aspects of games: fantasy, control and
manipulation, challenge and complexity, curiosity, competition, feedback, and fun.
Fantasy is defined as an environment that evokes “mental images of physical or
social situations that do not exist” (Malone & Lepper, 1987, p. 250). Malone and
Lepper (1987) also commented that fantasies can offer analogies and metaphors, and
Davis and Wiedenbeck (2001) argued that metaphors can help learners feel more
directly involved in a domain.
Control and manipulation promote intrinsic motivation, because learners are
given a sense of control over their choices and actions (deCharms, 1986; Deci,
1975). Challenge embodies the idea that intrinsic motivation occurs when there is a
match between a task and the learner’s skills (Bandura, 1977; Csikszentmihalyi,
1975; Harter, 1978). The task should be neither too hard nor too easy, otherwise, in
both cases, the learner would lose interest (Clark, 1999; Malone & Lepper, 1987).
According to Rieber (1996), curiosity and challenge are intertwined. Curiosity arises
from situations in which there is complexity, incongruity, and discrepancy (Davis &
Wiedenbeck, 2001). Malone and Lepper (1987) argued that curiosity is one of the
primary factors that drive learning.
While Malone (1981) defines competition as important to motivation, studies
on competition with games and simulations have resulted in mixed findings, due to
individual learner preferences, as well as the types of reward structures connected to
the competition (e.g., Porter et al., 1990-1991; Yu, 2001). Another motivational
factor in games, feedback, allows learners to quickly evaluate their progress and can
take many forms, such as textual, visual, and aural (Rieber, 1996). Ricci et al. (1996)
argued that feedback can produce significant differences in learner attitudes,
resulting in increased attention to a learning environment. However, Clark (2003)
commented that feedback must be focused on clear learning goals and current
performance results.
The last category contributing to motivation, fun, is possibly an erroneous
category. Little empirical evidence exists for the construct. However, evidence does
support the related constructs of play, engagement, and flow. Play is entertainment
without fear of present or future consequences (Resnick & Sherer, 1994). Webster et
al. (1993) found that labeling software training as serious play improved motivation
and performance. Csikszentmihalyi (1975; 1990) defined flow as an optimal
experience in which a person is so involved in an activity that nothing else seems to
matter. According to Davis and Wiedenbeck (2001), engagement is the feeling of
working directly on the objects of interest in a world, and Garris et al. (2002) argued
that engagement can harness the motivational properties of computer games to
enhance learning and accomplish instructional objectives.
While numerous studies have cited the learning benefits of games and
simulations (e.g., Adams, 1998; Baker et al., 1997; Betz, 1995-1996; Khoo & Koh,
1998), others have found mixed, negative, or null outcomes from games and
simulations, specifically in the relationship of enjoyment of a game to learning from
the game (e.g., Brougere, 1999; Dekkers & Donatti, 1981; Druckman, 1995). One of
the problems appears to be non-empirical studies claiming learning outcomes that
cannot be substantiated by the data (see Table 2), as well as empirical studies making
claims not supported by the data (e.g., see Carr & Groves, 1998 in Table 3). Another
problem seems to be the paucity of empirical studies. Of the several thousand articles
on game and simulation studies published in peer reviewed journals in the last 15
years, only 19 were empirical (see Table 3 and O’Neil & Wainess, in press). Another
issue in claims attributed to games or simulations is the inaccurate use of media
definitions. For example, the medium used by Ricci et al. (1996) was defined by the
researchers as a game, but the description of the medium met the criteria for a
simulation game. Therefore, any outcomes attributed to the use of games would have
been inaccurate.
Another claim related to the proposed educational benefit of games and
simulations is their motivational characteristics. The assumption is that motivation
always leads to learning. However, a number of researchers suggest that this
relationship may not be true (e.g., Brougere, 1999; Druckman, 1995; Salas, 1998).
Salomon (1984) even contended that a positive attitude can actually indicate less
learning. And Dekkers and Donatti (1981) found that motivation wanes over time, as
the novelty of the game or simulation subsides. While these various arguments
potentially explain the mixed findings with regards to the learning outcomes in
games and simulation research, there is another argument which may provide a better
explanation of the mixed findings.
There appears to be consensus among a large number of researchers that the
negative, mixed, or null findings might be related to a lack of sound instructional
design embedded in the games (de Jong & van Joolingen, 1998; Garris et al., 2002;
Gredler, 1996; Lee, 1999; Leemkuil et al., 2003; Thiagarajan, 1998; Wolfe, 1997).
These researchers suggest that it is the instructional design embedded in a medium
and not the medium itself that leads to learning. Instructional design involves the
implementation of various instructional strategies. Among the various strategies,
reflection and debriefing have been cited as critical to learning with games and
simulations. Brougere (1999) argued that games cannot be designed to directly
provide learning; reflection is required to make transfer and learning possible.
Debriefing provides an opportunity for reflection (Brougere, 1999; Garris et al.,
2002; Thiagarajan, 1998).
Assessment of Problem Solving
According to O’Neil’s Problem Solving model (1999), successful problem
solving requires content understanding, problem solving strategies, and
self-regulation. Therefore, proper assessment of problem solving should address all
three constructs.
Measurement of Content Understanding
Davis and Wiedenbeck (2001) commented that meaningful learning results
in an understanding of the basic concepts of the new material through its integration
with existing knowledge. Day et al. (2001) proposed knowledge maps as a method
to measure content understanding. According to Baker and Mayer (1999), knowledge
or concept mapping is more parsimonious than traditional performance assessment.
In knowledge mapping, “the learner constructs a network consisting
of nodes (e.g., key words or terms) and links (e.g., ‘is part of’, ‘led to’, ‘is an
example of’)” (Baker & Mayer, 1999, p. 274). Each node represents a concept in the
domain of knowledge. Each link, which connects two nodes, represents the
relationship between the nodes; that is, the relationship between the two concepts
(Schau & Mattern, 1997). Knowledge structures are based on the premise that people
organize information into patterns that reflect the relationships which exist between
concepts and the features that define them (Day et al., 2001). Day et al. further
commented that, in contrast to declarative knowledge which reflects the amount of
knowledge or facts learned, knowledge structures represent the organization of the
knowledge.
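The node-and-link structure just described can be sketched as a small data model. The sketch below is illustrative only: the concept names and link labels are hypothetical and are not drawn from the instruments used in the cited studies.

```python
# Minimal sketch of a knowledge map: nodes are concept terms, and each
# link is a (source, label, target) triple naming the relationship
# between two concepts. Concepts and labels here are invented examples.

class KnowledgeMap:
    def __init__(self):
        self.nodes = set()   # concept terms
        self.links = []      # (source concept, link label, target concept)

    def add_concept(self, concept):
        self.nodes.add(concept)

    def add_link(self, source, label, target):
        # A link is meaningful only between two concepts already on the map.
        if source in self.nodes and target in self.nodes:
            self.links.append((source, label, target))

kmap = KnowledgeMap()
for concept in ("evaporation", "clouds", "rain"):
    kmap.add_concept(concept)
kmap.add_link("evaporation", "leads to", "clouds")
kmap.add_link("clouds", "cause", "rain")
```

One convenience of this triple representation is that a learner’s map can be scored by comparing its triples against an expert map, which is consistent with the automated scoring systems discussed below.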
As Schau and Mattern (1997) pointed out, learners should not only be aware
of concepts but also of the connections among them. In a training context,
knowledge structures reflect the degree to which trainees have organized and
comprehended the content of training (Day et al., 2001). Knowledge maps, which are
graphical representations of knowledge structures, have been used as an effective
tool to learn complex subjects (Herl et al., 1996) and to facilitate critical thinking
(West, Pomeroy, Park, Gerstenberger, & Sandoval, 2000). Several studies also
revealed that knowledge maps are not only useful for learning, but are a reliable and
efficient measurement of content understanding (Herl et al., 1999; Ruiz-Primo,
Schultz, & Shavelson, 1997). The results of a study by Day et al. (2001) indicated
that knowledge structures are predictive of both skill retention and skill transfer and
can therefore be viable indices of training outcomes. Ruiz-Primo et al. (1997)
proposed a framework for conceptualizing knowledge maps as a potential
assessment tool in science, because they allow for organization and discrimination
between concepts.
Ruiz-Primo et al. (1997) stated that, as an assessment tool, knowledge maps
are identified as a combination of three components: (a) a task that allows a student
to exhibit his or her content understanding in the specific domain, (b) a format for the
student’s responses, and (c) a scoring system by which the student’s knowledge map
could be accurately evaluated. Chuang (2003) modified this framework to serve as
an assessment specification using a concept map. Researchers have successfully
applied knowledge maps to measure students’ content understanding in science for
both high school students and adults (e.g., Chuang, 2003; Herl et al., 1999; Schacter
et al., 1999; Schau et al., 2001). For example, Schau et al. (2001) used
select-and-fill-in knowledge maps to measure secondary and postsecondary students’ content
understanding of science in two studies. The results of the participant’s performance
on the knowledge maps correlated significantly with that of a multiple choice test, a
traditional measure of learning (r = .77 for eighth grade and r = .74 for seventh grade),
providing validity to the use of knowledge maps to assess learning outcomes.
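A validity coefficient such as the r = .77 reported above is a Pearson correlation between two sets of scores from the same students. The sketch below illustrates only the computation; the knowledge-map and multiple-choice scores are invented, not data from the cited studies.

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between paired score lists.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

map_scores = [12, 15, 9, 18, 14, 11]    # hypothetical knowledge-map scores
test_scores = [60, 72, 55, 85, 70, 58]  # hypothetical multiple-choice scores
r = pearson_r(map_scores, test_scores)  # close to +1: the two measures agree
```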
CRESST developed a computer-based knowledge mapping system, which
measures the deeper understanding of individual students and teams, and reflects
thinking processes in real-time (Chung et al., 1999; O’Neil, 1999; Schacter et al.,
1999). The computer-based knowledge map has been used successfully in a number
of studies (e.g., Chuang, 2003; Chung et al., 1999; Hsieh, 2001; Schacter et al.,
1999).
In the four studies, the map contained 18 concepts of environmental science,
and seven links for relationships, such as “cause,” “influence,” and “used for.”
Subjects were asked to create a knowledge map in the computer-based environment.
In the study conducted by Schacter et al. (1999) students were evaluated by creating
individual knowledge maps, after searching a simulated World Wide Web
environment. In studies conducted by Chung et al. (1999), Hsieh (2001), and Chuang
(2003), two students constructed a group map cooperatively through networked
computers. Results of the cooperative studies showed that using networked
computers to measure group processes was feasible. Figures 2 and 3 show screen
shots of the knowledge mapping software used for this study, which is similar to the
knowledge map software used for the three studies discussed above.
Figure 2: Knowledge Map User Interface Displaying 3 Concepts and 2 Links
As seen in Figure 2, the computer screen was divided into three major
sections. The bottom section was for selecting the interaction mode. The middle
section was where one of the team members constructed the knowledge map. The top
section contained four menu items: “Session,” “Add Concept,” “Available Links,”
and “About.” Figure 3 shows the drop-down menu that appeared when “Add
Concept” was clicked. Clicking when the mouse pointer was over a concept added
that concept to the knowledge map. Figure 2 shows three concepts that were added:
desk, safe, and key. Figure 2 also shows links that were added to the concept map by
(A) clicking on one concept on the screen, (B) holding the mouse button down,
dragging to another concept, and letting go, which opened a ‘link’ dialog box, then
(C) selecting an appropriate link from a pull-down menu on the dialog box, and (D)
clicking the OK button on the dialog box to close the dialog box and complete the
link process.
Figure 3: Adding Concepts to the Knowledge Map
Measurement of Problem Solving Strategies
According to Baker and Mayer (1999), “Problem solving is cognitive
processing directed at transforming a given situation into a desired situation when no
obvious method of solution is available to the problem solver” (p. 272). Simply put,
problem solving is mental activity aimed at finding a solution to a problem. Problem
solving strategies, which are almost always procedural, can be categorized as
domain-independent (-general) and domain-dependent (-specific; Alexander, 1992;
Bruning, Schraw, & Ronning, 1999; O’Neil, 1999; Perkins & Salomon, 1989).
Domain-specific problem solving knowledge is knowledge about a particular field of
study or a subject, such as the application of equations in a math question, the
application of a formula in a chemistry problem, or the specific strategies to be
successful in a game. Domain-general problem knowledge is the broad array of
problem solving knowledge that is not linked with a specific domain, such as the
application of multiple representations and analogies in a problem solving task or the
use of Boolean search strategies in a search task (Chuang, 2003).
Transfer questions have been examined as an alternative to performing
transfer tasks. For example, in a recent study which involved computer-based
delivery of information on how lightning worked, to examine the split-attention
effect in multimedia learning, Mayer and Moreno (1998) assessed participants’
problem solving strategies through a list of transfer questions. There were four
transfer questions: “What could you do to decrease the intensity of lightning?”
“Suppose you see clouds in the sky, but no lightning. Why not?” “What does air
temperature have to do with lightning?” and “What causes lightning?” The
researchers had generated a list of 12 acceptable responses to the four questions and
participants received points for matching those responses. Participant responses to
the transfer questions were positively correlated with performance, indicating that
transfer questions are a viable alternative to more traditional methods of measuring
retention and transfer, such as tests and novel problem solving (Mayer & Moreno,
1998).
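The scoring procedure described above, in which a response earns points for matching items on a list of acceptable answers, can be approximated as a simple rubric matcher. The acceptable-response phrases and the substring-matching rule below are hypothetical simplifications, not the rubric Mayer and Moreno actually used.

```python
# Rubric-style scoring sketch: one point per acceptable idea matched.
# The phrases and the matching rule are invented illustrations.

ACCEPTABLE = {
    "q1": ["remove positive ions from the ground", "lower the cloud charge"],
    "q2": ["no charge separation has occurred", "updrafts are too weak"],
}

def score_response(question_id, response):
    response = response.lower()
    return sum(1 for idea in ACCEPTABLE[question_id] if idea in response)

points = score_response("q1", "You could lower the cloud charge somehow.")
```

In practice such scoring was done by human raters; the point is that transfer questions reduce assessment to counting matches against a fixed list of acceptable ideas.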
Measurement of Self-Regulation
While Bruning et al. (1999) commented that some researchers believe
self-regulation includes three core components—metacognitive awareness, strategy use,
and motivational control—according to the O’Neil Problem Solving model (O’Neil,
1999), self-regulation is composed of only two core components: metacognition and
motivation. Strategy use is a separate construct that encompasses domain-specific
and domain-general knowledge. Within the O’Neil (1999) model, metacognition is
comprised of two subcategories, planning and self-monitoring (Hong & O’Neil,
2001; O’Neil & Herl, 1998; Pintrich & DeGroot, 1990), and motivation encompasses
mental effort and self-efficacy (Zimmerman, 1994, 2000).
O’Neil and Herl (1998) developed a trait self-regulation questionnaire
examining the four components of self-regulation (planning, self-monitoring, mental
effort, and self-efficacy). As defined by the O’Neil Problem Solving model (O’Neil,
1999), of the four components, planning is the first step in problem solving, since
learners must have a plan to achieve the proposed goal. Self-efficacy is one’s belief
in his or her capability to accomplish a task (Davis & Wiedenbeck, 2001), and
mental effort is the amount of mental effort exerted on a task. Self-monitoring occurs
throughout problem solving and involves comparing one’s current state to the goal
state, to determine if the current strategy is effective or whether modifications should
be made.
In the trait self-regulation questionnaire developed by O’Neil and Herl
(1998), planning, self-monitoring, self-efficacy, and effort are assessed using eight
questions each, for a total of thirty-two questions. The reliability of this
self-regulation inventory has been established in previous studies. For example, in the
research conducted by Hong and O’Neil (2001), the reliability estimates (coefficient
alpha) of the four subscales of self-regulation—planning, self-checking, mental
effort, and self-efficacy—were .76, .86, .83, and .85, respectively. The research has
also provided evidence for construct validity.
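The reliability estimates cited above are coefficient (Cronbach’s) alpha values. As a rough illustration of how such a coefficient is computed, the sketch below uses invented responses to four hypothetical items; the actual subscales contain eight items each.

```python
# Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# The item responses below are invented; they are not data from the cited research.

def variance(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def cronbach_alpha(item_responses):
    # item_responses: one list per item, scores aligned by respondent
    k = len(item_responses)
    item_var = sum(variance(item) for item in item_responses)
    totals = [sum(scores) for scores in zip(*item_responses)]
    return k / (k - 1) * (1 - item_var / variance(totals))

items = [          # four hypothetical Likert items, three respondents
    [4, 3, 5],
    [4, 2, 5],
    [3, 3, 4],
    [5, 3, 5],
]
alpha = cronbach_alpha(items)  # high: respondents answer the items consistently
```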
Summary of Problem Solving Assessment Literature
Problem solving is cognitive processing directed at achieving a goal when no
solution method is obvious to the problem solver (Baker & Mayer, 1999). In the
O’Neil Problem Solving model (O’Neil, 1999: see Figure 1 in this dissertation),
problem solving is comprised of three components: content understanding, problem
solving strategies, and self-regulation. Content understanding refers to domain
knowledge. Problem solving strategies can be categorized into two types:
domain-independent (-general) and domain-dependent (-specific) problem solving strategies.
Self-regulation includes two sub-categories: metacognition and motivation.
Metacognition is composed of self-monitoring and planning. Motivation is
comprised of effort and self-efficacy.
Knowledge maps have been shown to be a reliable and efficient method for
the measurement of content understanding. CRESST has developed a simulated
World Wide Web space that incorporates knowledge mapping software to evaluate
problem solving strategies such as information searching strategies and feedback
inquiring strategies. Research has shown that computer-based problem solving
assessments are economical, efficient and valid measures that employ contextualized
problems that require students to think for extended periods of time and to indicate
the problem solving heuristics they were using and why.
Problem solving strategies are almost always procedural (Alexander, 1992;
Bruning et al., 1999; O’Neil, 1999; Perkins & Salomon, 1989). Domain-specific
problem solving strategies are applied to a particular field of study or subject, such as
the application of an equation to solve a math question. Domain-general problem
solving strategies refer to a broad array of problem solving knowledge not linked to a
specific domain, such as the application of multiple representations and analogies in
a problem solving task or the use of Boolean search strategies in a search task
(Chuang, 2003). Problem solving strategy transfer questions have been shown to be
an effective alternative to performing problem solving transfer tasks (Mayer &
Moreno, 1998).
According to the O’Neil Problem Solving Model (O’Neil, 1999), self-regulation
is composed of two core components: metacognition and motivation.
Metacognition is further analyzed into planning and self-monitoring (Hong &
O’Neil, 2001; O’Neil & Herl, 1998; Pintrich & DeGroot, 1990). Motivation is
further analyzed into mental effort and self-efficacy (Zimmerman, 1994, 2000).
Planning is the first step in problem solving (O’Neil, 1999). Self-efficacy is one’s
belief in the capacity to achieve a proposed goal (Davis & Wiedenbeck, 2001).
Mental effort is the amount of mental effort exerted on a task (Davis & Wiedenbeck,
2001). Self-monitoring occurs throughout the problem solving process and involves
comparing one’s current state to a goal state, to determine if the current strategy is
effective or whether modifications should be made (O’Neil, 1999). O’Neil and Herl
(1998) developed a trait self-regulation questionnaire to examine the four
self-regulation components (planning, self-monitoring, mental effort, and self-efficacy).
The questionnaire is comprised of 32 questions; eight questions for each of the four
self-regulation components. Research conducted by Hong and O’Neil (2001) has
shown reliability estimates for the planning, self-checking, mental effort, and
self-efficacy portions of the questionnaire of .76, .86, .83, and .85, respectively. The
research also provided evidence for construct validity.
Scaffolding
As discussed earlier, cognitive load theory (Paas et al., 2003) is concerned
with methods for reducing the amount of cognitive load placed on working memory
during learning and problem solving activities. Clark (2003b) commented that
instructional methods must also keep the cognitive load from instructional
presentations to a minimum. Scaffolding is considered a viable instructional method
that assists in cognitive load reduction. There are a number of definitions of
scaffolding in the literature. Chalmers (2003) defines scaffolding as the process of
forming and building upon a schema. In a related definition, van
Merrienboer et al. (2003) defined the original meaning of scaffolding as all devices
or strategies that support students’ learning. More recently, van Merrienboer, Clark,
and de Croock (2002) defined scaffolding as the process of diminishing (fading)
support as learners acquire more expertise. Allen (1997) defined scaffolding as the
process of training a student on core concepts and then gradually expanding the
training. To summarize, the four definitions of scaffolding involve the development
of simple to complex schema, all devices that support learning, the process of
diminishing (fading) support during learning, and the process of building learning
from basic concepts to complex knowledge, respectively. Ultimately, the core
principle embodied in each of these definitions is that scaffolding is concerned with
controlling the amount of cognitive load imposed by learning, and each reflects a
philosophy or approach to controlling or reducing that load. For the purposes of this
review, all four definitions of scaffolding will be considered.
As defined by Clark (2001), instructional methods are external
representations of internal cognitive processes that are necessary for learning but
which learners cannot or will not provide for themselves. They provide learning
goals (e.g., demonstrations, simulations, and analogies: Alessi, 2000; Clark 2001),
monitoring (e.g., practice exercises: Clark, 2001), feedback (Alessi, 2000; Clark
2001; Leemkuil et al., 2003), and selection (e.g., highlighting information: Alessi,
2000; Clark, 2001). Alessi (2000) added that instructional methods include: giving
hints and prompts before student actions; providing coaching, advice, or help
systems; and providing dictionaries and glossaries. Jones et al. (1995) added
advance organizers, graphical representations of problems, and hierarchical
knowledge structures to the list of instructional methods. Each of these examples is a
form of scaffolding.
In learning by doing in a virtual environment, students can actively work in
realistic situations that simulate authentic tasks for a particular domain (Mayer et al.,
2002). A major instructional issue in learning by doing within simulated
environments concerns the proper type of guidance (i.e., scaffolding), that is, how
best to create cognitive apprenticeship (Mayer et al. 2002). Mayer and colleagues
(2002) also commented that their research shows that discovery-based learning
environments can be converted into productive venues for learning when appropriate
cognitive scaffolding is provided; specifically, when the nature of the scaffolding is
aligned with the nature of the task, such as pictorial scaffolding for pictorially-based
tasks and textual-based scaffolding for textually-based tasks. For example, in a
recent study, Mayer et al. (2002) found that students learned better from a
computer-based geology simulation when they were given some graphical support about how
to visualize geological features, as opposed to textual or auditory guidance.
Graphical Scaffolding
According to Allen (1997), selection of appropriate text and graphics can aid
the development of mental models, and Jones et al. (1995) commented that visual
cues, such as maps and menus used as advance organizers, help learners conceptualize
the organization of the information in a program. A number of researchers support the
use of maps as visual aids and organizers (Benbasat & Todd, 1993; Chou & Lin,
1998; Chou et al., 2000; Farrell & Moore, 2000-2001; Ruddle et al., 1999).
Chalmers (2003) defined graphic organizers as organizers of information in a
graphic format, which act as spatial displays of information that can also act as study
aids. Jones et al. (1995) argued that interactive designers should provide users with
visual or verbal cues to help them navigate through unfamiliar territory. Overviews,
menus, icons, or other interface design elements within the program should serve as
advance organizers for information contained in the interactive program (Jones et al.,
1995). In addition, the existence of virtual bookmarks enables recovery from the
possibility of disorientation, or loss of place (Dias, Gomes, & Correia, 1999). However,
providing such support devices does not guarantee learners will use them. For
example, in an experiment involving a virtual maze, Cutmore et al. (2000) found
that, while landmarks provided useful cues, males utilized them significantly more
often than females did.
Navigation maps
Cutmore et al. (2000) define navigation as “…a process of tracking one’s
position in a physical environment to arrive at a desired destination” (p. 224). A
route through the environment consists of either a series of locations or continuous
movement along a path. Cutmore et al. further commented that “Navigation becomes
problematic when the whole path cannot be viewed at once but is largely occluded
by objects in the environment” (p. 224). The occluding objects may include internal
walls or large environmental features such as trees, hills, or buildings. Under these
conditions, one cannot simply plot a direct visual course from the start to finish
locations. Rather, knowledge of the layout of the space is required. Navigation maps
or other descriptive information may provide that knowledge (Cutmore et al., 2000).
Effective navigation of a familiar environment depends upon a number of
cognitive factors. These include working memory for recent information, attention to
important cues for location, bearing and motion, and finally, a cognitive
representation of the environment which becomes part of a long-term memory, a
cognitive map (Cutmore et al., 2000). According to Yair et al. (2001), the loss of
orientation and “vertigo” feeling which often accompanies learning in a virtual
environment is minimized by the display of a traditional, two-dimensional map.
map helps to navigate and to orient the user, and facilitates an easier learning
experience. Dempsey et al. (2002) also commented that an overview of player
position was considered an important feature in adventure games.
A number of experiments have examined the use of navigation maps in
virtual environments. Chou and Lin (1998) and Chou et al. (2000) examined various
navigation map types, with some navigation maps offering global views of the
environment (global navigation map) and others offering more localized views (local
navigation map), based on the learner’s current location. One hundred twenty-one
college students participated in the Chou and Lin (1998) study. Five groups were
created, based on four navigation map variations: a global map of the entire 94-node
hierarchical knowledge structure (the entire hypermedia environment), a series of
local maps for each knowledge area of the environment, a tracking map that updated
according to the participant’s location, with the participant’s location always in the
center and showing one level of nodes above and two below the current position, and
a no-map situation. One group was assigned to the global map, one to the local map,
one to the tracking map, and one to no map. A fifth group had access to all three
map types (global, local, and tracking). After being given instruction on their
respective navigation tools and time to practice, subjects were given 10 search tasks
and an additional 30 minutes to browse the hypermedia environment, after which they
answered posttest questions and an attitude questionnaire. Subjects also created a
knowledge map.
Results of the Chou and Lin (1998) study indicated that search times for the
all map and global map groups were significantly lower (i.e., search efficiency was
higher) than for the other three groups (local map, tracking map, and no map),
indicating benefits from using the global map or all maps (which included the global
map). There was no significant difference between the all map and global map groups.
Knowledge map creation scores for the all map and global map groups were also
significantly higher than
for the tracking map group, but not the local map or no map groups. Overall, the
results of the Chou and Lin (1998) study suggest that use of a global map or use of a
combination of maps, including the global map, results in greater search efficiency
and greater content understanding (as indicated by knowledge map development)
than either local maps or no map. Additionally, there were no differences found
between the use of a local map versus no map, suggesting no cognitive value to a
local map. With regard to attitude, there were no differences by map type for any of
the attitudinal scales, including attitude toward the learning experience, usability of
the system, and disorientation.
As with the Chou and Lin (1998) study, the Chou et al. (2000) study, which
involved over one hundred college students, showed that the type of navigation map
used affected performance. However, some findings were in contrast to the earlier
study. Those who used a global navigation map performed significantly better than
both the local map and no map groups with regard to the number of areas visited in
order to accomplish the search task. There was no difference in performance between
the local map and the no map groups. The number of times areas were revisited was
also significantly lower for the global map group than for either the no map or the
local map groups, but there was no difference between the no map and local map
groups. In the third measure, development of a knowledge map, the no map group’s
performance was significantly higher than the local map’s performance. The global
map group’s score was slightly lower than the no map group’s score and fell just
short of significance over the local map’s score. This differed from the earlier
study, which found a significant difference favoring the global map over
no map, suggesting that map use might not be a primary factor in developing content
understanding.
Results of the Chou et al. (2000) study indicated that map type can affect
performance in a search task. A global map resulted in better performance than a
local map or no map with regard to navigation (search speed and revisiting sites),
while performance on knowledge map creation by the no map group was
significantly better than for those who used a local map and only slightly better than
for those who used a global map. In other words, accomplishment of a problem
solving task was best with a global map while understanding of a problem solving
task or environment was best with either no map or with a global map. Results of
the two Chou and colleague studies (Chou & Lin, 1998; Chou et al., 2000) suggest
that global map use can improve search speed and reduce revisiting locations. The
mixed results of the two studies suggest that map use may not influence content
understanding.
According to Tkacz (1998), soldiers use navigation maps as tools, which
involve spatial reasoning, complex decision making, symbol interpretation, and
spatial problem solving. In her study involving 105 marines, Tkacz (1998) examined
the procedural components of cognitive maps required for using and understanding
topographic navigation maps, stating that navigation map interpretation involves
both top-down (retrieved from long-term memory) and bottom-up (retrieved from
the environment and the navigation map) procedures. Therefore, Tkacz examined the
cognitive components underlying navigation map interpretation to assess the
influence of individual differences on course success and on real world position
location. In addition, Tkacz related position location ability to video game
performance in a simulated environment. Performance measures consisted of
real-world position location (in the field), a map reading readiness exercise (using a map),
revised 11/20/05
91
and simulated travel in a videogame environment (a 3-dimensional maze, with
movement in six directions; North, South, East, West, Up, and Down). The goal of
the maze was to move as quickly as possible through the 125 room structure to a
goal room and then find the exit door.
All participants completed a map reading pretest to assess basic map skills.
The treatment group then received 15 hours of geographical training covering six
topics: terrain association, contour intervals, elevation, landforms, slope type, and
slope steepness. After the training, all groups completed spatial tests and a
geography test. The geography test assessed the six skills covered in geographical
training the treatment group received. Additional participant data obtained from
armed services vocational aptitude tests administered to each participant during
military enlistment were also utilized.
According to Tkacz, the geographical instruction significantly improved the
ability to perform terrain association and relate the real world scenes to
topographical map representations. Results of the study also indicated that
orientation and, to a lesser extent, reasoning ability are important for map
interpretation. Video game performance was affected by all spatial skills, and
particularly orientation and mental rotation (visualization), with high ability subjects
escaping the maze faster than lower ability subjects. Video game performance was
also affected by map reading ability, with better performance by those demonstrating
better map reading performance. It should be noted that, while Tkacz referred to the
maze as a game, it appears to fit Gredler’s (1996) definition of a simulation
game, not a game.
Mayer et al. (2002) commented that a major instructional issue in learning by
doing within simulated environments concerns the proper type of guidance, which
they refer to as cognitive apprenticeship. The investigators used a geological gaming
simulation, the Profile Game, to test various types of guidance structures (i.e.,
strategy modeling), ranging from no guidance to illustrations (i.e., pictorial aids) to
verbal descriptions to pictorial and verbal aids combined. The Profile Game is based
on the premise, “Suppose you were visiting a planet and you wanted to determine
which geological feature is present on a certain portion of the planet’s surface”
(Mayer et al., p. 171). While exploring, you cannot directly see the features, so you
must interpret data indirectly, through probing procedures. The experimenters
focused on the amount and type of guidance needed within the highly spatial
simulation.
Through a series of experiments, Mayer et al. (2002) found that pictorial
scaffolding, as opposed to verbal scaffolding, is needed to enhance performance in
a visual-spatial task. In the final experiments of the series, participants were divided
into verbal scaffolding, pictorial scaffolding, both, and no scaffolding groups.
Participants who received pictorial scaffolding solved significantly more problems
than did those who did not receive pictorial scaffolding. Students who received
strategic scaffolding did not solve significantly more problems than students who did
not receive strategic scaffolding. While high-spatial participants performed
significantly better than low-spatial students, adding pictorial scaffolding to the
learning materials helped both low- and high-spatial students learn to use the Profile
Game. Students in the pictorial-scaffolding group correctly solved more transfer
problems than students in the control group. However, pictorial scaffolding did not
significantly affect the solution time (speed) of either low- or high-spatial
participants. Overall, adding pictorial scaffolding to the learning materials led to
improved performance on a transfer task for both high- and low-spatial students in
the Profile Game (Mayer et al., 2002).
Contiguity Effect
The contiguity effect addresses the cognitive load imposed when multiple
sources of information are separated (Mayer & Moreno, 2003; Mayer & Sims, 1994;
Mayer et al., 1999; Moreno & Mayer, 1999). There are two forms of the contiguity
effect: spatial contiguity and temporal contiguity. Temporal contiguity occurs when
one piece of information is presented prior to other pieces of information (Mayer &
Moreno, 2003; Mayer et al., 1999; Moreno & Mayer, 1999). Spatial contiguity
occurs when modalities are physically separated (Mayer & Moreno, 2003). This
study is concerned with spatial contiguity, since the printed navigation maps will be
spatially separated from the 3-D video game environment. The contiguity effect results in
split attention (Moreno & Mayer, 1999).
Split Attention Effect
When dealing with two or more related sources of information (e.g., text and
diagrams), it is often necessary to mentally integrate corresponding representations
(e.g., verbal and pictorial) to construct a relevant schema to achieve understanding.
When different sources of information are separated in space or time, this process of
integration may place an unnecessary strain on limited working memory resources,
resulting in impairment in learning (Atkinson et al., 2000; Mayer & Moreno, 1998;
Tarmizi & Sweller, 1988). Mayer (2001) commented that the split attention effect
can be resolved by placing the components near each other; for example, placing text
labels near their related imagery in an illustration. In this study, the printed
navigation maps are spatially separated from the 3-D video game environment,
thereby inducing the split-attention effect.
Summary of Scaffolding Literature
Depending upon the researcher, scaffolding has several meanings: the
process of forming and building upon a schema (Chalmers, 2003); all devices or
strategies that support learning (van Merrionboer et al., 2003); the process of
diminishing support as learners acquire expertise (van Merrionboer et al., 2002); and
the process of training a student on core concepts and then gradually expanding the
training. What these four definitions have in common is that scaffolding is related to
providing support during learning, to control or limit cognitive load.
Clark (2001) described instructional methods as external representations of the
internal metacognitive processes of selecting, organizing, and integrating.
Instructional methods also provide learning goals (Alessi, 2000; Clark, 2001),
monitoring (Clark, 2001), feedback (Alessi, 2000; Clark, 2001; Leemkuil et al.,
2003), selection (Alessi, 2000; Clark, 2001), hints and prompts (Alessi, 2000), and
various advance organizers (Jones et al., 1995). Each of these components either
reflects a form of scaffolding or reflects a need for scaffolding.
Mayer et al. (2002) argued that a major instructional issue in learning by
doing within simulated environments concerns the proper type of guidance (i.e.,
scaffolding). One form of scaffolding is graphical scaffolding. According to Allen
(1997), selection of appropriate text and graphics can aid the development of mental
models, and Jones et al. (1995) commented that visual cues such as maps help
learners conceptualize the organization of the information in a program (i.e., the
learning space). A number of studies have supported the use of maps as visual aids
and organizers (Benbasat & Todd, 1993; Chou & Lin, 1998; Ruddle et al., 1999;
Chou et al., 2000; Farrell & Moore, 2000-2001).
According to Allen (1997), selection of appropriate text and graphics can aid
the development of mental models. Jones et al. (1995) commented that visual cues
such as maps and menus as advance organizers help learners conceptualize the
organization of information. Graphic organizers arrange information in a graphic
format, acting as spatial displays of information that can also serve as study aids
(Chalmers, 2003).
Cobb (1997) argued that cognitive load can be distributed to external media.
One type of external media is a navigation map. When navigating occluded
environments, where obstructions prevent viewing of or knowledge of an entire path,
navigation maps can provide that knowledge (Cutmore et al., 2000). According to
Yair et al. (2001), the disorientation that accompanies learning in a virtual
environment can be minimized by use of a traditional, two-dimensional map.
A number of experiments have examined the use of navigation maps in
virtual environments. Chou and Lin (1998) examined the use of various map types
for navigation during a search and information gathering task in a web-like environment.
Five map variations were examined: global map, two local map types, no map, and
all maps. Those using the global map or all maps performed searches more
efficiently (faster and with less revisiting) than those using the local maps or no
maps. Results of knowledge map creation were mixed, with the global and all map
groups performing better than one local map type but not the other local map type or
the no map group, suggesting that map type may not affect content understanding.
Based on results of the 1998 study, the Chou et al. (2000) study examined
map use with three map types: global map, local map, and no map. Similar to the
first study, the global map group performed significantly higher than the local and no
map groups. Also similar to the first study, the number of revisits to web pages was
significantly lower for the global map group as compared to the local and no map
groups. In contrast to the first study, knowledge map creation by the no map group
was significantly better than that of the local map group. The global map
group’s performance was equivalent to the no map group’s, but fell short of being
significantly better than the local map group’s. Results of the two Chou and colleague
studies (Chou & Lin, 1998; Chou et al., 2000) suggest that global map use can
improve search speed and reduce revisiting locations. The mixed results of the two
studies suggest that map use may not influence content understanding.
Tkacz (1998) stated that navigation map interpretation involves both top-down
(retrieved from long-term memory) and bottom-up (retrieved from the
environment and the navigation map) procedures. Tkacz (1998) examined the
cognitive components underlying navigation map interpretation to assess the
influence of individual differences. Tkacz also related position location ability to
video game performance in a simulated environment (a maze). Results of the Tkacz
(1998) study indicated that orientation, and to some extent reasoning ability, were
important for map interpretation. Video game performance was affected by all
spatial skills, and particularly by orientation and mental rotation (visualization).
Video game performance was also affected by map reading ability. While Tkacz
referred to the maze as a game, it appears to fit Gredler’s (1996) definition of a
simulation game, not a game.
Using a geological simulation game, the Profile Game, Mayer et al. (2002)
examined various types of guidance structures, ranging from no guidance to
illustrations (i.e., pictorial aids) to verbal descriptions to pictorial and verbal aids
combined. In the Profile Game, participants needed to determine surface features of
an environment, without directly observing the features. Results of the experiment
indicated that the type of scaffolding provided should be aligned with the type of
task. In other words, graphical scaffolding should be provided during graphical tasks,
auditory scaffolding for auditory tasks, and textual scaffolding for textual tasks
(Mayer et al., 2002).
While graphical scaffolding appears to be beneficial, there are potential
problems associated with this type of scaffolding. One such problem is referred to as
the contiguity effect, which refers to the cognitive load imposed when multiple
sources of information are separated (Mayer & Moreno, 2003; Mayer et al., 1999).
There are two forms of the contiguity effect: spatial contiguity and temporal
contiguity. Temporal contiguity occurs when one piece of information is presented
prior to other pieces of information. Spatial contiguity occurs when information is
physically separated (Mayer & Moreno, 2003). This study potentially imposes
spatial contiguity, since the navigation map is presented on a piece of paper which,
depending on where the participant places the map, is separated from the computer
screen. The contiguity effect results in split attention (Moreno & Mayer, 1999).
According to the split attention effect, when information is separated by space
or time, the process of integrating the information may place an unnecessary strain
on limited working memory resources (Atkinson et al., 2000; Tarmizi & Sweller,
1988; Mayer, 2001). Placing the pieces of information near each other can reduce the effect
(Mayer, 2001).
Summary of the Literature Review
Cognitive Load Theory is based on the assumptions of a limited working
memory with separate channels for auditory and visual/spatial stimuli, and a virtually
unlimited capacity long-term memory that stores schemas of varying complexity and
levels of automation (Brunken et al., 2003). According to Paas et al. (2003),
cognitive load refers to the amount of load placed on working memory. Cognitive
load can be reduced through effective use of the auditory and visual/spatial channels,
as well as schemas stored in long-term memory. There are three types of cognitive
load that can be described in relation to a learning or problem solving task: intrinsic
cognitive load (load from the actual mental processes involved in creating schema),
germane cognitive load (load from the instructional processes that deliver the to-be-learned
content), and extraneous cognitive load (all other load). An important goal of
instructional design is to balance intrinsic, germane, and extraneous cognitive loads
to support learning outcomes (Brunken et al., 2003).
Working memory refers to the limited capacity of holding and processing
chunks of information. Miller (1956) defined that limitation as five to nine chunks.
Paas et al. (2003) added that limitations might be greater when dealing with novel
information, with working memory only able to handle as few as two or three novel
chunks of information.
The three components of working memory (the central executive, the
visuospatial sketchpad, and the phonological loop) are limited in capacity and
temporary (Baddeley, 1986; Brunken et al., 2003). By contrast, long term memory,
which stores information as schema, is permanent and has an unlimited capacity
(Tennyson & Breuer, 2002). Schemas, which are cognitive constructs that
incorporate multiple elements of information into a single element (Paas et al.,
2003), reduce working memory load by treating those multiple elements as one
chunk of information. Through practice, schemas can also operate under automatic,
rather than controlled, processing, requiring minimal working memory resources
(Clark, 1999; Kalyuga et al., 2003; Mousavi et al., 1995). Because of their cognitive
benefits, the primary goals of instruction are the construction (chunking) and
automation of schemas (Paas et al., 2003).
Elaboration and reflection are processes involved in the development of
schemas and mental models. Elaboration consists of creating a semantic event
(Kees & Davies, 1990), and reflection encourages learners to examine information
and processes (Atkinson et al., 2003). According to Allen (1997), mental models,
which are internal representations of our understanding of external processes, differ
from schemas, which can model other types of knowledge, not just processes.
Metacognition (also known as the central executive) is the management of
cognitive processes (Jones et al., 1995), as well as the awareness of one’s own mental
processes (Anderson et al., 2001). The metacognitive components appearing in most
cognitive models are selecting (attending to relevant information), organizing
(building connections between pieces of information), and integrating (connecting
new information to prior knowledge; Harp & Mayer, 1998).
Meaningful learning is defined as deep understanding of the material and is
reflected in the ability to apply what was taught to new situations; i.e., transfer or
problem solving transfer (Mayer & Moreno, 2003). Meaningful learning requires
effective metacognitive skills (Jones et al., 1995). Related to meaningful learning is
mental effort, which refers to the cognitive capacity allocated to a task. Mental effort
is affected by motivation, which in turn cannot exist without goals (Clark, 2003d).
Goals are further affected by self-efficacy; the belief in one’s ability to successfully
carry out a particular behavior (Davis & Wiedenbeck, 2001).
While mental effort is affected by motivation, one does not necessarily lead
to the other (Clark, 2003d; Salomon, 1983). A number of factors can affect
motivation, including prior knowledge, strategy knowledge, personal-motivation
states (e.g., self-efficacy and intrinsic motivation) and knowledge of oneself (e.g.,
goals and self-perceptions; Browkowski et al., 1990). The difficulty of a task also
affects motivation. Tasks that are too easy or too hard tend to reduce motivation
(Clark, 1999). Expectancy-value theory proposes that the probability of behavior
depends on the value of a goal and the expectancy of attaining the goal (Coffin &
MacIntyre, 1999). Task value is affected by a number of factors including the
intrinsic value of a goal and its attainment value (Corno & Mandinah, 1983).
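Expectancy-value theory is often formalized multiplicatively; the following sketch is an illustration of that common formalization, not a formula drawn from Coffin and MacIntyre (1999) or the other cited sources:

```latex
% Illustrative formalization (an assumption for exposition):
% motivation M as the product of the expectancy E of attaining
% a goal and the value V placed on that goal. If either factor
% is zero, motivated behavior is not predicted.
M = E \times V
```

This multiplicative form captures the claim above that motivation cannot exist without goals: with no valued goal (V = 0) or no expectation of attaining it (E = 0), the predicted probability of behavior collapses.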
Related to meaningful learning is problem solving, which is “cognitive
processing directed at transforming a given situation into a desired situation when no
obvious method of solution is available to the problem solver” (Baker & Mayer,
1999, p. 272). O’Neil’s Problem Solving model (O’Neil, 1999) defines three core
constructs of problem solving: content understanding, problem solving strategies,
and self-regulation. Most of these components are further defined by subcomponents.
Content understanding refers to domain knowledge. Problem-solving strategies refer
to both domain-specific and domain-independent strategies. Self-regulation is
comprised of metacognition (planning and self-monitoring) and motivation (mental
effort and self-efficacy; O’Neil, 1999, 2002).
Learner control, which is inherent in interactive computer-based media,
allows for control of pacing and sequencing (Barab et al., 1999). It also can induce
cognitive overload in the form of disorientation—loss of place (Chalmers, 2003)—
and is a potential source for extraneous cognitive load. These issues may be a cause
of mixed reviews of learner control (Bernard et al., 2003; Niemiec et al., 1996;
Steinberg, 1989), particularly in relationship to novices versus experts (Clark,
2003c).
Computer-based educational games fall into three categories: games,
simulations, and simulation games. Games consist of rules, can contain imaginative
contexts, are primarily linear, and include goals as well as competition, either against
other players or against a computer (Gredler, 1996). Simulations display the dynamic
relationship among variables which change over time and reflect authentic causal
processes. Simulations have a goal of discovering causal relationships through
manipulation of independent variables. Simulation games are any blend of games
and simulations (Gredler, 1996). The terms computer-based game and video game
are used interchangeably (Kirriemuir, 2002b). While games have been described as
linear and simulations as non-linear, this refers to the goal structures of the media. In
terms of intervention structure, both media are non-linear. In other words, at each
intervention point the user or participant can select from either two choices (e.g., quit
or continue, increase or decrease something, go left or go right).
Beginning with the work of Malone (1981), a number of constructs have been
described as providing the motivational aspects of games: fantasy, control and
manipulation, challenge and complexity, curiosity, competition, feedback, and fun.
Fantasy evokes “mental images of physical or social situations that do not exist”
(Malone & Lepper, 1987, p. 250). Control and manipulation promote intrinsic
motivation, because learners are given a sense of control over their choices and
actions (deCharms, 1986; Deci, 1975). Challenge embodies the idea that intrinsic
motivation occurs when there is a match between a task and the learner’s skills
(Bandura, 1977; Csikszentmihalyi, 1975; Harter, 1978). For challenge to be
effective, the task should be neither too hard nor too easy, otherwise the learner will
lose interest (Clark, 1999; Malone & Lepper, 1987). Curiosity is related to challenge
and arises from situations in which there is complexity, incongruity, and discrepancy
(Davis & Wiedenbeck, 2001).
Studies on competition with games and simulations have resulted in mixed
findings, due to individual learner preferences, as well as the types of reward
structures connected to the competition (see, for example, Porter et al., 1990-1991;
Yu, 2001). Another motivational factor in games, feedback, allows learners to
quickly evaluate their progress and can take many forms, such as textual, visual, and
aural (Rieber, 1996). Ricci et al. (1996) argued that feedback can produce significant
differences in learner attitudes, resulting in increased attention to a learning
environment. However, Clark (2003) commented that feedback must be focused on
clear performance goals and current performance.
The last category contributing to motivation, fun, is possibly an erroneous
category. Little empirical evidence exists for the construct. However, evidence does
support the related constructs of play, engagement, and flow. Play is entertainment
without fear of present or future consequences (Resnick & Sherer, 1994).
Csikszentmihalyi (1975, 1990) defines flow as an optimal experience in which a
person is so involved in an activity that nothing else seems to matter. According to
Davis and Wiedenbeck (2001), engagement is the feeling of working directly on the
objects of interest in a world, and Garris et al. (2002) argued that engagement can
help to enhance learning and accomplish instructional objectives.
While numerous studies have cited the learning benefits of games and
simulations (e.g., Adams, 1998; Baker et al., 1997; Betz, 1995-1996; Khoo & Koh,
1998), others have found mixed, negative, or null outcomes from games and
simulations, specifically in the relationship of enjoyment of a game to learning from
the game (e.g., Brougere, 1999; Dekkers & Donatti, 1981; Druckman, 1995). Part of
the problem comes from unsupported claims from non-empirical studies (see Table
2) and even from empirical studies (e.g., Carr & Groves, 1998). However, there
appears to be consensus among a large number of researchers with regards to the
negative, mixed, or null findings, suggesting that the cause might be a lack of sound
instructional design embedded in the games (de Jong & van Joolingen, 1998; Garris
et al., 2002; Gredler, 1996; Lee, 1999; Leemkuil et al., 2003; Thiagarajan, 1998;
Wolfe, 1997). Among the various instructional strategies, reflection and debriefing
have been cited as critical to learning with games and simulations.
An important component in research on the effectiveness of educational
games and simulations is the measurement and assessment of performance outcomes
from the various instructional strategies embedded into the games or simulations,
such as problem solving tasks. Problem solving is cognitive processing directed at
achieving a goal when no solution method is obvious to the problem solver (Baker &
Mayer, 1999). The O’Neil Problem Solving model (O’Neil, 1999) includes three
components: content understanding; problem solving strategies—domain-independent
(general) and domain-dependent (specific)—and self-regulation, which is comprised
of metacognition and motivation. Metacognition is further composed of self-monitoring
and planning, and motivation is comprised of effort and self-efficacy.
Knowledge maps are reliable and efficient for the measurement of the content
understanding portion of the O’Neil Problem Solving model, and CRESST has
developed a simulated World Wide Web-based knowledge mapping environment to
evaluate problem solving strategies.
Problem solving can place a great amount of cognitive load on working
memory. Instructional strategies have been recommended to help control or reduce
that load. One such strategy is scaffolding. While there are a number of definitions of
scaffolding (e.g., Chalmers, 2003; van Merrionboer et al., 2002; van Merrionboer et
al., 2003), what they all have in common is that scaffolding is an instructional
method that provides support during learning. Clark (2001) described instructional
methods as external representations of the internal processes of selecting, organizing,
and integrating. Instructional methods provide learning goals, monitoring, feedback,
selection, hints, prompts, and various advance organizers (Alessi, 2000; Clark, 2001;
Jones et al., 1995; Leemkuil et al., 2003). Each of these components either reflects a
form of scaffolding or reflects a need for scaffolding.
One form of scaffolding is graphical scaffolding. A number of studies have
reported the benefits of maps, which are a type of graphical scaffolding (Benbasat &
Todd, 1993; Chou & Lin, 1998; Chou et al., 2000; Farrell & Moore, 2000-2001;
Ruddle et al., 1999). While Chou and colleagues (Chou & Lin, 1998; Chou et al.,
2000) found that certain types of maps (global navigation maps) can benefit search
efficiency, in terms of speed and revisit rates, it is unclear whether map type affects
content understanding. According to Chou and colleagues, map types also do not
appear to affect continuing motivation. Tkacz (1998) found that individual
differences affect map interpretation—particularly one’s orientation and reasoning
ability. Tkacz also found that video game performance is affected by all spatial skills
(particularly orientation and mental rotation). In contrast, Mayer et al. (2002) found
that graphical support aided in content understanding, regardless of spatial ability,
and that use of graphical aids in a graphical task resulted in higher transfer than
non-use of aids or use of other types of aids (e.g., textual or verbal).
While navigation maps can reduce or distribute cognitive load (Cobb, 1997),
they also have the potential to add load, ultimately counteracting their possible
positive effects. The spatial contiguity effect addresses the cognitive load imposed
when multiple sources of information are separated (Mayer & Moreno, 2003) and the
split attention effect, which is related to the contiguity effect, occurs when dealing
with two or more related sources of information (Atkinson et al., 2000). Therefore,
while navigation maps can provide valuable cognitive support for navigating virtual
environments, such as computer-based video games, the potential for extra load
caused by split attention must be considered and, where possible, addressed. Mayer
(2001) proposed that the split attention effect can be resolved by placing the
components near each other; for example, placing text labels near their related
imagery in an illustration. In this study, the use of a navigation map separate from
the screen where the game appears is expected to introduce additional cognitive
load—no viable solution was found to resolve this situation in this study.
CHAPTER 3
METHODOLOGY
Research Questions and Hypotheses
Research Question 1: Will the problem solving performance of participants
who use a navigation map (the treatment group) in a 3-D, occluded computer-based
video game (i.e., SafeCracker®) be better than the problem solving performance of
those who do not use the map (the control group)?
Hypothesis 1: Participants who use a navigation map (the treatment group)
will exhibit significantly greater content understanding than participants who do not
use a navigation map (the control group).
Hypothesis 2: Participants who use a navigation map (the treatment group)
will exhibit greater problem solving strategy retention than participants who do not
use a navigation map (the control group).
Hypothesis 3: Participants who use a navigation map (the treatment group)
will exhibit greater problem solving strategy transfer than participants who do not
use a navigation map (the control group).
Hypothesis 4: There will be no significant difference in self-regulation
between the navigation map group (the treatment group) and the control group.
However, it is expected that higher levels of self-regulation will be associated with
better performance.
Research Question 2: Will the continuing motivation of participants who
use a navigation map in a 3-D, occluded computer-based video game (i.e.,
SafeCracker®) be greater than the continuing motivation of those who do not use the
map (the control group)?
Hypothesis 5: Participants who use a navigation map (the treatment group)
will exhibit a greater amount of continuing motivation, as indicated by continued
optional game play, than participants who do not use a navigation map (the control
group).
Research Design
This research consisted of two studies: a pilot study and a main study. The
design of the main study was a true experimental, posttest-only, 2 by 2 repeated
measures design with randomized assignment of participants. It involved two groups
(one treatment group, which used a navigation map, and one control group, which
did not use a navigation map) and two occasions (one after the first game and one after
the second game). Each occasion consisted of creation of a knowledge map and
responding to a problem solving strategies questionnaire which elicited both
retention and transfer responses. Participants were randomly assigned to either the
treatment or control group. Group sessions involved only one group type: either all
treatment participants or all control participants. Due to limited availability of
computers, session size was limited to a maximum of three participants. At the end
of the approximately 90-minute session, participants were debriefed and allowed to
continue playing on their own for up to 30 additional minutes (to assess continuing
motivation).
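The randomized assignment used in this design can be sketched as follows. This is a minimal Python illustration under stated assumptions: the function name and the coin-flip assignment scheme are hypothetical, and the study itself tracked assignments in a Microsoft Excel spreadsheet, so the actual procedure may have differed.

```python
import random

def assign_participants(participant_ids, seed=None):
    """Randomly assign each participant, in order of arrival, to the
    treatment group (navigation map) or the control group (no map).
    A hypothetical helper for illustration, not the study's procedure."""
    rng = random.Random(seed)
    return {pid: rng.choice(["treatment", "control"])
            for pid in participant_ids}

# Example: assign the main study's 71 participants.
assignments = assign_participants(range(1, 72), seed=0)
n_treatment = sum(1 for g in assignments.values() if g == "treatment")
n_control = len(assignments) - n_treatment
```

Note that simple coin-flip assignment does not guarantee equal group sizes; a blocked randomization scheme would be needed for that.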
Study Sample
University of Southern California (USC) Human Subjects approval was
requested on June 17, 2004. Revisions were requested on July 15 for the recruitment
flyer and the Informed Consent Form. Changes to these two forms were made and
resubmitted on July 22, 2004. On July 26, 2004, the USC Institutional Review Board
(IRB) approved all forms, allowing participants to be contacted and the experimental
sessions to begin.
Pilot study sample. The pilot study sample consisted of two participants and
was conducted September 28 and 29, 2004. The purpose of the pilot study was to
review the procedures and instruments that were to be utilized in the main study.
The sample for the pilot study was a convenience sample and consisted of one
female approximately 49 years 4 months of age and one male approximately 32
years and 8 months of age. Both subjects were graduates of a southwestern
university. Both participants had a reasonable level of computer proficiency,
virtually no video game experience, and no prior experience with the game
SafeCracker®.
Main study sample. Between November 11, 2004 and March 21, 2005,
seventy-one English-speaking adults, ranging in age from 19 years and 4 months to
31 years and 11 months, participated in the main study. The average participant age
was a few days less than 23 years old. All the participants for the main study were
undergraduate students, graduates, or graduate students of a southwestern university.
Solicitation of participants for the main study. Participation was solicited
through several methods. The primary method was a standard paper-sized (8.5 by
11 inch) flyer (Figure 4) posted in various locations within five of the
university’s schools: business, engineering, communication, cinema, and education.
These schools were chosen for several reasons: the number of locations within their
facilities at which flyers were allowed to be posted; the ease of, or ability to obtain,
approval to post flyers; and the researcher’s belief that their students might be
interested in participating in a video game study.
Figure 4: Participant Solicitation Flyer
Flyers were also sent via email attachment to two of the university’s student
organizations that the researcher believed would include students potentially
interested in this type of study. The organizations were a student video game
development group and a student television and film special effects group. The
researcher was the faculty advisor to the video game group. Flyers were also posted
around campus at locations approved for display of announcements. These locations
included student congregation areas, major outdoor pathways, and parking structure
stairwells. The flyer (Figure 4) included a statement that participants would be paid
$15 for approximately 90 minutes of participation and that participants must have no
prior experience playing the personal computer- (PC-) based video game
SafeCracker® (Daydream Interactive, 1995/2001). Email contact information was
provided on the flyer.
Randomized assignment for the main study. As email requests for
participation in the study arrived, each participant was randomly assigned to either
the treatment or control group. Participant information was entered, in the order in
which their emails were received, into a Microsoft Excel 2002 spreadsheet for
tracking purposes. The spreadsheet was used for other logistical issues related to the
study. Randomized assignment was accomplished using a random number generator
within a Microsoft Excel 2002 spreadsheet. When the participant’s last name was
entered and either the enter or tab key was pressed on the computer, the random
number generator would display a number between 0.000000000 and 1.000000000,
in increments of .000000001. If the number was from 0.000000000 to 0.500000000,
the participant was assigned to the Control group. If the number was from
0.500000001 to 1.000000000, the participant was assigned to the Treatment group.
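The spreadsheet’s assignment rule can be sketched in Python. This is a hypothetical re-implementation for illustration only, not the original Excel formula; the function name and the participant names are invented:

```python
import random

def assign_group(rng=random):
    """Assign one participant using the spreadsheet's rule:
    a draw at or below 0.5 means Control, above 0.5 means Treatment."""
    draw = rng.random()  # uniform number in [0.0, 1.0)
    return "Control" if draw <= 0.5 else "Treatment"

# Participants are assigned in the order their emails arrive.
arrival_order = ["Participant A", "Participant B", "Participant C"]
assignments = {name: assign_group() for name in arrival_order}
```

With a fair uniform draw, each participant has an (essentially) equal chance of landing in either group, independent of arrival order.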
Various study times were selected for each group and participants were sent a list of
those times relevant to their group. Participants responded by listing one or more
times during which they could participate. From the responses, the researcher
scheduled participants to best fill each available time slot.
Number of participants whose data were analyzed. The data from 64
participants were analyzed; 33 from the treatment group (the navigation map) and 31
from the control group (no map). A total of 71 students participated in the study,
with 68 completing the study. Thirty-four of those completing the study received the
treatment, the navigation map, and thirty-four were in the control group and did not
receive the navigation map. The navigation map was a topological (overhead) floor
plan of the game’s playing environment, a mansion. Figure 5 shows the navigation
map used for the first of two games played.
Figure 5: Sample Navigation Map
Those in the treatment group also received instruction on how to read the
map and how to use the map for planning and navigation (see the section entitled
“Introduction to Using the Navigation Map” later in this chapter for information on
the map training). Those in the control group did not receive the navigation map and
were only given brief instruction on how to navigate the mansion without a map (see
the section entitled “Script for the Control Group on How to Navigate the Mansion”
later in this chapter for the script administered to the control group).
Of the 71 students who participated in the study, 68 completed the study
(three participants did not complete the study due to computer errors), but the data
from only 64 participants were included for analysis in this study. The main
experiment took place between November 11, 2004 and March 21, 2005. Near the
end of the experimental phase, two of the computers began to exhibit problems. One
computer began to freeze (quit accepting or responding to user input). The other
computer began to intermittently display an error during the second round of game
play during a session. In most cases, turning off the computers before various phases
of the study alleviated or prevented problems. However, near the end of the data
collection phase of the study, the computer that had intermittently been freezing
began to regularly freeze. Three participants who used that computer had to end the
study early, and not enough data was collected from any of them to be analyzable.
These were the three subjects who had not completed the study (causing the reduction
from 71 participants to 68). From that point onward, that computer was not used in
the study, limiting participation to only two per session. For the other computer that
had been exhibiting problems, restarting the computer at various phases worked well
for all but one participant. This participant had to leave the study early and not
enough data was collected for analysis.
In one session involving two participants, the researcher inadvertently had the
participants overwrite a file with some of their prior data, making the comparison
between the occasion 1 data and the occasion 2 data impossible. Those two
participants’ data were not included in the data analysis. This reduced the number
from 68 to 66. The two final participants not included in the data analysis had to
leave very early in the study due to computer problems. In both cases, the
participants had only been shown how to use one software package (the Knowledge
Mapping software). It was determined at the time to be acceptable to have those two
participants return to complete the study at a later date without compromising the
validity of their data. They did return and completed the study. However, after
reconsideration, it was decided that the instruction the two participants had received
the first time they participated made them different from the other participants.
Therefore, their data were not included for analysis. This reduced the number of
participants whose data were analyzed from 66 to 64.
Hardware
The pilot study took place in the home office of the researcher. The computer
utilized for the pilot study was a 450 MHz (megahertz) desktop computer made by
Tiger Direct (http://www.tigerdirect.com) with 64 MB (megabytes) of RAM
(random-access memory), a standard computer keyboard, a 3-button mouse, and a
21” CRT (cathode-ray tube) monitor.
The main study took place in the campus office of the researcher, where three
computers were set up for the study. A table was set up for two of the three
computers to be placed side by side. One of those computers was a Pentium 200,
NeTPower Symmetra computer with 128 MB of RAM that originally ran the
Windows NT® operating system, but was installed with the Windows 98® operating
system for the study. The computer configuration included a standard computer
keyboard, a 3-button mouse, and a 17” CRT monitor. The computer placed next to
the NeTPower Symmetra on the table was a Sony PCG-F520, Pentium III laptop
computer with 192MB of RAM, and running the Windows 98 operating system. The
laptop’s keyboard was used, but an external USB (universal serial bus) 2-button
mouse was added. A 14” CRT monitor was attached to this computer, because the
laptop’s built-in LCD (liquid crystal display) monitor was not very good; the
displayed images were very dark and had low contrast, and the screen exhibited a lot
of reflection and glare, making visibility difficult. The NeTPower Symmetra’s tower
case was placed on the desk between the two computers, to reduce the visibility by
participants of each other’s monitor; the researcher was concerned that a participant
might be distracted by the imagery on another participant’s screen.
The third computer was a Dell Latitude D500, Pentium M laptop computer
with 256MB RAM, and running the Windows 98 operating system. As with the other
laptop, this laptop’s keyboard was used, but a serial bus 2-button mouse was added.
This computer was placed on a lateral file cabinet. A monitor was not attached to this
laptop, because its built-in 12” LCD screen produced a satisfactory picture.
The three computers were placed so that participants could not easily see
what other participants were doing and participants had sufficient room to use the
mouse and to write on paper. The primary mode of computer input and interaction
during the study was via the mouse. The only time the keyboard was used was for
entering a file name when saving various types of data. Two files were saved during
one phase (occasion 1) of the study and two files were saved during a second phase
of the study (occasion 2), for a total of four files.
Instruments
A number of instruments were included in the study: a demographic, game
play, and game preference questionnaire (see the next section, entitled
“Demographic, Gameplay, and Game Preference Questionnaire”), two task
completion forms (see Figures 6 and 7), a self-regulation questionnaire (Appendix
A), the computer-based video game SafeCracker® (see the section entitled
“SafeCracker”), a navigation map of the game’s environment (see Figures 8 and 9),
a problem solving strategy retention and transfer questionnaire (see the section
entitled “Domain-Specific Problem Solving Strategies Measure”), and knowledge
mapping software (see the section entitled “Knowledge Map”).
Demographic, Gameplay, and Game Preference Questionnaire
At the start of the experiment, a questionnaire was administered to elicit
gender, age, amount of weekly video game play, and preferred game types. For
gender, participants marked either the male or female check box. For age,
participants entered both the number of years and the number of months. For amount
of weekly video game play, participants checked one of four boxes: none, 1 to 2
hours, 3 to 6 hours, and greater than 6 hours.
The game types section listed eight items: Puzzle games, RTS games, FPS games,
Strategy games, Role Playing games, Arcade games, PC games, and Console games.
The first five items in the game types section were game genres, that is, types of
games. The last two items were game platforms: specific combinations of hardware
and software. The sixth item, Arcade games, was both a game genre and a platform:
a type of game or a combination of hardware and software. For each of the game types, participants
entered a number from 0 to 5, with 1 indicating low interest and 5 indicating high
interest. Participants were prompted to enter a zero if they did not play that game
type or did not know the particular type of game type or what the initials meant. It
was determined by the researcher that those who played RTS or FPS games, in
particular, would know what those terms meant, since the terms were commonly
used by players of those particular game genres; RTS stands for Real-Time Strategy
and FPS stands for First-Person Shooter (also known as first-person perspective). If
a participant asked what a term meant, he or she was prompted to enter a zero.
The last two game types were gaming platforms. PC games, short for
Personal Computer games, refers to games played on personal, or home, computers
(PCs), such as an Apple computer or Windows-based computer. Console games were
those games played on gaming consoles, such as PlayStation®, X-Box®, or
Nintendo® game consoles.
Arcade games referred to both a genre and a platform. As a genre, arcade
games are short-duration games, often demanding rapid reactions, with
only one or two goals. As a game platform, arcade games historically refers to
games played on large stand-alone gaming consoles like those found in public
arcades. Today, however, arcade-style games are also available on home computers
(PCs).
The divisions for amount of weekly game play included in the questionnaire
(none, 1 to 2 hours, 3 to 6 hours, and greater than 6 hours) were based on a study
conducted in 1996 by the Media Analysis Laboratory, Simon Fraser University,
Burnaby, British Columbia, Canada. The study surveyed 647 children ranging in age
from 11 to 18, with 80% between the ages of 13 and 15. Six hundred forty six
participants completed the survey (351 male and 295 female). Based on the findings
of this study, which indicated that most children surveyed played video games
between 1 and 6 hours per week, the four divisions used in this study were created.
Information on the British Columbia study can be found at
http://www.mediaawareness.ca/english/resources/research_documents/studies/video_games/vgc_vg_and_television.cfm
Task completion form
Immediately before the start of each game (the game was played twice during
the study), participants were handed a Task Completion form. Figure 6 shows the
task completion form for the first game of the pilot study. Figure 7 shows the task
completion form for the second game of the pilot study. The task completion forms
served two purposes. First, they provided the researcher with data on which safes
were opened. Second, they provided participants with an advance organizer for tasks
to be completed during each game.
The task completion forms listed the names of the rooms that were involved
in a particular game and the safes that could be found in each room. Players were
told to mark off (check the boxes for) each safe they opened and to be sure to mark
them off as soon as a safe was opened, so as not to forget which safes were opened
during a game. At the end of each game, players were prompted to check the form to
ensure all opened safes were marked off.
Figure 6: Task Completion Form 1 for Pilot Study
Figure 7: Task Completion Form 2 for Pilot Study
Self-Regulation Questionnaire
A trait self-regulation questionnaire (Appendix A) designed by O’Neil and
Herl (1998) was administered to assess each participant’s degree of self-regulation,
which is one of the three components of problem solving ability as defined by
O’Neil (1999). Reliability of the instrument ranges from .89 to .94, as reported by
O’Neil & Herl (1998). The 32 items on the questionnaire were composed of eight
items each for the four self-regulation factors in the O’Neil (1999) Problem Solving
model (see Figure 1): planning, self-monitoring, self-efficacy, and effort. For
example, item 1 (Appendix A) “I determine how to solve a task before I begin.” is
designed to assess a participant’s planning ability; and item 2 “I check how well I am
doing when I solve a task” was to evaluate a participant’s self-monitoring. The
response format for each item was a Likert-type scale with four possible responses;
almost never, sometimes, often, and almost always. The self-regulation form was
administered in printed format, with participants using either a pen or pencil to enter
responses (a number from 1 to 4) for each question. Responses were later entered
into a Microsoft Excel 2002 spreadsheet and totals for each of the four self-regulation factors were generated using Excel’s SUM function.
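The subscale totals can be sketched in Python. The response values and the assumption that each factor’s eight items are contiguous are illustrative only; they are not the instrument’s actual item numbering or any participant’s data:

```python
# Hypothetical responses: 32 items scored 1 (almost never) to 4 (almost always).
responses = [3, 4, 2, 3, 4, 4, 3, 2,   # items 1-8
             2, 3, 3, 4, 2, 3, 4, 3,   # items 9-16
             4, 4, 3, 3, 2, 4, 3, 4,   # items 17-24
             3, 2, 4, 3, 3, 4, 2, 3]   # items 25-32

# Assume, for illustration, that each factor's eight items are contiguous.
factors = ["planning", "self-monitoring", "self-efficacy", "effort"]
totals = {factor: sum(responses[i * 8:(i + 1) * 8])
          for i, factor in enumerate(factors)}
# Each factor total ranges from 8 (all "almost never") to 32 (all "almost always").
```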
SafeCracker
The non-violent, PC-based video game SafeCracker® (Daydream Interactive,
1995/2001) was selected for this study, as a result of a feasibility study by Wainess
and O’Neil (2003). The purpose of the feasibility study was to recommend a video
game for use as a platform for research on cognitive and affective components of
problem solving, based on the O’Neil (1999) Problem Solving Model (see Figure 1).
According to Wainess and O’Neil (2003), a primary factor for selecting
SafeCracker® was time constraints. During the feasibility study, it had been decided
that participants should not be required to spend more than one and a half hours in
any study for which the game would be used. In addition, it was desirable to include
multiple iterations of gameplay within that time period. With SafeCracker, players
would be able to learn the controls and interface and enter the main game
environment (a mansion) in approximately 15 minutes. Using only two or three of
the mansion’s approximately 50 rooms could provide a large enough set of tasks, in
the form of clues and objects to find and safes to open, to examine complex problem
solving in 10 to 20 minutes, allowing for multiple problem solving tasks using
different room combinations. Table 4 replicates the list of game characteristics
found in Table 1 but adds a final column indicating the characteristics of
SafeCracker. From Table 4, it can be seen that SafeCracker met the characteristics
of a simulation-game: It met most of the characteristics of a game; however, it
included elements of a simulation. It contained the simulation element of cause-effect
relationships through its puzzle designs. It contained a goal structure that could
be considered primarily non-linear, as with simulations. And it did not contain the
game element of constraints, privileges, and penalties. Therefore, while meeting
most of the characteristics of a game, it included characteristics that were decidedly
those of a simulation, making SafeCracker a simulation-game.
Table 4
Characteristics of Games, Simulations, and SafeCracker

Characteristic | Game | Simulation | SafeCracker
Combination of one's actions plus at least one other's actions | Yes (via human or computer) | Yes | Yes (via computer puzzles)
Rules | Defined by game designer/developer | Defined by system being replicated | Defined by game designer/developer
Goals | To win | To discover cause-effect relationships | To win
Requires strategies to achieve goals | Yes | Yes | Yes
Includes competition | Yes (against computer or other players) | Yes | Yes (against computer)
Includes chance | Yes | Yes | Yes
Has consequences | Yes (e.g., win/lose) | Yes | Yes
System size | Whole | Whole or Part | Whole
Reality or Fantasy | Both | Both | Reality
Situation specific | Yes | Yes | Yes
Represents a prohibitive environment (due to cost, danger, or logistics) | Yes | Yes | No
Represents authentic cause-effect relationships | No | Yes | Yes
Requires user to reach own conclusion | Yes | Yes | No
May not have definite end point | Yes | Yes | No
Contains constraints, privileges, and penalties (e.g., earn extra moves, lose turn) | Yes | No | No
Linear goal structure | Yes | No | No
Linear intervention | No | No | No
Is intended to be playful | Yes | No | Yes
The plot of SafeCracker is that the player is a highly trained security
specialist applying for a position as head of security at Crabb and Sons, a prestigious
security company. The company’s primary business is manufacturing custom-made
safes ranging from fairly standard box safes to deceptive and complex hidden safes.
As part of the job application, the player must break into the premises of Crabb and
Sons, a mansion, during the night and navigate the building to find and open 34
safes. And the player has only 12 hours to do it.
To open some of the safes, the player must collect clues such as wiring
diagrams, tools such as keys, and other objects such as a cassette tape. Some safes
are easily solved through trial and error. Other safes require prior knowledge, such as
knowing who Lafayette was. Several of the safes chosen for this study could be
opened via trial and error, some needed clues, keys, and/or other objects, one
required prior knowledge (needing the word Lafayette to be entered), and one
required an understanding of math sequences or prior knowledge of Pascal’s
Triangle (see http://mathforum.org/workshops/usi/pascal).
Navigation map
Gameplay in SafeCracker takes place in a two-story mansion. For the
purposes of this study, three rooms on the first floor were utilized for each of the two
games involved in the study. The two games had one room in common, for a total of
five different rooms. A navigation map, in the form of a topological floor plan of the
first floor of the mansion, was downloaded from http://www.gameboomers.com/
wtcheats/pcSs/Safecracker.htm. The navigation map was subsequently modified
using Adobe® Photoshop® 6.5, to alter the view of the navigation map from one-point perspective to a flat 2-D image, to remove unnecessary artifacts, to remove
room numbers for each room, to add the appropriate name to each room in
accordance with names displayed on the game’s interface, and to add a compass
symbol to the top right side of the map. Two navigation maps were created. Figure 8
shows the final, modified version of the navigation map used for game 1. Figure 9
shows the final, modified version of the navigation map used for game 2. For each
map, three rooms were also darkened using Adobe® Photoshop® 6.5, to represent the three rooms
containing the safes needing to be opened in those games.
Figure 8: Navigation Map for Game 1
Figure 9: Navigation Map for Game 2
Based on the work of Chou and Lin (1998), these maps (Figures 8 and 9)
would be considered global navigation maps. These maps are considered global
because they provide information on the whole of the environment: all rooms on the
floor. A local map would have shown the details of a particular room, such as the
locations of furniture and safes. A local map might also have shown the locations of
objects and clues within the room necessary for opening the safes.
The three darkened rooms in each navigation map represented the three
rooms involved in each game. The darkened rooms were included on the maps
handed to the participants, who were informed, as explained during the navigation
map training session (discussed later in this document), that those rooms contained
the safes that were to be opened and all the clues and items needed for opening the
safes. Also notice that one room, the
Technical Design Room, was included in both games. The various rooms of the first
floor were examined by two researchers, to determine the best set of rooms to use for
this and two other studies (see Chen, 2005; Shen, in preparation). The considerations
were (A) to choose three rooms for each trial, (B) to have the safes require the least
amount of domain-specific prior knowledge beyond knowledge that every university
student should know, and (C) to ensure that all clues and other objects needed for
opening the required safes were contained within those rooms.
For the first game, the three rooms selected were the Reception Room, the
Small Showroom, and the Technical Design Room. The Reception Room was the
room used during the training session. One safe in that room was opened by the
participants during training. The other safe in the Reception Room had a
programming flaw that rendered the safe unable to be opened at times. Therefore, for
the first game, participants began by opening a game already in progress. In that
game, the two safes from the Reception Room were already opened, the contents
appeared in the game interface’s inventory section, ready to be used when necessary,
and the participants were located in the Reception Room facing north. The direction
was an arbitrary decision by the researcher. Because the game’s interface contained a
compass that participants could use to help with navigation, participants were told
the direction to prime them to use the compass.
Participants were told that the safes from the Reception Room were opened
and they were directed to examine the items in their inventory. Participants were not
told why the safes were already opened; they were not told about the flaw in the
game’s programming for one of the Reception Room’s safes. Before beginning to
play the game, participants were reminded to search for clues and to write down any
information they deemed important.
For the second game, it was determined by the researchers that the three best
rooms, taking into consideration the rooms already visited in the first game, would
include the Technical Design Room, which had been used in the first game. For the
second game, participants also began by opening a game already in progress. In this
game, the safes from the other two rooms from the first game, the Reception Room
and the Small Showroom, were already opened and their contents placed in the
participant’s inventory, ready for use. The participants began this game in the
Technical Design Room; they were facing north. Once again, this direction was an
arbitrary decision by the researcher, and participants were told the direction to prime
them to use the interface’s compass.
Participants were told that the safes from the other two rooms were opened.
They were also directed to their inventories and told that all the contents from the
safes from those two rooms were in their inventory. Participants were also told that,
even if they had already opened the safes in the Technical Design room during the
first game, they would need to open those safes again. Each of the safes in the
Technical Design room had been opened by 20 participants in the first game. Twelve
participants had opened both safes in the first game. Before beginning to play the
game, participants were reminded to search for clues and to write down any
information they deemed important. It was also suggested that they might want to
revisit the Reception Room and the Small Showroom, if they had not looked at all
the clues in those rooms during the prior game.
Knowledge Map
In this study, participants were instructed to create a knowledge map using a
computer-based software program, to evaluate their problem solving content
understanding after playing SafeCracker. According to Plotnick (1997), a knowledge
map, referred to as a concept map by Plotnick, is a “graphical representation where
nodes (points or vertices) represent concepts, and links (arcs or lines) represent the
relationships between concepts” (p. 81). He also commented that the concepts and
links are labeled on the map and the links could be unidirectional, bi-directional, or
non-directional.
During the study, participants played SafeCracker twice and completed a
knowledge map after each game session. The computerized knowledge map used in
this study had been successfully applied to other studies (e.g., Chuang, 2003; Chung
et al., 1999; Hsieh, 2001; Schacter et al., 1999). Appendix B lists the knowledge map
specification used in this study (adapted from Chen, 2005). The knowledge mapping
software used in this study offered only unidirectional links, but links could be added
in each direction to create bi-directional relationships between concepts.
Content understanding measure. Content understanding measures were
computed by comparing the semantic content score of a participant’s knowledge map
to the average semantic content score of three subject matter experts. According to
Mayer (2003), semantic knowledge refers to a person’s “factual knowledge about the
world” (p. 15). Therefore, the semantic content score derived from the knowledge
map represented a participant’s factual understanding of the concepts and
propositions involved in the game SafeCracker. The experts for this study were
researchers from a prior study involving knowledge map creation when playing the
game SafeCracker (Chen, 2005). Three expert knowledge maps were created for that
study and were used in this study. Figures 10, 11, and 12 show the three expert
SafeCracker knowledge maps.
Figure 10: Expert SafeCracker Knowledge Map 1
Figure 11: Expert SafeCracker Knowledge Map 2
Figure 12: Expert SafeCracker Knowledge Map 3
The three expert maps (Figures 10, 11, and 12) are based on the general
concepts and propositions relevant to problem solving in the game SafeCracker as a
whole, not the concepts and propositions specific to a room or a safe. For a
description of the process involved in determining and creating the three expert
knowledge maps, see Chen (2005).
Figure 13 shows a sample of a portion of a knowledge map that might have
been created by a participant during this study. It contains four concepts (key, safe,
catalog, and clue) and unidirectional links from key to safe, safe to key, safe to clue,
and catalog to clue. The specific nature of each link is displayed along the link’s
path. For example the link from key to safe included the phrase used for, indicating
the proposition of “key used for safe.” The concepts are read in the direction of the
arrow and the text of the link is placed between the text of the two concepts.
Figure 13: Sample Participant Knowledge Map for the Game SafeCracker®
[key] --used for--> [safe]; [safe] --requires--> [key]; [safe] --contains--> [clue]; [catalog] --contains--> [clue]
Scoring the knowledge map. The following describes how the participants’
knowledge maps were scored. A semantic score was calculated based on the
semantic propositions—two concepts connected by one link in the experts’
knowledge maps. Every proposition in a participant’s knowledge map was compared
against each proposition in the three experts’ maps. A match was scored as one
point. The average score of the three expert comparisons would be the semantic
proposition score for the student map.
An example of how to score a knowledge map of SafeCracker is shown in
Table 5. Table 5 contains the scoring data extracted from the knowledge map
displayed in Figure 13 above.
Table 5
An Example of Participant Knowledge Map Scoring

Concept 1 | Links    | Concept 2 | Expert 1 | Expert 2 | Expert 3
Key       | Used for | Safe      |    1     |    1     |    1
Safe      | Requires | Key       |    1     |    1     |    0
Catalog   | Contains | Clue      |    1     |    0     |    1
Safe      | Contains | Clue      |    0     |    0     |    1

Final score = total points ÷ number of experts = 8 ÷ 3 = 2.67
The following describes how the data in Table 5 were scored. First, the scores
for the semantic propositions (two concepts plus their link, such as ‘key used for
safe’) were calculated based on whether the same semantic propositions appeared in
each of the three expert maps. Each time there was a match, the participant’s
semantic proposition was scored with one point. If there wasn’t a matching semantic
proposition in an expert’s map, the participant’s semantic proposition received a
score of zero. Therefore, the score a participant could receive for a semantic
proposition ranged from zero (finding no matching proposition in any expert map) to
three (finding a match in all three expert maps).
The participant’s semantic proposition score was the average of the results of
the comparisons to all three expert maps; that is, the total score from all
three expert comparisons was divided by three. If, for example, all three experts were
matched, for a total of three points, the semantic proposition score the participant
received would be one; 3/3 = 1. If the participant had matched only two experts, the
participant would have received .66; 2/3 = .66. If the participant had matched only
one expert, the participant would have received .33; 1/3 = .33. And if the participant
had not matched any expert, the participant would have received zero; 0/3 = 0. The
final score for a knowledge map would be determined by adding all the matches
received and dividing the total matches by three. Table 5 above shows that the four
participant semantic propositions matched expert maps eight times. The total of eight
was then divided by three giving the participant a knowledge map score of 2.67.
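The scoring procedure can be sketched in Python, using the propositions from Table 5. The representation of a map as a set of (concept, link, concept) triples, and the reduction of the expert maps to only the relevant propositions, are assumptions for illustration:

```python
# Each map is represented as a set of semantic propositions:
# (concept 1, link, concept 2), read in the direction of the arrow.
participant = {("key", "used for", "safe"),
               ("safe", "requires", "key"),
               ("catalog", "contains", "clue"),
               ("safe", "contains", "clue")}

# Expert maps reduced, for illustration, to the propositions shown in Table 5.
experts = [
    {("key", "used for", "safe"), ("safe", "requires", "key"),
     ("catalog", "contains", "clue")},                              # Expert 1
    {("key", "used for", "safe"), ("safe", "requires", "key")},     # Expert 2
    {("key", "used for", "safe"), ("catalog", "contains", "clue"),
     ("safe", "contains", "clue")},                                 # Expert 3
]

def semantic_score(participant_map, expert_maps):
    """One point per participant proposition per expert map containing it,
    with the total divided by the number of expert maps."""
    matches = sum(prop in expert
                  for prop in participant_map
                  for expert in expert_maps)
    return matches / len(expert_maps)

score = round(semantic_score(participant, experts), 2)  # 8 matches / 3 = 2.67
```

Run against the Table 5 data, the function reproduces the worked example: eight matches across the three expert maps, for a final score of 2.67.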
Domain-Specific Problem Solving Strategies Measure
In this study, a problem solving instrument successfully employed by Richard
Mayer and Roxanna Moreno (see for example, Mayer, 2001; Mayer & Moreno,
1998; Mayer et al., 2003; Moreno & Mayer, 2000, 2004) was modified to measure
the domain specific problem solving strategies of the game SafeCracker. In one
study, Mayer and Moreno (2003) measured retention by having participants respond
to an open-ended domain-specific statement, “Please write down an explanation of
how lightning works.” Acceptable answers, referred to as idea units, were defined
by the researchers.
An idea unit is a proposition. Bruning et al. (1999) defined a proposition as
“the smallest unit of meaning that can stand as a separate assertion” (p. 54).
Bruning et al. further asserted that propositions are more complex than the concepts
they include. According to Bruning and colleagues, while concepts are relatively
elemental categories, “propositions can be thought of as the mental equivalent of
statements or assertions about observed experience and about the relationships
among concepts. Propositions can be judged to be true or false” (p. 54). These
descriptions of a ‘proposition’ support the earlier definition of a ‘semantic
proposition’ as two concepts plus their link, such as ‘key used for safe.’ In
accordance with Bruning et al.’s (1999) definition and descriptions, this statement
is more complex than the concepts it includes, it is the mental equivalent of an
assertion, and it can be judged as true or false.
Participants’ responses (i.e., idea units) in the Mayer and Moreno (2003)
study were then compared to the experts’ idea units and considered acceptable if the
content matched, regardless of exact wording. Idea units defined by the researchers
included “air rises,” “water condenses,” “water and crystals fall,” and “wind is
dragged downward.” Each response a participant wrote that matched an idea unit
received one point. Retention scores were determined by totaling the number of
matches, with a higher score indicating higher retention.
In the same study by Mayer and Moreno (2003), participants were given the
transfer question, “Suppose you switch on an electric motor, but nothing happens.
What could have gone wrong?” For this question, the researchers generated a list of
acceptable idea units (answers) such as “the wire loop is stuck,” or “the wire is
severed or disconnected from the battery” and participants’ responses were
compared to these idea units. As with the retention responses, one point was given to
each response that matched one of the researchers’ idea units, regardless of wording.
A participant’s transfer score was the sum of the matches, with higher scores
indicating greater transfer.
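The Mayer and Moreno scoring just described amounts to counting matches against a fixed list of expert idea units. The sketch below, with hypothetical names, illustrates the counting step; note that in the actual studies a match was judged by content regardless of wording, a human judgment that the literal comparison used here for illustration cannot capture.

```python
# Illustrative sketch of idea-unit scoring: one point per participant
# response that matches an expert-defined idea unit. Real scoring matched
# on content regardless of exact wording; literal matching is used here
# only to show the counting step.

def score_responses(responses, expert_idea_units):
    return sum(1 for response in responses if response in expert_idea_units)

expert_units = {"air rises", "water condenses", "water and crystals fall"}
participant_responses = ["air rises", "clouds form", "water condenses"]

print(score_responses(participant_responses, expert_units))  # 2
```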
The problem solving strategy questions designed for this dissertation research
were one retention question and one transfer question relevant to the problem solving
tasks in SafeCracker of finding rooms and opening safes. Table 6 lists the problem
solving strategy retention and transfer questions.
Table 6
Problem Solving Strategy Retention and Transfer Questions
Question Type    Question
Retention        List the ways you found rooms and opened safes.
Transfer         List some ways to improve the design of the game play for
                 opening safes.
Participants were given four sheets of paper clipped together. At the top of
page one was the retention question “List the ways you found rooms and opened
safes.” Following the retention question were forty-two double-spaced and numbered
lines spanning to the end of page two. At the top of page three was the transfer
question “List some ways to improve the design of the game play for opening safes.”
Following the transfer question were forty-two double-spaced and numbered lines
spanning to the end of page four.
Following the logic of a related type of transfer test administered by Mayer,
Sobko, and Mautone (2003), the transfer question constitutes a transfer test because
the participants must select and adapt what was learned in the game to fit the
requirement of the question. Instead of simply being cued to recall what had
occurred, which is the function of the retention question, participants had to judge
which aspects of the game and game play were relevant to the question and had to
determine how to link that information to their responses to the transfer question. In
short, the transfer question required the participants to go beyond simply recalling
the game and game play experience, although recalling relevant portions of the game
and game play were certainly a component of the solution.
To answer the transfer question, participants had to recall what they had done
in the game. Next they had to consider what events could have been improved by
modifying the game in some way. Then they needed to consider exactly how the
game functioned and determine what was present that shouldn’t have been present or
what wasn’t present that should have been. For example, one participant’s response
to the transfer question was, “make dials bi-directional.” Another participant’s
response was, “… a little indicator of a person moving along with some kind of a
map should be given.” A third participant responded, “If clue was already used,
remove from list.” All three responses represent functionality or features that did not
exist in the game. Participants had to infer the benefit of their addition by what was
or wasn’t already in the game and by their gaming experience. Therefore, their
response represented a form of transfer.
As with the Mayer and Moreno (2003) study, each participant response in
this study was extracted into an idea unit and scored against expert idea units. Two
sets of expert idea units were used in this study; one set for the problem solving
strategy retention question and one set for the problem solving strategy transfer
question. The expert idea units for the problem solving strategy retention question
were developed by three researchers, through a three step process. First, each
researcher reviewed a set of problem solving strategy retention idea units created by
Chen (2005) for a study that also used the game SafeCracker. The problem solving
strategy retention question for that study differed from the retention question for this
study. The retention question for the Chen study was, “Write an explanation of how
you solve the puzzles in the rooms” while the retention question for this study was
“List the ways you found rooms and opened safes.”
The Chen retention question centered around opening (solving) safes (the
puzzles), while the retention question for this study centered around finding rooms in
addition to opening safes. After reviewing the list from the Chen study, each expert
independently generated a list of idea units applicable to the problem solving
strategy retention question for this study. Then, through discussion among the three
researchers of the various idea units that had been generated, a single list was created
representing the agreed upon 28 idea units for this study. Table 7 lists the problem
solving strategy retention idea units for this study.
Table 7
Idea Units of the Problem Solving Strategy Retention Question
1 Scan, observe, analyze, recognize, and/or compare rooms and/or room
features.
2 Scan, observe, analyze, recognize, and/or compare safes and/or safe
features.
3 Walking and/or turning.
4 Search for rooms and/or safes.
5 Search for clues, hints, keys, tools, or other objects.
6 Recognize or examine clues, hints, keys, tools, or other objects.
7 Find or pick up clues, hints, keys, tools, or other objects.
8 Use clues, hints, keys, tools, or other objects.
9 Attempt to open safes through trial and error.
10 Attempt to open safes through an organized/methodical method.
11 Use interface’s room indicator.
12 Use interface’s compass.
13 Use map/floor plan.
14 Remember items, clues, and/or hints.
15 Remember diagrams/images, such as safe solutions or map.
16 Draw images/diagrams and/or jot down notes.
17 Recognize and/or interpret feedback.
18 Trial and error/Guessing.
19 Apply elimination or methodical method.
20 Figure out the direction to a room or safe.
21 Plan before doing.
22 Determine what the problem and/or difficulty is.
23 Determine safe’s procedure, pattern, system, or sequence.
25 Make connection between clues and/or hints.
26 Apply subject knowledge, such as math or science.
27 Use real-life and/or game-related common sense/knowledge.
28 Use logic.
The expert idea units for the problem solving transfer question for this study
were developed by three researchers, through a three step process. First, each
researcher reviewed a set of problem solving strategy transfer idea units created by
Chen (2005) for a study that also used the game SafeCracker. The problem solving
strategy transfer question for the Chen study differed from the transfer question for
this study. The transfer question for the Chen study was, “List some ways to improve
the fun or challenge of the game” while the transfer question for this study was “List
some ways to improve the design of the game play for opening safes.”
The Chen (2005) transfer question centered around the “game” as a whole,
while the transfer question for this study centered around the “opening safes” portion
of the game. However, the process of opening safes did include the need to find the
safes as well as the need to find and collect relevant clues and tools. Also, the Chen
transfer question involved improving fun or challenge. The transfer question for this
study involved improving game play, which is less specific than “fun” and
“challenge,” but could include both fun and challenge.
After reviewing the list of expert transfer idea units from the Chen study,
three experts independently generated a list of idea units applicable to the problem
solving strategy transfer question for this study. Then, through discussion among the
three researchers of the various idea units that had been generated, a single list was
created representing the agreed upon 21 idea units for this study. Table 8 lists the
problem solving strategy transfer idea units for this study.
Table 8
Idea Units of the Problem Solving Strategy Transfer Question
1 Add new rules to the game.
2 Modify the amount or kind of in-game help, advice, instruction, and/or
demonstration.
3 Create new ways of giving in-game help to the players.
4 Modify the amount or type of pre-game help, instruction, and/or
demonstration.
5 Create new ways of giving pre-game help, instruction, and/or
demonstration.
6 Modify the complexity, patterns, or procedures for opening safes.
7 Create new safe types, safe features, or methods for opening safes.
8 Modify the complexity or procedures for finding rooms or safes.
9 Create new room features.
10 Modify the complexity or procedures for finding clues, tools, objects,
and/or hints.
11 Create new clue, tool, object, or hint features.
12 Modify existing functionality/features in the user interface (helpers,
tools, interface elements, controls, etc.).
13 Create new interface features (helpers, tools, interface elements,
controls, etc.).
14 Increase the connection between rooms, safes, clues, and/or objects.
15 Increase the amount, type, and/or function of audio used in the game.
16 Create more opportunities for interaction with the game.
17 Modify the background story elements of the game to be more
meaningful and/or interesting.
18 Modify the time allotted for the game or game elements.
19 Modify existing elements in the game to alter the complexity of the
problem solving experience.
20 Create new elements to the game to alter the complexity of the problem
solving experience.
21 Add other players in the game to compete or cooperate.
Scoring of the Problem Solving Strategies Retention and Transfer Responses.
Scoring of the problem solving strategy retention and transfer responses was
a three step process. In step one, two researchers independently reviewed each
participant response to determine if a single response represented more than one idea
unit. If it did, the response was divided into multiple responses, each containing a
single idea unit. Then the two lists of responses were compared. The original list
contained 1448 participant responses, comprising both the problem solving strategy
retention and transfer responses; breaking some responses into multiple idea units
brought the list to 1513 responses, an addition of 65. The two researchers agreed on
all but 38 of those additions, and therefore agreed on 1475 of the 1513 idea units
(97.5% agreement). Next, the two researchers examined each of the 38 discrepancies and
reached agreement on whether each was or wasn’t a separate idea unit. This process
resolved all discrepancies and resulted in the addition of seven more idea units, for a
total of 1520 idea units.
Next, the two researchers independently assigned an expert idea unit to each
of the 1520 participant responses, assigning expert retention idea units to the
participants’ retention responses and expert transfer idea units to the participants’
transfer responses. Then, the two lists were compared. The two researchers had
agreed on 1178 of the 1520 idea units (77.5% agreement). As with the prior
process, the two researchers reviewed and resolved each of the 342 disagreements.
Since the two lists of participant responses and related idea units then matched, one
was discarded, leaving a single list of problem solving strategy retention and
transfer responses with their related idea units.
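The agreement figures reported above follow from simple percent agreement (items agreed upon divided by total items). The sketch below, with hypothetical names, reproduces the two reported values from the counts in the text.

```python
# Simple percent agreement between two raters.

def percent_agreement(agreed, total):
    return 100 * agreed / total

# Step 1 (unitizing): the raters disagreed on 38 of 1513 split decisions.
print(round(percent_agreement(1513 - 38, 1513), 1))  # 97.5

# Step 2 (idea-unit assignment): the raters agreed on 1178 of 1520 responses.
print(round(percent_agreement(1178, 1520), 1))  # 77.5
```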
Procedure for the Pilot Study
There were two pilot study participants. By flip of a coin, one participant
was randomly assigned to the treatment group (the navigation map) and the other
participant was assigned to the control group (no map). The pilot study was
conducted one participant at a time. Each of the two sessions (one treatment and one
control) took approximately 91 minutes to administer and began with introducing the
participant to the objective of the experiment, describing the experiment as an
examination of methods that might help student performance when using a video
game for learning, but not discussing the issue of navigation maps. The introduction
took approximately three minutes. Next, the participant and the researcher signed a
consent form and the participant was assigned a three-digit number that had been
randomly generated prior to the study. The three-digit number was used for
confidentiality purposes; by assigning a number to each participant, all that
participant’s data would be associated with a number, not a name.
Administration of Demographic and Self-Regulation Questionnaires.
Following the brief three minute introduction, participants were asked to fill
out the demographic and self-regulation questionnaires. See the earlier sections
“Demographic, Game Play, and Game Preference Questionnaire” and “Self-Regulation
Questionnaire” for complete descriptions of the items contained in the
two questionnaires. Participants were told they would have eight minutes to fill out
the questionnaires.
Introduction to Using the Knowledge Mapping Software.
Following administration of the demographic and self-regulation
questionnaires, participants were introduced to the knowledge mapping software.
Ten minutes of the study were allocated to this process. Participants were asked to
start the knowledge mapping software. Once started, knowledge mapping was
explained. Participants were told that knowledge mapping involved concepts and
links. It was explained that a concept was an idea or word and could represent
something concrete like house or something abstract like love. It was then explained
that two concepts could be linked based on some sort of relationship. Relationships
included causal relationships, temporal or chronological relationships, or simple
relationships; indicating ways in which the two concepts were connected.
Examples of all three relational types were given, by selecting examples
from several random domains, such as the causal relationship of ‘learning leads to
knowledge.’ The components of each relationship were explained. In the example of
‘learning leads to knowledge,’ it was explained that learning and knowledge were the
concepts and the phrase ‘leads to’ was a causal connection between the two
concepts; in other words, learning causes knowledge. Next, it was explained that
research had found that a person’s ability to create a knowledge map of a domain
was directly related to that person’s understanding of the domain; the more accurate
and complete the knowledge map, the greater that person’s understanding of the domain.
Then the Knowledge Mapping interface was explained by describing the
function of the ‘Add Concepts’ menu item and the three on-screen buttons (see
Figure 2). Participants were asked to click on a concept to add it to the screen.
Participants were then prompted to move the concept around. Next participants were
asked to add a few more concepts.
Participants were then asked to click the Link button and told that would
switch them to a mode that would allow links to be created between concepts.
Participants were told to click and drag from one concept to another concept. That
caused a dialog box to open. Participants were prompted to click on the dialog box’s
pull-down menu, to see a list of available links. Participants were then prompted to
click on a link, which caused the dialog box to close and an arrow to be drawn from
one concept to the other, with the link text they had selected appearing along the link
arrow’s path.
Participants were asked to create several more links, after which they were
told to click the third mode button, the Erase button. Next, participants were asked to
click on the words of a link and saw the link disappear. Next they were prompted to
click on a concept that had at least one link connected to it and watched as both the
concept and its links disappeared. Participants were reminded that once they entered
a mode (Move, Link, Erase), they could perform only that mode’s action. They were
also told there was no undo button in the software. So if they accidentally deleted a concept
and, therefore, all links going to or from that concept, they would need to recreate
the concept and all its links. It was suggested they change to link or move mode, as
soon as they were done erasing items, to prevent any unwanted erasures. Participants
were asked if they understood how to use the software. Upon receiving a positive
response, they were shown how to exit the software.
Introduction of the Game SafeCracker
Participants were told they would next learn the game, Safecracker, and were
prompted to open SafeCracker by clicking on an icon on the desktop. Over the next
15 minutes, participants were guided through entering the game, finding the mansion
(the main game play area of SafeCracker), entering the mansion, searching the first
room, and opening one safe. During this 15 minute period, participants using the
navigation map (the treatment group) were also taught to read the navigation map
and to plan and find paths. The navigation map group was also given some strategies
for playing the game (see the next section on “Introduction to Using the Navigation
Map”). For navigation and strategy instructions given to the control group, see the
section “Script for the Control Group on How to Navigate the Mansion.”
To ensure equivalent training on using SafeCracker, all participants received
the same instruction, by use of a script. The next paragraph begins the script that was
used for the pilot study. As will be discussed later in the “Adjustments to the
SafeCracker instructions” subsection under “Results of the Pilot Study,” a number of
changes were made to the SafeCracker training script, as a result of observations,
discussions, and participant comments that occurred during and after the pilot study.
See the script under the main study section of this dissertation for the final
version of the script after modifications were made based on feedback from the pilot
study. Note: the term beat, which appears in the script, is a common term in script
writing and refers to a “momentary pause in dialog or action” (Armer, 1988, p. 260).
The term long pause does not have a history in script writing and is used here to
indicate a pause of at least two seconds. Most text in parentheses, including ‘beat’
and ‘long pause,’ consists of notes to the researcher as reminders or cues during
delivery of the script. Text in all uppercase letters was a cue to deliver that text
with greater emphasis than other text. As an exception, text in all uppercase letters,
but in parentheses, was a reminder to the researcher.
SafeCracker training script. Thank you for participating. In this study, you
will be asked to accomplish a series of tasks. The tasks will be to locate and open
various safes in various rooms. In order to open some of these safes, you will need to
find certain items. You will be told which rooms to visit. Those rooms contain all the
items needed and all the safes you will need to open. You do not need to spend time
in any other rooms. Even though the mansion has two floors, all the rooms you will
visit are on the first floor. Do not go to the second floor.
Your goal is to open all the safes in the rooms you are given. For each room,
you will be told the room’s name (e.g., the Small Showroom). Together, we will
walk through the steps needed to find and enter the mansion. Then, we will walk
through searching the first room and opening one safe. After that, you will be given the
number and name of several rooms and will be required to find the rooms and open
the safes. Let’s work our way into the mansion.
GETTING INTO THE MANSION: You see the game’s start menu with four
main buttons. Don’t do anything until I tell you to. Once I tell you to click the
“new” button to begin a new game, you’ll see the game’s main interface screen and a
phone. The phone will be ringing. As you move your cursor to the top part of the phone’s
hand piece, you’ll notice that the cursor symbol is a double circle. When you’re over the
part of the phone piece you can click, the cursor will turn into a double circle with a
hand. This hand symbol indicates you’re over something you can grab. You will then
click on the phone with the left mouse button. Before you click the hand piece, be
sure you’re prepared to listen carefully to the message. It only plays once. Also,
you’re going to need to write down a four-digit number. Have paper and pencil
ready. Once the message is complete, you will click on the phone piece hook to hang
up the phone. Once you’ve hung up the phone, music will begin playing; to turn it
off, click the off button on the right side of the screen. Go ahead, now, and click the
“new” button.
(Once everyone’s done listening to the message) Click in the large center
screen and, while keeping the left mouse button depressed, move the mouse left and
right. The scene will pan left and right. If you stop moving but continue to hold the
mouse button down, the scene continues to pan. The farther you move the mouse,
the faster the scene will pan. You can also use the left and right cursor arrows on
your keyboard to pan left and right; try that. In addition, you can move the mouse up
and down or use the up and down arrow keys to tilt your view upward or downward.
Now let’s exit the phone booth. Rotate until you see the phone booth door,
then click to open the door. Once the door is open, you can click to move outside the
phone booth. The cursor symbol that indicates you can move forward is a double-circle
with an upward facing arrow. Once outside the phone booth, rotate until you
see the lit two story mansion across the street. Now listen to my next series of
instructions before doing anything. You need to move down the street to the
crosswalk just before the mansion. Then cross the street. Next, you’ll click a couple
times to move along the sidewalk until you’re in front of the mansion’s gate. Go
ahead now and move to the mansion’s front gate.
Move your cursor until you get the circle and hand symbol, when you point
to the small lock on the center of the gate. Then click. This locks you into a close up
view of the three tumblers on the lock. Each contains a symbol. Go ahead and take a
moment to try opening the lock. You can rotate the tumblers by clicking on them.
(Wait one minute). If you haven’t opened the lock yet, set the three tumblers to
music symbols and the lock will open. Then move to the front door. You’ll need to
navigate around the fountain to get to the front door. Once you’re at the front door,
click on the keypad box to the left side of the door. Then click on the appropriate
buttons, to enter the four digit code you wrote down at the phone booth. Once the
code is accepted, you can click on the door to open it and then click the inner door to
open it as well. Then click to move into the mansion. Go ahead and take a few
seconds to look around and move around the room you’re in. Do not leave the room.
(Wait 15 seconds) Now let’s collect some objects. Navigate around the desk
until you’re facing the computer on the right. It’s extremely important that you do
not click on anything unless I tell you to. (Wait for everyone to get to the correct
position). Click on the blue coffee mug. The item shows up in the small left viewer,
where you can rotate it. Next, click on the piece of paper to the left of the blue cup. It
contains some diagrams. If you move your cursor toward the bottom of the paper, a
down arrow symbol appears. Click it to see more of the bottom portion of the paper.
You can move the cursor to the top and click to return to the top portion of the paper.
Click the back button to exit viewing the paper. Find the two other pieces of paper on
the desk. Only one can be clicked. Now, let’s open a safe.
On either side of the front wall of the room are safes. The left side has a
brown and gold safe, the right side a blue safe. Move to the blue safe. Click on the
safe to lock yourself onto the safe. To exit the safe, click the BACK button on the
right side of the screen. Go ahead and try that, then click to lock yourself back onto
the safe. To open the safe, you need to set the three dials to the correct numbers. Set
the three dials, then click the safe handle. The three lights will either flash green or
be a steady green. The lights from top to bottom represent the three dials from left to
right. If you select the correct number, the light will be a steady green. Once all three
lights are a steady green, the safe will open. Go ahead and open the safe. Before
leaving the safe, be sure to click on each of the objects in the safe, to add them to
your inventory. Once you leave the safe, you cannot reopen it.
NOW, SHOW THE IMPORTANT INTERFACE COMPONENTS (E.G.,
THE ROOM NAME INDICATOR).
FOR MAP USERS, READ THE “SCRIPT FOR INTRODUCING MAP TO
PARTICIPANTS”
FOR NON-MAP USERS, READ THE “SCRIPT FOR THE CONTROL
GROUP ON HOW TO NAVIGATE THE MANSION”
THEN, ANNOUNCE THE FIRST TASK AND THE ROOMS INVOLVED.
FOR THE MAP USERS, HAND OUT THE APPROPRIATE MAP.
Introduction to Using the Navigation Map.
Those in the navigation map group (the treatment group) were next
introduced to reading the navigation map and were given instruction on path finding
and path planning. They also received instructions on strategies for playing the
game. To ensure equivalent learning by all those in the treatment group, a script was
utilized (see below). Those in the control group were given simple guidelines for
navigation. See the next section entitled “Script for the Control Group on How to
Navigate the Mansion” for information on the script given to the control group.
To support the navigation map training script, a special version of the
navigation map, a training map, was created (Figure 14), displaying a portion of the
floor plan, along with shaded portions, labels, and arrows, as aids to the script.
Figure 14: Training Map
The training map was handed to each participant on a standard sheet of white
paper and contained the title “How to read the map.” As participants viewed the
training map, the script was read. In addition to training on map reading, path
planning, and path finding, the script included some strategies for playing the game.
The script also contains two words or phrases in parentheses: beat and long pause.
The term beat is a common term in script writing and refers to a “momentary pause
in dialog or action” (Armer, 1988, p. 260). The term long pause does not have a
history in script writing and is used here to indicate a pause of at least two seconds.
The following script was read to the navigation map group.
Training map script. This is a map of the first floor of the mansion. You will
use this map to help navigate to the various rooms. Currently, you’re in, the
reception room, the large room in the middle of the bottom portion of the map. The
map shows all the rooms on the first floor with their related names. These will match
the names of the rooms that contain the safes you will be asked to open and the items
that will help you to open the safes. The names also match the names that will appear
in the name indicator on your interface, which you have already been shown. You
will not need to visit any other rooms, unless they are along a path you take in order
to get to a required room. In addition to the room names, the map also shows the
locations of the doors in each room. If you need to, you are allowed to write on this
map.
Let’s take a moment to learn how to read and use the map. This map
shows a portion of the bottom floor and includes text labels describing the most
important map features. On the left side are four labels. The top label on the left
(point to the label) contains the words “room name” and points to the name of the
room entitled “Big Showroom.” Take a look and you’ll see that every room has a
label. Those areas that do not have names are either closets or bathrooms. For each
of your two tasks, you will be told the names of the rooms you must visit. As shown
to you earlier, there is a room name indicator on the interface. You will use this
indicator to determine which room you are in.
The middle label on the left contains the word “stairs” and points to a block
of black and gray stripes. That pattern indicates stairs. Notice that there’s another set
of stairs just to the right and a small set of stairs connecting the two (point to these
features).
On the left side, near the bottom is a label with the word “door.” Gaps or
open spaces between rooms indicate doors. Every room has at least one door and
most have several doors. (Point to several door openings.)
On the left side, at the bottom is a label with the words “Main Entrance” and
an arrow pointing to the door you came through to enter the mansion. This is the
only door on the map that is not indicated using an opening or gap. (Point to the
door.)
In the middle of the map is a label with the word “toilet” (point to the label).
The arrow points to a small circle, which is the symbol for a toilet (point to the
circle). There are other bathrooms in the mansion that have toilets, but for some
reason, the people who created this map chose to only show this toilet.
On the right side of the map are three labels. The one on the far right side of
the map, containing the words “points north,” points to a symbol with a black
circle, a spike pointing upward, and two spikes pointing downward. This is a typical
map indicator that shows the direction “North.” The spike pointing upward points
north. (Point to the tip of the spike.)
On the right side, just below and to the left of the “points north” label, is a label
with the word “closet.” As mentioned before, closets and bathrooms don’t have
room names. The one exception is the room with the toilet. That room’s name is
W.C., which stands for “water closet.” Water closet is a term used in England for
bathroom.
The last label is at the bottom on the right side of the mansion and contains
the word “door.” The three arrows emanating from that label point to three more
examples of doors.
The last part of the map to show you is the darkened rooms. In the map
you’re looking at, there are three darkened rooms. They are “reception,” the “small
showroom,” and the “technical design” room (Point to the three rooms). Just above
the technical design room is a dark label with the words “your task takes place in the
shaded rooms.” As already mentioned, you will be given two tasks. For each task
you will be given a map. Each map will have a different set of darkened rooms,
indicating the rooms you must visit in order to complete each task. While you are
allowed to visit other rooms, your time to complete each task is limited, so it is best
to not waste time visiting unnecessary rooms.
As a first step for each task, it is recommended that you examine the map to
determine the shortest or most efficient paths for getting from room to room and
for returning to the various rooms. As an example, in the current map, since you’re already
in the reception room, it would be logical to move next to the small showroom and
then the technical design room. That gives you the shortest path between the three
rooms.
Take a moment to think about the path you’d take to get from the Reception
to the Designer’s room. (Wait about 30 seconds).
To get from the Reception to the Designer’s room, you’d first move to the
Small Showroom, by going through the door on the right side of the Reception room.
Then, to move from the Small Showroom to the Designer’s Room, you’d use the
door on the right side of the Small Showroom. To get back to either the Small
Showroom or the Reception, you’d simply reverse your path.
Once you have a plan for how you will navigate to and from rooms, then you
would begin moving around, collecting items and attempting to open safes.
Do you have any questions?
Script for the Control Group on How to Navigate the Mansion
While the navigation map group (the treatment group) was given not only
detailed instruction on how to read the navigation map but also instruction on
how to plan or find paths, the control group was given only limited instruction
on navigation. As with the navigation map group’s script, the script for
the control group included strategies for playing the game. The following script was
read to the control group.
For each of your two tasks, you will need to navigate to three rooms and
return to the rooms by retracing your path. For each task, you will be told the names
of the rooms you need to visit. As just shown, the interface includes a window that
displays the name of the room you’re in. Be sure to keep track of your room location.
Because you will need to find your way and then find your way back, use whatever
method you think will help to keep track of where you’ve been and the path you’ve
taken.
First Game.
After players were given instruction on navigating the environment, they
were given their first Task Completion Form (see Figure 6), which listed two of the
three rooms involved in the study and the safes they would need to open in those
rooms. Participants were told to mark off safes as they opened them. They were then
prompted to open a game already in progress. Once the game was open, they were
then told the names of the three rooms involved in the first game (Reception room,
Small Showroom, and Technical Design room) and told to take note that only two
rooms (Small Showroom and Technical Design room) and their safes were listed on
the Task Completion Form. Participants were told that they were currently in the
third room (Reception Room) and the safes for that room had already been opened
for them and the safes’ contents were in their inventory.
Those in the treatment group were then handed their navigation map for the
first game (see Figure 8). They were told that the shaded rooms for the first game
were the same three rooms that were shaded on the learning map. Both groups were
reminded to look at objects in the rooms, including the room they were
currently in, the Reception Room. Finally, participants were told they would have 15
minutes to find and open the safes and were told to begin. After 15 minutes,
participants were prompted to save their game and exit SafeCracker. During the save
process, participants were given instruction on how to name their file. They were
told to use the three digit number they were randomly assigned and to add a hyphen
and a one to the end of the number. For example, if the participant’s number was
803, the filename would be 803-1. Participants were told that the next time they
saved the game they would enter their number and a hyphen followed by the number
two (e.g., 803-2).
Creating the Knowledge Map (Occasion 1).
Participants were told to start the Knowledge Mapping software. After asking
whether they had any questions, participants were told they would have seven
minutes to create a knowledge map and were told to begin. At the end of seven
minutes, participants were asked to click the X icon at the top right corner of the
screen, to exit the software. That caused a ‘save’ dialog box to open. As with the
save process for the game SafeCracker, participants were prompted to use the three
digit number they were randomly assigned and to add a hyphen and a one to the end
of the number. For example, if the participant’s number was 803, the filename would
be 803-1. Participants were told that the next time they saved the knowledge map
they would enter their number and a hyphen followed by the number two (e.g., 803-2).
Problem Solving Strategy Questionnaire (Occasion 1).
Participants were handed the Problem Solving Strategies questionnaire and
told how to fill it out. They were told the questionnaire was four pages long and
contained two questions—a retention and a transfer question. Participants were told
each question involved two pages with the first question beginning on page 1 and the
second question beginning on page 3. Participants were further told to start on
question 1 (the retention questions) and to not go to question 2 (the transfer
questions) until told to do so. Last, they were told they would be given two minutes
per question and were then told to begin. After two minutes, participants were
prompted to switch to the second question, the transfer question, located on page
three of the questionnaire. Participants were also told to remain on the second
question and not to return to the first question. They were also reminded to keep
writing until they were told to stop.
Second Game.
Upon collecting the Problem Solving Strategies questionnaires, participants
were prompted to restart SafeCracker and to open a different game that was already
in progress. While the program was opening up, participants were handed their
second Task Completion Form (see Figure 7). They were told that one of the rooms
was a room included in the first task: the Technical Design room. They were told
that they would need to open the safes in that room even if they had opened them in
the first game. Those in the Treatment group were handed the navigation map for the
second game, which included the three darkened rooms for that game (see Figure 9).
Once SafeCracker was started and the appropriate game in progress was
opened, participants were told that the safes from the rooms in the first game that
weren’t part of the second game had been opened and their contents added to their
inventory. Participants were reminded that they would have 15 minutes for this game
and were told to begin. After 15 minutes, participants were prompted to save their
game, using their three digit number, along with a hyphen and the number two, as
the filename, and to exit SafeCracker.
Knowledge Map and Problem Solving Strategy Questionnaires (Occasion 2)
Participants were next prompted to restart the Knowledge Mapping software
and were given seven minutes to create their second knowledge map. At the end of
seven minutes, the participants were prompted to exit the software and to save their
file using their three digit number, a hyphen, and the number two as the filename.
Last, following the same procedures as for the first problem solving strategy
questionnaire, participants were given their second problem solving strategy
questionnaire (which was identical to the first problem solving strategy
questionnaire) and prompted to respond one question at a time. They were given a
total of four minutes for the questionnaire; two minutes per question.
Debriefing and Extra Play Time
Upon completion of the second problem solving strategies questionnaire,
participants were told the study was over. They were asked what they thought of the
game and whether it was similar to games they had played or games they liked. If
appropriate, they were asked what types of games, and even which specific games, they
liked. They were also asked if they had any questions. Finally, participants were told
they could continue playing the game if they were interested. The debriefing process
took approximately three minutes. The offer of extra play time was for collecting
data on continuing motivation. If a participant chose to continue playing, he or she
was coded as exhibiting continuing motivation, regardless of the amount of time he
or she continued to play. If a participant did not choose to continue playing, he or she
was coded as not exhibiting continuing motivation, even if he or she had indicated a
desire to continue playing.
Timing Chart for Pilot Study
Table 9 lists the activities encompassing the pilot study and the time
allotted to each activity, and ends with the total time for the study (91 minutes) plus
the optional additional 30 minutes.
Table 9: Time Chart of the Pilot Study

Activity                                                     Time Allocation
Introduction and study paperwork                             3 minutes
Self-regulation and demographic questionnaires               8 minutes
Introduction to knowledge mapping software                   10 minutes
Introduction to SafeCracker for both groups and map
  reading and navigation for the treatment group             15 minutes
First game (3 rooms) plus task completion form               15 minutes
Knowledge map creation (occasion 1)                          7 minutes
Problem solving strategy retention and transfer
  questionnaire (occasion 1)                                 4 minutes
Second game (3 rooms) plus task completion form              15 minutes
Knowledge map creation (occasion 2)                          7 minutes
Problem solving strategy retention and transfer
  questionnaire (occasion 2)                                 4 minutes
Debriefing                                                   3 minutes
TOTAL                                                        91 minutes
Optional additional playing time                             Up to 30 minutes
Results of the Pilot Study
Overall, the instruments and procedures in the pilot study worked well,
although some of the instructions given to participants needed modification and
improvement. The first modification involved the amount of time allotted to
participants for filling out the demographic and self-regulation forms. Participants
were told they had eight minutes to fill out the forms. While both participants
completed both forms well within the eight minutes allotted, comments from one of
the participants indicated that unnecessary stress had been added by feeling that time
was limited. It was decided that, for the main study, participants would not be told
how much time they had, but would be prompted to finish soon, if time was running
out. In addition, because both participants in the pilot study finished well within the
eight-minute time frame, it was determined that the time allotted for filling out the
demographic and self-regulation forms could be reduced from eight minutes to seven
minutes. This revision was made for the main study.
Adjustments to the knowledge mapping instruction. There were several small
problems discovered with the introduction of knowledge mapping. The first problem
was the introduction of extraneous cognitive load. Extraneous load refers to the
cognitive load imposed by unnecessary materials (Harp & Mayer, 1998; Mayer,
Heiser, & Lonn, 2001; Moreno & Mayer, 2000; Renkl & Atkinson, 2003; Schraw,
1998). In the pilot study, participants were asked to open the software. Once opened,
participants were told what knowledge mapping was. This explanation took
approximately one minute. Because the software was open, participants were
attending to the software while, at the same time, receiving information on
knowledge mapping. This imposed unnecessary, or extraneous, cognitive load. It
was decided that, for the main study, participants would be told about knowledge
mapping and then prompted to start the software.
Another important problem with the explanation of knowledge mapping was
with the examples given for types of links. Three types of knowledge map links were
described: temporal links, causal links, and simple relational links. In the pilot study,
examples were given from three randomly selected domains. It was decided for the
main study that all three examples should be within the same domain. To support
this, since all participants would be at, or had been at, the same southwestern
university, the domain for the knowledge mapping instruction would be that
southwestern university, and all three link examples would be related to that
university. It was also determined that knowledge mapping instruction could be
reduced from the 10 minutes allotted for the pilot study to just eight minutes for the
main study.
A number of other small changes were made to the knowledge mapping
instructions. In particular, a strategy component was added to the main study. In the
main study, participants would be explicitly told that EVERY concept was
applicable to the game SafeCracker. The following instruction was also added to the
end of the knowledge mapping instruction for the main study:
Since every concept is applicable to SafeCracker, and therefore
should be used in your knowledge map, a recommended strategy is
to begin your knowledge map by opening the ‘add concept’ pull-down menu and clicking on every concept. Then move the
concepts around so you can see all of them. Then switch to link
mode and start making connections.
Adjustments to the SafeCracker instructions. Several small flaws were found
with the script for the SafeCracker instruction. One example is how participants were
introduced to panning their view within the game environment. In the original script,
as part of the panning instructions, participants were told to “click the mouse in
middle of the screen and don’t let go.” While this seemed to the researcher to be an
obvious, explicit command, participants varied in where on the screen they clicked,
including toward the bottom or to the far right side. A participant also asked, “Which
middle? The middle of the monitor or the middle of the main window on the
interface.” For the main study, the script was changed to “click the mouse one or two
inches to the right of the phone’s hand piece and don’t let go.” That seemed to
alleviate the problem found in the pilot study.
As with the knowledge mapping instruction, it was determined that strategy
instruction needed to be added to the SafeCracker instruction. In a related
observation, it was noticed that both participants in the pilot study forgot to search
for clues. In an attempt to improve searching and search strategies, the following is
an example of instruction added to the script for the main study regarding a piece of
paper placed on a desk in the game: “Go ahead and click on it. Notice the diagrams.
These might be important for opening a safe. You might want to write them down
later, when you start playing the game.” The following search instructions,
reminders, and strategies were added to the end of the instructions for the navigation
map group in the main study.
Once you have a plan for how you will navigate to and from
rooms, then you would begin moving around, collecting items
and attempting to open safes. REMEMBER, IT IS VERY
IMPORTANT THAT YOU LOOK AT ALL THE ITEMS IN
ROOMS, TO FIND CLUES THAT MIGHT HELP OPEN
SAFES. Not everything gets added to your inventory. You may
need to write things down.
The following search instructions, reminders, and strategies were added to
the instructions for the control group in the main study.
Once you have a plan for how you will navigate to and from
rooms, then you would begin moving around, collecting items
and attempting to open safes. REMEMBER, IT IS VERY
IMPORTANT THAT YOU LOOK AT ALL THE ITEMS IN
ROOMS, TO FIND CLUES THAT MIGHT HELP OPEN
SAFES. Not everything gets added to your inventory. You may
need to write things down.
An important change was the addition of time for navigation
map training for the treatment group. Originally, 15 minutes was allotted
to SafeCracker instruction. While this was sufficient time for the control
group, the treatment group needed more time. It was determined that eight
extra minutes were needed for map instructions in the main study.
Adjustments to the problem solving strategy questionnaire instructions. The
next change involved the problem solving strategy questionnaire. In the pilot study,
one participant switched to the second question before the two minutes allotted for
answering the first question were up. It was determined that, for the main study,
participants would be explicitly told “do not go to the second question until told to
do so. Continue to work on the first question for the full two minutes. And once I tell
you to go to the second question, do not return to the first question; stay on the
second question.”
Adjustments to the task completion form. One of the safes listed in the Task
Completion Form was the Strongbox in the Storeroom, which was connected to the
Technical Design room. This room and safe appeared on the task completion form
for both tasks (Task 1 and Task 2). While this was an accurate description of the safe
and its location, the strongbox was inside a drawer in a file cabinet. This confused
participants in the pilot study. For the main study, the text on both Task Completion
forms was changed from “Strongbox (in storeroom)” to “Strongbox (file cabinet in
storeroom).” Since this safe appeared on both task completion forms, the text was
changed on both forms. See Figures 6 and 7 for the Task Completion forms used in
the pilot study and Figures 15 and 16 for the Task Completion forms used in the
main study.
In summary, the pilot study confirmed the utility of all the
instruments. All instruments worked as expected, none were extraneous, and no
additional instruments were needed. However, modifications were made to the Task
Completion forms (see Figures 6 and 7 for the original forms and Figures 15 and 16
for the revised forms) and the SafeCracker instructions (see the relevant sections
under “Pilot Study” and “Main Study” for the original and revised scripts). Changes
were also made to the study timeline (see Table 10 at the end of the descriptions of
the main study), because it was discovered that some processes could occur more
quickly, while one instruction (map instruction) needed additional time. The pilot
study timeline encompassed 91 minutes (see Table 9). The main study timeline
would encompass 96 minutes (see Table 10).
Procedure for the Main Study
The main study began with introducing the participants to the objective of
the experiment, describing the experiment as an examination of methods that might
help student performance when using a video game for learning, but not discussing
the issue of navigation maps. Next, participants and the researcher signed a consent
form and participants were assigned a three-digit number that had been randomly
generated prior to the study. This process took approximately three minutes.
Demographic and Self-Regulation Questionnaires.
Following the brief introduction, participants were asked to fill out the
demographic and self-regulation questionnaires (see the sections “Demographic,
Game Play, and Game Preference Questionnaire” and “Self-Regulation
Questionnaire” for descriptions of these questionnaires). Participants were given
seven minutes to fill out the two forms, but were not told there was a time limit. If
the seven-minute time limit was imminent, and it appeared a participant might not
finish in time, that participant was told he or she only had a minute or two left. This
only happened once during the study (one participant) and that participant was given
an extra minute to finish the questionnaires. Most participants finished filling out the
two questionnaires within five minutes. For the Demographic, Game Play, and Game
Preference Questionnaire, if a participant asked what a game term meant, he or she
was prompted to enter a zero for their Likert-type response.
Introduction to Using the Knowledge Mapping Software.
Following administration of the demographic and self-regulation
questionnaires, participants were introduced to the knowledge mapping software.
This process took approximately eight minutes. Before being asked to start the
knowledge mapping software, knowledge mapping was explained to the participants,
by using their southwestern university as the domain. During the explanation, after
numerous concepts were listed, such as school, university, classroom, book, teacher,
student, sorority, fraternity, study, and party, three link examples were given. For an
example of a temporal link, the phrase “Study before tests” was given, where before
was the temporal link. For an example of a causal link, the phrase “Studying
improves grades” was given, where improves was the causal link. For a simple
relational link, the phrase “Classrooms contain books” was given, where contain was
the relational link. So that participants understood the reason for creating a
knowledge map, they were told that research has provided strong evidence that a
person’s ability to create a knowledge map is directly related to that person’s
understanding of a subject matter; that is, the more accurate and the more complete
the knowledge map, the better a person understands a domain.
Participants were then prompted to start the knowledge mapping software.
Once the software was started, the interface was explained, by describing the
function of the ‘Add Concepts’ menu item and the three on-screen buttons (see
Figure 2). Participants were asked to click on a concept to add it to the screen.
Participants were then asked to add three more concepts “for a total of four
concepts.” Participants were next prompted to open the ‘Add Concept’ pull-down
menu and to take note that the four concepts they selected were grayed out in the
menu. It was explained that a concept could only be added once, since it could have
as many links going to it or coming from it as desired. Participants were also told
that every concept in the pull-down menu applied to the game SafeCracker, therefore
every concept could be used in a knowledge map.
Participants were then prompted to move the four concepts around to
form a large box. Once completed, the three ‘Mode’ buttons near the bottom of the
screen were explained, as well as the display box to the right of the buttons. It was
explained that the reason they (the participants) were able to move the concepts
around was because they were in Move mode, as indicated by the word Move in the
display box. The participants were told that once they clicked on a mode button, they
would remain in that mode until they clicked another mode button. They were then
asked to click the Link mode button and were pointed to the display box to see that it
now showed the word Link.
Participants were told to click in the middle of one concept and drag to the
middle of another concept before letting go of the mouse, in order to generate a link.
Once they did so, and the link dialog box opened, participants were told to click on
the dialog box’s pull-down menu and look at the choices given for links, such as
contains, leads to, or requires. They were asked to pick a link and not to worry about
the appropriateness of the link. They were told that, for now, the researcher was only
concerned that they learn to use the software and didn’t care whether they created an
accurate knowledge map.
After participants successfully created the first link, they were prompted to
add at least five more links. They were told that, for the purposes of instruction, one
concept needed to have only one link, either going to it or coming from it, and that
the rest of the concepts could have as many links attached to them as desired. Once
enough links were added by all participants, the participants were asked to click on
the Erase mode button, to switch to Erase mode. They were prompted to look at the
display window to see that it now displayed the word Erase.
Participants were told not to click on anything until explicitly told to do so.
They were then told that the way to erase a link was to click on the words attached to
the link (such as causes or leads to). Participants were then prompted to click on the
word of the link that was connected to the concept that had only one link. That left a
concept with not links attached to it. After that, participants were told that the way to
erase a concept was to click directly on the concept. They were then prompted to
click on the concept that no longer had any links. Next, individually, the researcher
told each participant a specific concept to click on. The concept selected was
whichever concept had the largest number of links connected to it. Upon clicking the
concept, the concept and all its links were erased. Some participants were so
surprised that they made an audible sound of shock. Participants were told that the
software had no undo button, and that if they accidentally erased a concept and all its
links, they would need to recreate them. They were told that the moment they were
done erasing, they should immediately switch to either Move or Link mode, to avoid
erasing accidentally. They were then prompted to switch to Move mode.
Because participants had begun with four concepts and had erased two of
them, each participant now had only two concepts on the screen. In almost all cases,
there was also a link between those two remaining concepts. If there wasn’t, those
participants who didn’t have a link present were prompted to add a link, by switching
to Link mode, and then prompted to switch back to Move mode. Then participants
were asked to click and drag one of the concepts and were shown that, as a concept
was moved, its links moved with it. They were reminded that all the concepts in the
software applied to SafeCracker and that a recommended strategy for creating a
knowledge map was to begin by adding all the concepts to the screen and then move
the concepts around so they could begin creating links. They were told that, as the
screen got crowded, they could move concepts around and those concepts’ links
would go with them. Participants were then asked if they understood how to use the
software. Upon receiving a positive response, they were shown how to exit the
software and how to save their file when asked to do so later. See the section
“Introduction to Using the Knowledge Mapping Software” under the Pilot Study for
details on file saving.
Introduction of the Game SafeCracker.
A script was used to introduce the game SafeCracker, to ensure equivalent
learning by all participants. The script used for the main study was the result of
revisions made to the script for the pilot study, based on observations, discussions,
and participants’ comments that occurred during and after the pilot study. The
procedures for teaching SafeCracker were the same as those used in the pilot study,
with the addition of reminders to all participants to check for clues as they played the
game and that not all clues were added to the inventory, so they might need to write
some things down on the scratch paper they were provided. Participants were also
told they could get as many pieces of scratch paper as desired. Fifteen minutes were
allotted to teaching SafeCracker. The following is the script used for the main study.
SafeCracker training script. In this study, you will be asked to accomplish a
series of tasks. The tasks will be to locate and open various safes in various rooms in
a mansion. In order to open some of these safes, you will need to find certain items.
You will be told which rooms to visit. Those rooms contain all the items needed and
all the safes you will need to open. You do not need to spend time in any other
rooms. Even though the mansion has two floors, all the rooms you will visit are on
the first floor. Do not go to the second floor.
Your goal is to open all the safes in the rooms you are given. For each room,
you will be told the room’s name, for example, the Small Showroom. Together, we
will walk through the steps needed to find and enter the mansion. Then, we will walk
through searching the first room and opening one safe. After that, you will be given
the names of several rooms and will be required to find the rooms and open the safes.
From this point on, DON’T DO ANYTHING UNLESS I EXPLICITLY TELL YOU
TO with phrases like “OKAY, DO IT NOW” or “GO AHEAD AND DO IT.” Does
everyone understand?
(GETTING INTO THE MANSION.) Once I tell you to click the “new”
button to begin a new game, you’ll see the game’s main interface screen and a
phone. You’ll be in a phone booth facing the phone. The phone will be ringing, but
you won’t hear it because the sound is turned off on your computer. As you move
your cursor to the phone’s hand piece, a hand symbol will appear on your cursor.
That means you can click on the phone piece to remove it from its hook. Anytime
you see a hand symbol it means you can click on something. Once you remove the
hand piece, a voice will begin speaking. Unfortunately, you won’t be able to hear it
because, as already mentioned, the sound is turned off on your computer. So right
now, I’m going to tell you the most important information you would have heard.
And I need you to write it down on your scratch paper as I say it to you. That
information is a four-digit code you’re going to need to use in order to enter the
mansion. That code is 1923. Write that down now; 1923.
Once you remove the hand piece, wait about five seconds, then move your
cursor back to the hook. When you see the hand symbol, click to hang up the phone,
then wait for further instructions. To repeat, you will click the phone piece, wait
about five seconds, and click the handle to hang up the phone. After that, you will do
nothing. That includes not moving the cursor. Do you have any questions? Okay, go
ahead now and click NEW to start the game.
(Once everyone’s done listening to the message). Move your cursor about
two inches to the right of the hand piece (wait), hold down the cursor, and move your
mouse left and right. Your view will pan left and right. The further you move left or
right, the faster the scene will pan. You can also move the cursor up or down to tilt
up or down a little. If you stop moving the mouse but continue to hold the mouse
button down, the scene will continue to pan. (long pause.) You can also use the left,
right, up, and down cursor arrows to pan or tilt your view.
Now let’s exit the phone booth. Rotate until you see the sidewalk. When you
let go of the cursor, you should see the hand symbol. If you don’t see the hand
symbol, rotate 180 degrees and look at the sidewalk going the other direction. Then
click once to open the phone booth door and a second time to exit the phone booth.
Only click twice. After that, wait for instructions. Okay, go ahead and do that now.
(Once everyone’s outside the phone booth). Now, I’m going to give you
some instructions and it’s very important that you do absolutely nothing until I tell
you to (beat). Across the street is a lit-up, two-story mansion. Notice how much of
your sidewalk you can currently see in front of you? When I tell you to, you’re going
to slowly rotate to the right. You’ll stop when you can see as much of the mansion as
possible, while still seeing as much of your sidewalk as you currently see. Go ahead
and do that, then wait for further instructions.
Once again, I’m going to give you a series of instructions and it’s very
important that you do not do anything until I explicitly tell you to (beat.). Do you see
those white stripes crossing the street? That’s a crosswalk. You’re going to walk
down your sidewalk, turn right and cross the street. Then you’ll turn left and walk
further down the other sidewalk. Then you’ll turn to the right and face the mansion
(beat). More specifically, you’ll take two steps down your sidewalk. You’ll turn to
the right and take two steps to cross the street. You’ll turn to the left and take two
more steps to walk further down the other sidewalk. Then you’ll turn right and face
the mansion. Okay, go ahead and get to the mansion.
(Once everyone’s facing the gate). You should be facing a gate, with the
mansion in the background. See the lock on the gate. Go ahead and click on it.
You’ll need to see the hand symbol in order to click (wait). Now, notice the three
tumblers on the lock. Go ahead and click on the tumblers, and you’ll notice that all
three can be rotated. If you rotate them to the correct pattern, the lock will open. I’ll
give you one minute to try. Go ahead now and try to open the lock.
(After one minute). If you haven’t opened the lock yet, set the three tumblers
to music symbols and the lock will open. Go ahead. (Once all locks are opened) Now
go ahead and move to the front door. You’ll need to navigate a little bit around that
fountain that’s up ahead. (Once everyone’s at the front door). Notice how the front
entrance has two doors. I want you to rotate so that you see the area just to the left of
the left door. Go ahead and start rotating. Notice that small gray box? Go ahead and
click on it (long pause). Now, using your mouse, click the four digits I had you write
down earlier, then click on the ENTER button on that keypad. Go ahead and do that
now. Remember to use the mouse to click on numbers, rather than using your
keyboard. (Once everyone’s gained access). Now click once to open the first door.
Now click again to open the second door and a third time to enter the mansion.
(Once everyone’s inside the mansion). Rotate left and right and you’ll notice
you’re in front of a reception desk. The secretary’s chair to your left is facing a
computer. Rotate and you should be able to see the back of the monitor. Go ahead
and navigate around until you’re facing that computer (long pause). Once you’re
there, click on the computer screen once and then wait. Go ahead. (Once everyone’s
at the computer). Click once more on the computer screen. Then wait about five
seconds.
(Once everyone is looking at the game Minesweeper on the screen). Now try
to move your mouse to look around. Notice how nothing happens. This is because
you’re ‘locked’ on to the computer screen. Whenever you’re ‘locked’ onto
something, you can’t do anything else until you BACK away from that object. To do
that, click the BACK button that’s on the right side of the screen. Go ahead and do
that. (Once everyone’s backed up). Now, click on the blue cup to the left of the
computer. A larger view of the cup appears in a window at the bottom of the screen.
You can grab the cup on that window and rotate it. Go ahead and try that. (After
everyone’s rotated the cup).
To the left of the blue cup is a piece of paper. Go ahead and click on it.
Notice the diagrams. These might be important for opening a safe. You might want
to write them down later, when you start playing the game. If you move your cursor
near the bottom of the paper, a down arrow cursor appears, indicating that you can
click to see more of the paper. Go ahead and do that. You can also click to the right
to see more of the paper. And click near the top to see the top portion of the paper.
(Once everyone’s seen all parts of the paper). Go ahead and click your back
button (long pause). Now rotate to the right and find the next piece of paper. Go
ahead and click on that. You can move your cursor to the bottom and click to see
more. With this paper, you can’t click to see the right portion (wait). Go ahead and
click the back button. Rotate a little more to the right and you’ll see a third piece of
paper. Go ahead and click on that paper. You can click down on the paper and to the
right to see more. This paper is filled with diagrams that might be helpful for
opening safes (beat.). Once you’ve seen the whole paper, click the back button and
then don’t do anything else. (Wait until everyone’s seen all three papers).
I don’t want you to click on anything else, but notice there are several books.
Some contain potentially useful information. If you rotate around, you’d see other
items around the desk that might be worth looking at, including more papers and
more books. Go ahead and rotate around. (wait a few seconds)
Now I’d like you to face that computer screen you went to earlier (long
pause). See that blue safe in the background? I want you to navigate to it. Go ahead
(beat). When you get there, click on the safe to lock onto it and don’t do anything
else. BE SURE NOT TO CLICK ON ANYTHING (long pause). You’ll know you’re
locked onto the safe when the button on the right side of the screen says BACK.
(Once everyone is near the safe). Once again, wait until I tell you to before
clicking on anything (beat). The safe has three red lights, three white dials, and a
handle just below the middle dial. The safe will open when the three dials are set to
the correct numbers (beat). The red lights are connected to the dials. The left dial
controls the top red light. The middle dial controls the middle red light. And the right
dial controls the bottom red light.
Go ahead and click the handle now and you’ll notice one of the lights
remains a solid green while the other two are flashing green. If a light remains green,
it means that its dial is set correctly. You can keep clicking on the handle if you want
(pause). When you click, the middle light stays solid green, so the middle dial must
be set correctly. Now I want you to adjust the dials until all the lights stay green. Go
ahead and do that now. And remember, the middle dial is correct so don’t move it.
(Once everyone’s opened the safe). Click on the piece of paper. Click it again
to make it go away. Notice it’s been added to your inventory on the bottom right
window of the screen. Click on it in the inventory to open it up again. Click on it
once more to make it go away (beat). Now, click on the coins to add those to your
inventory as well. Some items can be used to open safes; for example, keys might be
used to open a lock or coins might be inserted into a coin slot (long pause). To use an
item, you click on the item to make it active. Right now, your coins are active.
Notice your inventory text? And notice that window to the left of the inventory that
shows a 3D image of the coins? Now, notice the vertical button between them that
says “USE.” That’s the button you click in order to use something. Go ahead and
click it now (long pause). If you were at something that could use the coins, they
would have been used. But since you weren’t, you might be able to use them later
(beat). Now, back away from the safe.
(Wait until everyone’s backed away from the safe). Now click on the safe to
try to lock onto it. Notice that you can’t, and a red error message appears at the top of the
screen. That message states that the safe has already been cracked. ONCE YOU
CLOSE A SAFE, YOU CAN NEVER OPEN IT AGAIN. SO BE SURE TO
COLLECT ALL ITEMS FROM A SAFE BEFORE BACKING AWAY.
Notice that area where the red warning sign appeared? It now says
“Reception.” That’s the name of the room you’re in (beat). When it’s not displaying
an error message, that’s the room indicator window and it lets you know the name of
the room you’re in (beat). Also notice the compass at the bottom right of the
computer screen. You might find that compass helpful for determining which
direction you’re facing or moving.
(FOR MAP USERS, READ THE “SCRIPT FOR INTRODUCING MAP
TO PARTICIPANTS”)
(FOR NON-MAP USERS, READ THE “SCRIPT FOR THE CONTROL
GROUP ON HOW TO NAVIGATE THE MANSION”)
Introduction to Using the Navigation Map
Those in the navigation map group (the treatment group) were next
introduced to reading the navigation map and were given instruction on path finding
and path planning. They also received instructions on strategies for playing the
game. To ensure equivalent learning by all those in the treatment group, a script was
utilized (see below). Those in the control group were given simple guidelines for
navigation. See the next section entitled “Script for the Control Group on How to
Navigate the Mansion” for information on the script given to the control group.
The script for the main study on how to use the navigation map was revised
after the pilot study, based on observations by the researcher and comments from the
participants. To support both the original and revised versions of the script, a special
version of the navigation map, a training map, was created (see Figure 14),
displaying a portion of the floor plan, along with shaded portions, labels, and arrows,
as aids to the script.
The training map was handed to each participant on a standard sheet of white
paper and contained the title “How to read the map.” As participants viewed the
training map, the script was read. In addition to training on map reading, path
planning, and path finding, the script included some strategies for playing the game.
The script also contained two parenthetical cues: beat and long pause.
The term beat is a common term in script writing and refers to a “momentary pause
in dialog or action” (Armer, 1988, p. 260). The term long pause does not have a
history in script writing and is used here to indicate a pause of at least two seconds.
The following script was read to the navigation map group.
Training map script. This is a map of a portion of the first floor of the
mansion. You will use a map similar to this to help navigate to the various rooms.
Currently, you’re in the reception room, the large room near the middle of the
bottom portion of the map. The map shows some of the rooms on the first floor with
their related names. These will match the names of the rooms that contain the safes
you will be asked to open and the items that will help you to open the safes. The
names also match the names that appear in the name indicator on your interface,
which you have already been shown. You will not need to visit any other rooms,
unless they are along a path you take in order to get to a required room. In addition to
the room names, the map also shows the location of the doors in each room. If you
need to, you are allowed to write on this map.
This map shows a portion of the bottom floor and includes text labels
describing the most important map features. On the left side are four labels. The top
label on the left contains the words “room name” and points to the name of the room
entitled “Big Showroom.” Take a look and you’ll see that every room has a name.
Those areas that do not have names are either closets or bathrooms. For each of your
two tasks, you will be told the names of the rooms you must visit. As shown to you
earlier, there is a room name indicator on the interface. You will use this indicator to
verify which room you are in.
The middle label on the left contains the word “stairs” and points to a block
of black and gray stripes. That pattern indicates stairs. Notice that there’s another set
of stairs just to the right and a small set of stairs connecting the two.
On the left side, near the bottom, is a label with the word “door.” Gaps or
open spaces between rooms indicate doors. Every room has at least one door and
most have several doors.
On the left side, at the bottom, is a label with the words “Main Entrance” and
an arrow pointing to the door you came through to enter the mansion. This is the
only door on the map that is not indicated by an opening or gap. Once you began
moving around the reception room, that door locked and cannot be reopened. That’s
why it is not indicated by an opening.
In the middle of the map is a label with the word “toilet.” The arrow points to
a small circle, which is the symbol for a toilet. There are other bathrooms in the
mansion that have toilets, but for some reason, the people who created this map
chose to only show this toilet.
On the right side of the map are three labels. The one on the far right side of
the map and containing the words “points north” points to a symbol with a black
circle, a spike pointing upward, and two spikes pointing downward. This is a typical
map indicator that shows the direction for “North.” The spike that points upward is
pointing “north.”
On the right side, just below and to the left of the “points north” label is a
label with the word “closet.” As mentioned before, closets and bathrooms don’t
have room names. The one exception is the room with the toilet. That room’s name
is W.C., which stands for “water closet.” Water closet is a term used in England for
bathroom.
The last label is at the bottom on the right side of the mansion and contains
the word “door.” The three arrows emanating from that label point to three more
examples of doors.
The last part of the map to show you is the darkened rooms. In the map
you’re looking at, there are three darkened rooms. They are “reception,” the “small
showroom,” and the “technical design” room. Just above the technical design room
is a dark label with the words “your task takes place in the shaded rooms.” As
already mentioned, you will be given two tasks. For each task, you will be given a
map. Each map will have a different set of darkened rooms, indicating the rooms you
must visit in order to complete each task. While you are allowed to visit other rooms,
your time to complete each task is limited, so it is best to not waste time visiting
unnecessary rooms.
As a first step for each task, it is recommended that you examine the map to
determine the shortest or most efficient paths for getting from room to room and
for returning to those rooms. As an example, in the current map, since you’re already
in the reception room, if you wanted to go to the “Small Showroom,” you’d use the
right door of the reception room to enter the “Small Showroom.” If you wanted to go
from the small showroom to the technical design room, there are no doors leading
directly from one room to the other; there are no openings. Instead, you’d need to go
first to your right and enter the “Designer’s room.” Then you’d move up the left side of
that room and through a door that leads into the “Technical Design room.” To return
to the “Small Showroom,” you’d simply reverse your path.
Once you have a plan for how you will navigate to and from rooms, then you
would begin moving around, collecting items and attempting to open safes (Pause).
Remember, it is very important that you look at all the items in rooms, to find clues
that might help open safes (beat). Not everything gets added to your inventory. You
may need to write things down (long pause). Do you have any questions?
Script for the Control Group on How to Navigate the Mansion
While the navigation map group (the treatment group) was given not only
detailed instruction on how to read the navigation map but also instruction on how
to plan and find paths, the control group was given only limited instruction on
navigation. As with the navigation map group’s script, the script for
the control group included strategies for playing the game. The following script was
read to the control group.
For each of your two tasks, you will need to navigate to three rooms and
return to the rooms by retracing your path. For each task, you will be told the name
of the rooms you need to visit. As just shown, the interface includes a window that
displays the name of the room you’re in. Be sure to keep track of your room location.
Because you will need to find your way and then find your way back, use whatever
method you think will help to keep track of where you’ve been and the path you’ve
taken (beat). Note: You will need to go through other non-task related rooms to get
to your rooms. And remember, it is very important to look at all the items in rooms,
to find clues that might help open safes (beat). Not everything gets added to your
inventory; you may need to write things down (long pause). Do you have any
questions?
While eight extra minutes were needed for training the navigation map group
on use of the navigation map, only one or two minutes were needed for providing
navigation guidance to the control group. Therefore, the control group’s total
participation time was approximately 6 minutes less than the navigation map group’s
total participation time (i.e., 90 minutes versus 96 minutes, respectively).
First Game
As with the pilot study, after participants were given instruction on
navigating the environment, they played their first game. This phase of the study
began with handing participants their first Task Completion Form (Figure 15). For a
complete listing of instructions given for the task completion form, see the “First
Game” section under the topic “Pilot Study.”
Figure 15: Task Completion Form 1 for Main Study
Those in the treatment group were then handed their navigation map for the
first game (see Figure 8). As with the pilot study, they were told that the shaded
rooms for the first game were the same three rooms that were shaded on the learning
map. Both groups were reminded not to forget to look at objects in the rooms,
including the room they were currently in, the Reception Room. Finally, participants
were told they would have 15 minutes to find and open the safes and were told to
begin. After 15 minutes, participants were prompted to save their games and exit the
software using the same procedures as used in the pilot study.
Creating the Knowledge Map (Occasion 1)
Participants were next prompted to start the Knowledge Mapping software.
After asking whether they had any questions, participants were told they would have
seven minutes to create a knowledge map and told to begin. After seven minutes,
participants were prompted to save their files and exit the software using the same
procedures as in the pilot study (see the section “Creating the Knowledge Map
(Occasion 1)” under the topic “Pilot Study”).
Problem Solving Strategy Questionnaire (Occasion 1)
As with the pilot study, participants were given the first problem solving
strategy questionnaire for the first game and told how to fill it out. They were told to
stay on the first question until told to go to the second question and that once they
were on the second question they were to remain there and not go back to the first
question. Participants were given four minutes for the questionnaire, at two minutes
per question.
Second Game
Procedures for the second game were the same as for the second
game of the pilot study. Participants were also handed the Task Completion form for the
second task (Figure 16). This form had been modified from the Task Completion
form used in the pilot study (see Figure 7). The wording for the Strongbox safe in the
Technical Design room was changed from “Strongbox (in storeroom)” to “Strongbox
(file cabinet in storeroom).” Those in the navigation map group were handed the
navigation map for the second game (see Figure 9), which included the darkened
rooms of the game. Those in the navigation map group were reminded that the
darkened rooms represented the rooms that contained the safes they would need to
locate and open.
Figure 16: Task Completion Form 2 for Main Study
All participants were told that they would begin the second game in the
Technical Design room, which was one of the three rooms included in the first game.
Participants were told they would need to open the safes in the Technical Design
room, even if they had already opened those safes in the first game. They were also
told that the safes from the other two rooms from the first game had already been
opened for them and the contents of those safes were in their inventory. Participants
were reminded that even though the safes from the other two rooms had been
opened, they might still want to revisit those rooms to look for clues.
Participants were told they would have 15 minutes for this game and were
prompted to begin. After 15 minutes were up, participants were asked to save their
game and exit the software using the same procedures given in the pilot study (see
the section “Second Game” under the topic “Pilot Study”).
Knowledge Map and Problem Solving Strategy Questionnaires (Occasion 2)
Participants were next prompted to restart the Knowledge Mapping software
and were given seven minutes to create their second knowledge map. After seven
minutes, participants were asked to save their files and exit the software using the
same procedures used in the pilot study (see the section “Knowledge Map and
Problem Solving Strategy Questionnaires (Occasion 2)” under the topic “Pilot
Study”). Last, participants were given their second Problem Solving Strategies
Questionnaire, which was identical to the first Problem Solving Strategies
Questionnaire and prompted to respond one question at a time, as with the prior
questionnaire. They were given a total of four minutes for the questionnaire; two
minutes per question.
Debriefing and Extra Play Time
Upon completion of the second Problem Solving Strategies Questionnaire,
participants were told the study was over. They were asked what they thought of the
game and if it was similar to games they’d played or games they liked. If
appropriate, participants were asked what types of games, and even specific games,
they liked. They were also asked if they had any questions. Finally, participants were
told they could continue playing the game for up to 30 minutes if they were
interested. Debriefing took approximately three minutes. The offer of extra play time
was for collecting data on continuing motivation. If a participant chose to continue
playing, he or she was coded as exhibiting continuing motivation; regardless of the
amount of time he or she continued to play. If a participant did not choose to
continue playing, he or she was coded as not exhibiting continuing motivation, even
if he or she had indicated a desire to continue playing.
Timing Chart for Main Study
Table 10 lists the activities comprising the main study and the time
allocated to each, ending with the total time (96 minutes) and the optional 30
minutes of playing time. The pilot study activity “Introduction to SafeCracker
and Map Reading,” which encompassed 15 minutes, was divided into two activities
for the main study. The first, “Introduction to SafeCracker,” encompassed 15
minutes. The second activity, immediately following the
“Introduction to SafeCracker,” was “Introduction to map reading for the treatment
group” and encompassed an additional 8 minutes. While those in the control group
did not receive map reading instructions, they did receive some navigational
instruction during this time, which required approximately two minutes. Therefore,
the amount of time required for the control group to complete the study was
approximately 6 minutes less than the time required for the treatment group to
complete the study: 90 minutes versus 96 minutes.
Table 10: Time Chart of the Main Study
Activity                                                    Time Allocation
Introduction and study paperwork                            3 minutes
Self-regulation and demographic questionnaires              7 minutes
Introduction to knowledge mapping software                  8 minutes
Introduction to SafeCracker                                 15 minutes
Introduction to map reading for the treatment group         8 minutes
First game (3 rooms) plus task completion form              15 minutes
Knowledge map creation (occasion 1)                         7 minutes
Problem solving strategy retention and transfer
  questionnaire (occasion 1)                                4 minutes
Second game (3 rooms) plus task completion form             15 minutes
Knowledge map creation (occasion 2)                         7 minutes
Problem solving strategy retention and transfer
  questionnaire (occasion 2)                                4 minutes
Debriefing                                                  3 minutes
TOTAL                                                       96 minutes
Optional additional playing time                            Up to 30 minutes
CHAPTER 4
ANALYSIS AND RESULTS
Descriptive and inferential statistical results based on data collected from the
main study are presented. The SPSS 13.0 for Windows program (2002) was used to
analyze the data. The data were analyzed to assess the study’s hypotheses.
Research Hypotheses
Hypothesis 1: Participants who use a navigation map (the treatment group)
will exhibit significantly greater content understanding than participants who do not
use a navigation map (the control group).
Hypothesis 2: Participants who use a navigation map (the treatment group)
will exhibit greater problem solving strategy retention than participants who do not
use a navigation map (the control group).
Hypothesis 3: Participants who use a navigation map (the treatment group)
will exhibit greater problem solving strategy transfer than participants who do not
use a navigation map (the control group).
Hypothesis 4: There will be no significant difference in self-regulation
between the navigation map group (the treatment group) and the control group.
However, it is expected that higher levels of self-regulation will be associated with
better performance.
Hypothesis 5: Participants who use a navigation map (the treatment group)
will exhibit a greater amount of continuing motivation, as indicated by continued
optional game play, than participants who do not use a navigation map (the control
group).
Content Understanding Measurement
Content understanding was assessed through construction of knowledge
maps, created on two occasions: occasion 1 and occasion 2. Occasion 1 was after the
first game, and occasion 2 was after the second game. Table 11 shows the mean
scores and standard deviations for knowledge map creation for the control group,
navigation map group (the treatment group), and both groups combined (total)
for two occasions (occasion 1 and occasion 2) and the amount of improvement for
each group from occasion 1 to occasion 2. Improvement is defined as a group’s
occasion 2 score minus the group’s occasion 1 score.
Table 11
Descriptive Statistics of Knowledge Map Occasion 1 and Occasion 2 Scores for the
Control Group, Navigation Map Group, and Both Groups Combined
Group                        Occasion       Mean    SD
Control (n = 31)             Occasion 1     6.57    2.85
                             Occasion 2     7.55    3.37
                             Improvement    1.00    1.17
Navigation Map (n = 33)      Occasion 1     6.51    2.30
                             Occasion 2     7.85    3.06
                             Improvement    1.34    2.75
Total (n = 64)               Occasion 1     6.54    2.56
                             Occasion 2     7.70    3.19
                             Improvement    1.16    2.68
For the control group, the mean scores for knowledge map construction
occasion 1 and occasion 2 were 6.57 and 7.55, respectively. For the navigation map
group, the mean scores for knowledge map construction occasion 1 and occasion 2
were 6.51 and 7.85, respectively. For both groups combined, the mean scores for
knowledge map construction occasion 1 and occasion 2 were 6.54 and 7.70,
respectively. The mean scores for improvement in knowledge map construction from
occasion 1 to occasion 2 for the control group, the navigation map group, and both
groups combined were 1.00, 1.34, and 1.16, respectively.
There was no significant difference in the improvement of content
understanding between the control group and the navigation map group, t(62) = .54,
p = .59, Cohen’s d effect size index = .17. Cohen (1988) defined d as the difference
between the means divided by the pooled standard deviation of the groups, and defined
d = .2 as small, d = .5 as medium, and d = .8 as large effect sizes. The effect size of
.17 for the differences in improvement in knowledge mapping scores between the
control and navigation groups was below the smallest effect size, indicating a
negligible effect.
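As an illustration, the effect size can be approximated from the summary statistics in Table 11. The sketch below uses the pooled-standard-deviation form of Cohen's d; because the published means and standard deviations are rounded, it yields approximately .16 rather than the reported .17.

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d: difference between means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# Improvement scores from Table 11: control (M = 1.00, SD = 1.17, n = 31)
# versus navigation map group (M = 1.34, SD = 2.75, n = 33)
d = cohens_d(1.00, 1.17, 31, 1.34, 2.75, 33)
print(round(d, 2))  # about .16, below Cohen's "small" benchmark of .2
```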
Another way to look at the knowledge map data is to calculate the percentage
of the mean scores for knowledge mapping by the control group, the treatment
group, and both groups combined compared to the mean scores of the three expert
maps. The scores for the expert maps were 90, 99, and 54. The mean score for the
expert maps was 81. The percentage of a group’s mean score is calculated by
dividing the group’s mean score by the experts’ mean score. For example, the mean
score for the control group for occasion 1 was 6.57 (see Table 11). Dividing that
score by the expert mean score (6.57 divided by 81) yielded a mean percentage of
8.11% for the control group for occasion 1. That is, the control group’s mean score
represents 8.11% of the mean score achieved by the experts. As seen in Table 12,
mean percentages for the control group were 8.11% for the occasion 1 and 9.32% for
occasion 2. Mean percentages for the navigation map group were 8.04% for occasion
1 and 9.69% for occasion 2. Mean percentages for both groups combined were
8.07% for occasion 1 and 9.51% for occasion 2.
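The percentage calculation described above can be sketched as follows, using the three expert scores given in the text; the function name is illustrative only.

```python
# Expert knowledge map scores reported in the text
expert_scores = [90, 99, 54]
expert_mean = sum(expert_scores) / len(expert_scores)  # 81.0

def pct_of_expert(group_mean):
    """A group's mean knowledge map score as a percentage of the expert mean."""
    return round(100 * group_mean / expert_mean, 2)

print(pct_of_expert(6.57))  # control group, occasion 1: 8.11 (%)
print(pct_of_expert(7.55))  # control group, occasion 2: 9.32 (%)
```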
Table 12
Descriptive Statistics of the Percentage of Knowledge Map Occasion 1 and Occasion
2 Scores for the Control Group, Navigation Map Group, and Both Groups Combined
Group                        Occasion       Mean     SD
Control (n = 31)             Occasion 1     8.11%    3.52%
                             Occasion 2     9.32%    4.16%
                             Improvement    1.23%    1.44%
Navigation Map (n = 33)      Occasion 1     8.04%    2.84%
                             Occasion 2     9.69%    3.78%
                             Improvement    1.65%    3.40%
Total (n = 64)               Occasion 1     8.07%    3.16%
                             Occasion 2     9.51%    3.94%
                             Improvement    1.43%    3.31%
Improvement percentages for the control group, the navigation map group,
and both groups combined were 1.23%, 1.65%, and 1.43%, respectively. Because
these percentages are a linear rescaling of the raw scores, the inferential result
is unchanged: there was no significant difference in the improvement of content
understanding between the control group and the navigation map group, t(62) = .54,
p = .59, Cohen’s d = .17, again a negligible effect.
The occasion 1 and occasion 2 knowledge map scores for the control group
were significantly correlated, r = .65, p < .01, as were those of the navigation map
group, r = .50, p < .01. A t-test also confirmed that no significant difference was
found between the occasion 1 scores of the two groups, t(62) = -.10, p = .92. A t-test
also confirmed there was no significant difference between the occasion 2 scores of
the two groups, t(62) = .37, p = .71.
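The independent-samples t tests above can be approximated from the summary statistics in Table 11. This is a sketch of the pooled-variance formula; the sign depends on the order of subtraction, and the result differs slightly from the reported t(62) = -.10 because the published means and standard deviations are rounded.

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic from summary statistics (pooled variance)."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (m1 - m2) / se, df

# Occasion 1 scores: navigation map group vs. control group (Table 11)
t, df = pooled_t(6.51, 2.30, 33, 6.57, 2.85, 31)
print(df, round(t, 2))  # df = 62 and a t near -.10
```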
A mixed-groups, repeated measures factorial ANOVA was also performed to
examine the effects of the use or non-use of a navigation map on content
understanding as exhibited through knowledge map construction on occasion 1 and
occasion 2. Table 13 shows the means for the conditions of the design. There was no
interaction between treatment and occasion, F(1,62) = .29, p = .59. There was also no
main effect for group, F(1,62) = .03, p = .86. There was a main effect of occasion,
F(1,62) = 11.84, p = .001, with higher knowledge mapping scores in occasion 2 than
in occasion 1, for both the control group and the navigation map group.
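As a consistency check on the ANOVA just reported: with only two occasions, the treatment-by-occasion interaction tests the same hypothesis as the independent-samples t-test on the improvement scores, so the interaction F should equal the square of that t. A quick Python check using the rounded statistics reported above:

```python
# With a two-level within-subjects factor, the interaction F(1, df) equals
# the square of the independent-samples t(df) computed on the
# improvement (occasion 2 minus occasion 1) scores.
t_improvement = 0.54   # t(62) reported for the improvement comparison
f_interaction = 0.29   # F(1,62) reported for the interaction

assert round(t_improvement ** 2, 2) == f_interaction
print("F = t^2 check passed")
```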
Table 13
Knowledge Mapping Means by Group by Occasion

Group                 KM Occasion 1   KM Occasion 2   Total
Control (n = 31)      6.57 (2.85)     7.55 (3.37)     7.06 (3.11)
Treatment (n = 33)    6.51 (2.30)     7.85 (3.06)     7.18 (2.68)
Total                 6.54 (2.58)     7.70 (3.22)

Interrater Reliability of the Problem Solving Strategy Measure
Two researchers independently assigned an expert idea unit to each of the
1520 participant problem solving strategy retention and transfer responses, assigning
expert retention idea units to the participants’ retention responses and expert transfer
idea units to the participants’ transfer responses. The two experts’ lists were then
compared. The two experts agreed on 1275 of the 1520 idea units, an agreement rate
of 83.9% (1275/1520 = .839).
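Percentage agreement as computed here is simply the proportion of responses to which both raters assigned the same idea unit. A sketch (the short code lists in the example are hypothetical stand-ins, not the raters’ actual codings):

```python
def percent_agreement(codes_a, codes_b):
    """Proportion of responses assigned the same idea unit by both raters."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Reported overall: 1275 matches out of 1520 responses.
print(round(1275 / 1520, 3))  # 0.839
```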
The percentage of interrater agreement was then analyzed by problem solving
strategy retention responses and by problem solving strategy transfer responses. Of
the 1520 participant responses, 1021 were problem solving strategy retention
responses and 499 were problem solving strategy transfer responses. Table 14 shows
the number of responses for each of the 28 problem solving strategy retention idea
units by each of the two raters (see Table 7 for a description of the retention idea
units). Note that Table 14 indicates there were 29 idea units, plus an idea unit
numbered as zero. The 29th idea unit was an error entered by one of the experts
during coding. A zero indicates a participant response that did not fit into any idea
unit. There were 161 problem solving strategy retention responses coded as zero. For
example, on a number of occasions, participants responded with a single word, such
as clue or hint. Because an idea unit required a verb and a noun, single-word
responses could rarely be matched with an idea unit. Those responses were coded as
zero. A few single word responses did receive matches. For example, map was
interpreted as use map and compass was interpreted as use compass, since that is the
only logical interpretation of the word within the context of SafeCracker. But since a
word like clue might mean use clue, find clue, interpret clue, search for clue, etc.,
the experts were unable to determine an appropriate match.
Table 15 shows the number of responses for each of the 21 problem solving
strategy transfer idea units by each of the two raters (see Table 8 for a description of
the transfer idea units). Note that no participant response matched expert idea unit
number 21; therefore, that number does not appear in the table. Similar to the
problem solving strategy retention responses, 134 of the participant responses to the
problem solving strategy transfer question did not match an expert idea unit and were
given a value of zero. For example, on a number of occasions, participants simply
responded that a particular game feature was difficult, but did not indicate whether it
needed modification. Those responses were coded as zero.
Table 14
Matrix of the Number of Participant Responses Assigned to Each Idea Unit in the
Problem Solving Retention Measure Based on the Two Raters’ Scoring

[The full matrix, with Expert 1’s assignments as columns and Expert 2’s
assignments as rows over idea units 0 through 29, could not be recovered from the
source. Its diagonal (agreement) entries sum to 823, and its grand total is 1021
responses.]
Table 15
Matrix of the Number of Participant Responses Assigned to Each Idea Unit in the
Problem Solving Transfer Measure Based on the Two Raters’ Scoring

[The full matrix, with Expert 1’s assignments as columns and Expert 2’s
assignments as rows over idea units 0 through 20, could not be recovered from the
source. Its column totals are 134, 1, 54, 18, 2, 2, 43, 13, 5, 15, 3, 101, 40, 30, 9, 6, 4,
12, 4, and 3, for a grand total of 499 responses.]
As can be seen in Table 14, the number of idea units agreed upon for the
problem solving strategy retention responses was 823. Agreement is represented in
Table 14 by the diagonal listing of numbers (i.e., 161, 12, 37, 178 … 1, 23, 2, and
10). With a total of 1021 problem solving strategy retention responses, the two raters
agreed on 80.6% of the responses (823/1021 = .806). As can be seen in Table 15, the
number of idea units agreed upon for the problem solving strategy transfer responses
was 452. Agreement is represented in Table 15 by the diagonal listing of numbers
(i.e., 134, 1, 54, 18, 2 … 4, 12, 4, and 3). With a total of 499 problem solving
strategy transfer responses, the two raters agreed on 90.6% of the responses (452/499
= .906). All 245 disagreements for the problem solving strategy retention and
transfer responses combined were resolved through discussion by the two raters,
ending the rating process with 100% agreement on all retention and transfer
responses. The data used for reporting the results of the problem solving strategy
measure represent the data after all rater disagreements were resolved; that is, after
100% agreement had been reached.
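In matrices like Tables 14 and 15, agreements sit on the diagonal (both raters chose the same idea unit), so overall agreement is the diagonal sum divided by the grand total. A sketch with a small hypothetical 3 × 3 matrix:

```python
def agreement_from_matrix(matrix):
    """Agreement rate from a square rater-by-rater matrix:
    sum of diagonal entries over the sum of all entries."""
    diagonal = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return diagonal / total

example = [
    [5, 1, 0],   # rows: one rater's codes
    [0, 7, 2],   # columns: the other rater's codes
    [1, 0, 4],
]
print(round(agreement_from_matrix(example), 3))  # 0.8
```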
Problem Solving Strategy Measure
Problem solving strategy retention and transfer were assessed through a
problem solving strategy questionnaire which contained two questions: a retention
question and a transfer question (see Table 6). The questionnaire was administered
on two occasions: occasion 1, after the first game, and occasion 2, after the second
game.
Retention question. As shown in Table 16, for the control group, the mean
scores for the problem solving strategy retention occasion 1 and occasion 2
responses were 6.19 and 5.71, respectively. For the navigation map group, the mean
scores for the problem solving strategy retention occasion 1 and occasion 2
responses were 6.06 and 5.55, respectively. For both groups combined, the mean
scores for the problem solving strategy retention occasion 1 and occasion 2
responses were 6.13 and 5.63, respectively. Mean scores for improvement from
occasion 1 to occasion 2 for the control group, the navigation map group, and both
groups combined were -.48, -.52, and -.50, respectively.
Table 16
Descriptive Statistics of Problem Solving Strategy Retention Occasion 1 and
Occasion 2 Scores for the Control Group, Navigation Map Group, and Both Groups
Combined

Group                      Mean    SD
Control (n = 31)
  Occasion 1               6.19    3.48
  Occasion 2               5.71    2.76
  Improvement              -.48    2.87
Navigation Map (n = 33)
  Occasion 1               6.06    2.30
  Occasion 2               5.55    3.15
  Improvement              -.52    2.25
Total (n = 64)
  Occasion 1               6.13    2.91
  Occasion 2               5.63    2.95
  Improvement              -.50    2.55
There was no significant difference in the improvement of problem solving
strategy retention between the control group and the navigation map group, t(62) =
-.05, p = .96, Cohen’s d effect size index = .05. The effect size of .05 for the
difference in improvement between the control and navigation map groups indicated
a negligible effect.
The occasion 1 and occasion 2 problem solving strategy retention scores for
the control group were significantly correlated, r = .60, p < .01, as were those of the
navigation map group, r = .70, p < .01. A t-test also confirmed that no significant
difference was found between the occasion 1 scores of the two groups, t(62) = -.18, p
= .86. A t-test also confirmed there was no significant difference between the
occasion 2 scores of the two groups, t(62) = -.22, p = .83.
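The group comparisons in this chapter are independent-samples t-tests with 62 degrees of freedom (31 + 33 - 2). A minimal pooled-variance implementation (the score lists in the example are hypothetical placeholders, not study data):

```python
from math import sqrt

def independent_t(xs, ys):
    """Independent-samples t statistic with pooled variance."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled_var = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / sqrt(pooled_var * (1 / nx + 1 / ny))

# Identical groups give t = 0; degrees of freedom would be
# len(xs) + len(ys) - 2, i.e., 62 for the 31- and 33-participant groups.
print(independent_t([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```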
Another way to look at the problem solving strategy retention data is to
calculate the percentage of the mean scores for problem solving strategy retention by
the control group, the treatment group, and both groups combined compared to the
number of expert idea units created for the problem solving strategy retention
question. The experts defined 28 idea units related to the problem solving strategy
retention question (see Table 6). The percentage for a group’s mean score is
calculated by dividing the group’s mean score by 28—the number of expert problem
solving strategy retention idea units. For example, the mean score for the control
group for occasion 1 was 6.19 (see Table 16). Dividing that score by the number of
expert idea units (6.19 divided by 28) yielded a mean percentage of 22.11% for the
control group for occasion 1. That is, the control group’s mean number of problem
solving strategy retention idea units generated is equal to 22.11% of the total number
of expert problem solving strategy retention idea units.
As seen in Table 17, mean percentages for the control group were 22.11% for
occasion 1 and 20.39% for occasion 2. Mean percentages for the navigation map
group were 21.64% for occasion 1 and 19.82% for occasion 2. Mean percentages for
both groups combined were 21.89% for occasion 1 and 20.11% for occasion 2. There
was no significant difference in the improvement of problem solving strategy
retention between the control group and the navigation map group, t(62) = -.05, p =
.96, Cohen’s d effect size index = .05. The effect size of .05 for the differences in
improvement between the control and navigation groups indicated a negligible
effect.
Table 17
Descriptive Statistics of the Percentage of Problem Solving Strategy Retention
Occasion 1 and Occasion 2 Scores for the Control Group, Navigation Map Group,
and Both Groups Combined

Group                      Mean      SD
Control (n = 31)
  Occasion 1               22.11%    12.43%
  Occasion 2               20.39%    9.86%
  Improvement              -1.71%    10.25%
Navigation Map (n = 33)
  Occasion 1               21.64%    8.21%
  Occasion 2               19.82%    11.25%
  Improvement              -1.86%    8.04%
Total (n = 64)
  Occasion 1               21.89%    10.39%
  Occasion 2               20.11%    10.54%
  Improvement              -1.79%    9.11%
The occasion 1 and occasion 2 problem solving strategy retention scores for
the control group were significantly correlated, r = .60, p < .01, as were those of the
navigation map group, r = .70, p < .01. A t-test also confirmed that no significant
difference was found between the occasion 1 scores of the two groups, t(62) = -.18, p
= .86. A t-test also confirmed there was no significant difference between the
occasion 2 scores of the two groups, t(62) = -.22, p = .83.
A mixed-groups, repeated measures factorial ANOVA was performed to
examine the effects of the use or non-use of a navigation map on problem solving
strategy retention as measured through a problem solving strategy retention test on
occasion 1 and occasion 2. Table 18 shows the means for the conditions of the
design. There was no interaction between treatment and occasion, F(1,62) = .00, p =
.96. There was also no main effect for group, F(1,62) = .71, p = .82. There was no
main effect of occasion, F(1,62) = 2.41, p = .13, indicating no difference in problem
solving strategy retention scores between occasion 1 and occasion 2 for either the
control group or the navigation map group.
Table 18
Means for Problem Solving Strategy Retention by Group by Occasion

Group                 PSS Retention   PSS Retention   Total
                      Occasion 1      Occasion 2
Control (n = 31)      6.19 (3.48)     5.71 (2.76)     5.95 (3.12)
Treatment (n = 33)    6.06 (2.30)     5.55 (3.15)     5.81 (2.73)
Total                 6.13 (2.89)     5.63 (2.96)
Transfer question. As shown in Table 19, the mean scores for the problem
solving strategy transfer occasion 1 and occasion 2 responses for the control group
were 2.90 and 2.26, respectively. The mean scores for the problem solving strategy
transfer occasion 1 and occasion 2 responses for the navigation map group were 3.00
and 2.36, respectively. The mean scores for the problem solving strategy transfer
occasion 1 and occasion 2 responses for both groups combined were 2.95 and
2.31, respectively. Mean scores for improvement from occasion 1 to occasion 2 for
the control group, the navigation map group, and both groups combined were -.65,
-.64, and -.64, respectively.
Table 19
Descriptive Statistics of Problem Solving Strategy Transfer Occasion 1 and Occasion
2 Scores for the Control Group, Navigation Map Group, and Both Groups Combined

Group                      Mean    SD
Control (n = 31)
  Occasion 1               2.90    1.70
  Occasion 2               2.26    1.73
  Improvement              -.65    1.60
Navigation Map (n = 33)
  Occasion 1               3.00    2.02
  Occasion 2               2.36    1.93
  Improvement              -.64    1.82
Total (n = 64)
  Occasion 1               2.95    1.86
  Occasion 2               2.31    1.83
  Improvement              -.64    1.70
There was no significant difference in the improvement of problem solving
strategy transfer between the control group and the navigation map group, t(62) =
-.02, p = .98, Cohen’s d effect size index = .01. The effect size of .01 for the
difference in improvement between the control and navigation map groups indicated
a negligible effect. The occasion 1 and occasion 2 problem solving strategy transfer
scores for the control group were significantly correlated, r = .56, p < .01, as were
those of the navigation map group, r = .58, p < .01. A t-test also confirmed that no
significant difference was found between the occasion 1 scores of the two groups,
t(62) = -.21, p = .84. A t-test also confirmed there was no significant difference
between the occasion 2 scores of the two groups, t(62) = -.23, p = .82.
Another way to look at the problem solving strategy transfer data is to
calculate the percentage of the mean scores for problem solving strategy transfer by
the control group, the treatment group, and both groups combined compared to the
number of expert idea units created for the problem solving strategy transfer
question. The experts defined 21 idea units related to the problem solving strategy
transfer question (see Table 6). The percentage for a group’s mean score is
calculated by dividing the group’s mean score by 21—the number of expert problem
solving strategy transfer idea units. For example, the mean score for the control
group for occasion 1 was 2.90 (see Table 19). Dividing that score by the number of
expert idea units (2.90 divided by 21) yielded a mean percentage of 13.81% for the
control group for occasion 1. That is, the control group’s mean number of problem
solving strategy transfer idea units generated is equal to 13.81%
of the total number of expert problem solving strategy transfer idea units.
As seen in Table 20, mean percentages for the control group were 13.81% for
occasion 1 and 10.76% for occasion 2. Mean percentages for the navigation map
group were 14.29% for occasion 1 and 11.24% for occasion 2. Mean percentages for
both groups combined were 14.05% for occasion 1 and 11.00% for occasion 2.
Table 20
Descriptive Statistics of the Percentage of Problem Solving Strategy Transfer
Occasion 1 and Occasion 2 Scores for the Control Group, Navigation Map Group,
and Both Groups Combined

Group                      Mean      SD
Control (n = 31)
  Occasion 1               13.81%    8.10%
  Occasion 2               10.76%    8.24%
  Improvement              -3.05%    7.62%
Navigation Map (n = 33)
  Occasion 1               14.29%    9.62%
  Occasion 2               11.24%    9.19%
  Improvement              -3.05%    8.67%
Total (n = 64)
  Occasion 1               14.05%    8.86%
  Occasion 2               11.00%    8.71%
  Improvement              -3.05%    8.10%
There was no significant difference in the improvement of problem solving
strategy transfer between the control group and the navigation map group, t(62) =
-.02, p = .98, Cohen’s d effect size index = .01. The effect size of .01 for the
difference in improvement between the control and navigation map groups indicated
a negligible effect. The occasion 1 and occasion 2 problem solving strategy transfer
scores for the control group were significantly correlated, r = .56, p < .01, as were
those of the navigation map group, r = .58, p < .01. A t-test also confirmed that no
significant difference was found between the occasion 1 scores of the two groups,
t(62) = -.21, p = .84. A t-test also confirmed there was no significant difference
between the occasion 2 scores of the two groups, t(62) = -.23, p = .82.
A mixed-groups, repeated measures factorial ANOVA was performed to
examine the effects of the use or non-use of a navigation map on problem solving
strategy transfer as measured through the problem solving strategy transfer
test on occasion 1 and occasion 2. Table 21 shows the means for the conditions of
the design. There was no interaction between treatment and occasion, F(1,62) = .21, p
= .65. There was also no main effect for group, F(1,62) = .10, p = .75. There was no
main effect of occasion, F(1,62) = 3.33, p = .07, indicating no difference in problem
solving strategy transfer scores between occasion 1 and occasion 2 for either the
control group or the navigation map group.
Table 21
Means for Problem Solving Strategy Transfer by Group by Occasion

Group                 PSS Transfer    PSS Transfer    Total
                      Occasion 1      Occasion 2
Control (n = 31)      2.90 (1.70)     2.26 (1.73)     2.58 (1.72)
Treatment (n = 33)    3.00 (2.02)     2.36 (1.93)     2.68 (1.98)
Total                 2.95 (1.71)     2.31 (1.83)
Trait Self-Regulation Measure
Participants’ trait self-regulation was assessed through a self-report instrument
developed by O’Neil and Herl (1998). The trait self-regulation questionnaire (see
Appendix A) contained 32 questions, eight each to evaluate the four self-regulation
traits represented in the O’Neil Problem Solving model (O’Neil, 1999): planning,
self-monitoring, mental effort, and self-efficacy.
Table 22 shows the mean scores for the four self-regulation factors for the
control group and the navigation map group. Mean scores for the four factors for the
control group were 24.87, 24.03, 23.74, and 25.55 for planning, self-monitoring,
effort, and self-efficacy, respectively. Mean scores for the navigation map group
were 25.03, 23.91, 24.12, and 24.73 for the same order of factors. As expected, t-tests
confirmed no significant difference by group between any of the self-regulation
measures: planning, t(62) = -.19, p = .85; self-monitoring, t(62) = -.12, p = .91;
effort, t(62) = .41, p = .69; self-efficacy, t(62) = -.76, p = .45.
Table 22
Descriptive Statistics of Trait Self-Regulation Scores for the Control Group and
Navigation Map Group

                      Control (n = 31)    Navigation Map (n = 33)
Scale                 M        SD         M        SD
Planning              24.87    4.06       25.03    2.37
Self-Monitoring       24.03    4.70       23.91    3.58
Effort                23.74    3.92       24.12    3.53
Self-Efficacy         25.55    4.85       24.73    3.74
An analysis of correlations was conducted between the four factors of the
trait self-regulation questionnaire and, in turn, the knowledge map scores, the
problem solving strategy retention scores, and the problem solving strategy transfer
scores. Tables 23, 24, and 25 show the correlations for the control group. Tables 26,
27, and 28 show the correlations for the navigation map group. Tables 29, 30, and 31
show the correlations for both groups combined. As can be seen in Tables 23 and 24,
for the control group, there was no significant relationship between self-regulation
and either knowledge mapping performance or problem solving strategy retention.
However, as can be seen in Table 25, for the control group, there was a negative
correlation between planning ability and the amount of improvement in the problem
solving strategy transfer scores, with greater planning ability associated with poorer
problem solving strategy transfer performance, r = -.43, p < .05.
As can be seen in Tables 26 and 28, for the navigation map group, there was
no significant relationship between self-regulation and either knowledge mapping
performance or problem solving strategy transfer. However, as can be seen in Table
27, for the navigation map group, there was a positive correlation between mental
effort and the amount of improvement in the problem solving strategy retention
scores, with greater mental effort associated with greater problem solving strategy
retention, r = .38, p < .05. As can be seen in Tables 29, 30, and 31, for both groups
combined, there were no significant correlations between self-regulation and
knowledge mapping performance, problem solving strategy retention, or problem
solving strategy transfer.
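The entries of Tables 23 through 31 are Pearson product-moment correlations. A minimal reference implementation (the score lists in the example are hypothetical, not study data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfectly linear relationship yields r = 1 (rounded to absorb
# floating-point error).
print(round(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 6))  # 1.0
```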
Table 23
Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and
Improvement for Knowledge Maps for the Control Group

                      KM            KM            KM
                      Occasion 1    Occasion 2    Improvement
Planning              .29           .22           -.03
Self-Monitoring       -.01          -.02          -.02
Effort                .24           .16           -.06
Self-Efficacy         .20           .19           .02
Table 24
Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and
Improvement for Problem Solving Strategy Retention Responses by the Control
Group

                      PSS Retention    PSS Retention    PSS Retention
                      Occasion 1       Occasion 2       Improvement
Planning              .15              .04              -.14
Self-Monitoring       -.04             -.07             -.02
Effort                .19              .09              -.14
Self-Efficacy         .29              .27              -.10
Table 25
Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and
Improvement for Problem Solving Strategy Transfer Responses by the Control
Group

                      PSS Transfer    PSS Transfer    PSS Transfer
                      Occasion 1      Occasion 2      Improvement
Planning              .21             -.19            -.43*
Self-Monitoring       -.04            -.29            -.28
Effort                .17             -.02            -.20
Self-Efficacy         .08             .03             -.05
* p < .05
Table 26
Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and
Improvement for Knowledge Maps for the Navigation Map Group

                      KM            KM            KM
                      Occasion 1    Occasion 2    Improvement
Planning              -.20          -.01          .16
Self-Monitoring       .12           .16           .08
Effort                -.16          .18           .34
Self-Efficacy         .09           .22           .17
Table 27
Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and
Improvement for Problem Solving Strategy Retention Responses by the Navigation
Map Group

                      PSS Retention    PSS Retention    PSS Retention
                      Occasion 1       Occasion 2       Improvement
Planning              .13              .19              .13
Self-Monitoring       -.07             .04              .12
Effort                .03              .29              .38*
Self-Efficacy         -.06             .04              .12
* p < .05
Table 28
Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and
Improvement for Problem Solving Strategy Transfer Responses by the Navigation
Map Group

                      PSS Transfer    PSS Transfer    PSS Transfer
                      Occasion 1      Occasion 2      Improvement
Planning              .25             -.02            .25
Self-Monitoring       -.12            -.18            -.06
Effort                .10             .30             .21
Self-Efficacy         .27             .34             .07
Table 29
Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and
Improvement for Knowledge Maps for Both Groups Combined

                      KM            KM            KM
                      Occasion 1    Occasion 2    Improvement
Planning              .13           .14           .04
Self-Monitoring       .04           .05           .02
Effort                .07           .17           .14
Self-Efficacy         .16           .20           .08
Table 30
Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and
Improvement for Problem Solving Strategy Retention Responses for Both Groups
Combined

                      PSS Retention    PSS Retention    PSS Retention
                      Occasion 1       Occasion 2       Improvement
Planning              .14              .10              -.05
Self-Monitoring       -.05             -.02             .04
Effort                .12              .19              .08
Self-Efficacy         .17              .16              -.01
Table 31
Correlation Between Self-Regulation Components and Occasion 1, Occasion 2, and
Improvement for Problem Solving Strategy Transfer Responses for Both Groups
Combined

                      PSS Transfer    PSS Transfer    PSS Transfer
                      Occasion 1      Occasion 2      Improvement
Planning              .02             -.12            -.14
Self-Monitoring       -.08            -.24            -.17
Effort                .13             .14             .02
Self-Efficacy         .17             .18             .01
Safe Cracking Performance
The problem solving performance outcomes for this study are based on the
O’Neil Problem Solving model (O’Neil, 1999), which defines problem solving in
terms of content understanding, problem solving strategies, and self-regulation (see
Figure 1). Problem solving strategy was measured with retention and transfer
questions similar to the methodology employed by Mayer (e.g., Mayer et al., 2002).
An alternative view of problem solving strategy outcomes in this study might be the
number of safes opened by participants. There were 5 possible safes to open in
game 1 and 5 safes to open in game 2. However, two of the safes in game 2 also
appeared in game 1. If a participant opened those safes in game 1, he or she was
likely to open them quickly in game 2. Twenty participants opened one of those
two safes in game 1, with forty-three opening that safe in game 2. Therefore, 23
participants who did not open that safe in game 1 opened it in game 2. For the other
of the two safes, twenty participants opened it in game 1, with thirty-eight opening it
in game 2. Therefore, 18 participants who did not open that safe in game 1 opened it
in game 2.
While in actuality there were only 8 different safes in the two games
combined, each game involved opening 5 safes, for a total of 10 safes. Further, with
regard to the two safes that appeared in both games, those who opened them in the
first game also opened them in the second game, receiving credit twice for the same
safe. While this skews the results, alternative approaches would also skew results.
For example, if participants were to receive credit only once for opening one of the
two common safes, in which game would they receive credit: game 1 or game 2? If
game 1, then the most safes they could receive credit for in game 2 would be 3. And
if instead credit was given in game 2, the most safes a participant could receive
credit for in game 1 would be 3. In either case, the participant’s score would
incorrectly reflect performance. The decision was made to give participants credit in
both games, since it seemed fairer to credit participants for safes opened, even if
twice, than to deny credit for safes opened. It should also be noted that, for
game 1, the experimenter opened two safes—the safes in the Reception room. And in
game 2, the experimenter opened four safes—the two safes in the Reception room
and the two safes in the Small Showroom.
Table 32 shows the mean scores for the number of safes opened during each
occasion by group, as well as the mean scores for the total number of safes opened
by the control group and the navigation map group. For the control group, as shown
in Table 32, the mean scores for the number of safes opened in occasion 1 and
occasion 2 were 2.68 and 2.32, respectively. For the navigation map group, the mean
scores for the number of safes opened in occasion 1 and occasion 2 were 2.70 and
2.21, respectively. Mean scores for total number of safes opened (occasion 1 plus
occasion 2) for the control group and navigation map group were 4.35 and 4.33,
respectively. Note that these scores reflect the safes opened by the participants and
do not include the two safes opened by the experimenter for game 1 and the four
opened by the experimenter for game 2.
Table 32
Descriptive Statistics of the Number of Safes Opened During Occasion 1 and
Occasion 2, and the Total Number of Safes Opened by the Control Group,
Navigation Map Group, and Both Groups Combined

Group                      Mean    SD
Control (n = 31)
  Game 1                   2.68    1.49
  Game 2                   2.32    1.58
  Total Safes              4.35    2.21
Navigation Map (n = 33)
  Game 1                   2.70    1.31
  Game 2                   2.21    1.29
  Total Safes              4.33    1.85
Total (n = 64)
  Game 1                   2.69    1.39
  Game 2                   2.27    1.43
  Total Safes              4.34    2.02
There was no significant difference in the number of safes opened during the
first occasion by the control group and the navigation map group, t(62) = .06, p =
.96, Cohen’s d effect size index = .01. The effect size of .01 for the differences in the
number of safes opened by the control and navigation groups during occasion 1
indicated a negligible effect. There was no significant difference in the number of
safes opened during the second occasion by the control group and the navigation
map group, t(62) = -.31, p = .76, Cohen’s d effect size index = .08. The effect size of
.08 for the differences in the number of safes opened by the control and navigation
map groups during occasion 2 indicated a negligible effect. There was no significant
difference in the total number of safes opened during both occasion 1 and occasion 2
by the control group and the navigation map group, t(62) = -.04, p = .97, Cohen’s d
effect size index = .01. The effect size of .01 for the differences in the total number
of safes opened by the control and navigation groups indicated a negligible effect.
A mixed-groups, repeated measures factorial ANOVA was performed to
examine the effects of the use or non-use of a navigation map on performance as
measured through the number of safes opened on occasion 1 and occasion 2. Table
33 shows the means for the conditions of the design. There was no interaction
between treatment and occasion, F(1,62) = .15, p = .70. There was also no main
effect for group, F(1,62) = .07, p = .89. There was a main effect of occasion, F(1,62)
= 6.28, p < .05, with more safes opened in occasion 1 than in occasion 2, for both
the control group and the navigation group.
Table 33
Means for the Number of Safes Opened by Group by Occasion

Group                 Safes Opened    Safes Opened    Total
                      Occasion 1      Occasion 2
Control (n = 31)      2.68 (1.49)     2.32 (1.58)     2.50 (1.54)
Treatment (n = 33)    2.70 (1.31)     2.21 (1.29)     2.46 (1.30)
Total                 2.69 (1.40)     2.27 (1.44)
Correlations were also generated for the number of safes opened in the first
and second occasions and the total number of safes opened (the number of safes
from the first occasion plus the number of safes from the second occasion) for the
control group (Table 34), the navigation map group (Table 35), and both groups
combined (Table 36). For both groups combined, there was a significant negative
relationship between amount of mental effort and the number of safes opened in the
first game, with more mental effort associated with fewer safes opened, r = -.25, p <
.05. For the navigation map group, the same negative relationship between mental
effort and the number of safes opened in the first game was found, with more mental
effort associated with fewer safes opened, r = -.38, p < .05. This negative relationship
was not found for the control group, r = -.15, p = .43.
For both groups combined, a positive relationship was found between
self-efficacy and the number of safes opened in the second game, with more
self-efficacy associated with more safes opened, r = .26, p < .05. This relationship
was not found for either the control group (r = .29, p = .11) or the navigation map
group (r = .21, p = .23). As expected, t-tests confirmed no significant difference by
group between any of the self-regulation measures: planning, t(62) = -.19, p = .85;
self-monitoring, t(62) = -.12, p = .91; effort, t(62) = .41, p = .69; self-efficacy, t(62)
= -.76, p = .45.
Table 34
Correlation Between Self-Regulation Components and Number of Safes Opened by
the Control Group

                      Safes Opened in    Safes Opened in    Total Number of
                      First Game         Second Game        Safes Opened
Planning              -.13               .10                .01
Self-Monitoring       -.15               .02                -.08
Effort                -.15               .09                -.08
Self-Efficacy         .07                .29                .19
* p < .05
Table 35
Correlation Between Self-Regulation Components and Number of Safes Opened by
the Navigation Map Group

                      Safes Opened in    Safes Opened in    Total Number of
                      First Game         Second Game        Safes Opened
Planning              -.06               -.15               -.15
Self-Monitoring       -.31               -.26               -.30
Effort                -.38*              .02                -.10
Self-Efficacy         .10                .21                .19
* p < .05
Table 36
Correlation Between Self-Regulation Components and Number of Safes Opened for
Both Groups Combined
                    Safes Opened in    Safes Opened in    Total Number of
                    First Game         Second Game        Safes Opened
Planning                 -.10               .01               -.04
Self-Monitoring          -.22              -.09               -.17
Effort                   -.25*              .06                .09
Self-Efficacy             .08               .26*               .19
* p < .05
Continuing Motivation Measure
The term continuing motivation is defined by Malouf (1987-1988) as
returning to a task or a behavior without apparent external pressure to do so when
other appealing behaviors are available. Similarly, Story and Sullivan (1986)
commented that the most common measure of continuing motivation is whether a
student returns to the same task at a later time. Because continuing motivation
requires returning to a task, an extra half hour was set aside for participants to
continue playing SafeCracker, if they chose to do so. Further, because continuing
motivation would require continuing to play when other appealing behaviors were
available (Malouf, 1987-1988), indicating a desire to continue playing but not actually continuing to play was not considered evidence of continuing motivation.
Participants received a score of one if they continued playing, regardless of the amount of time they continued to play, and a score of zero if they did not continue playing.
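Because each participant's continuing motivation was scored as 1 (continued playing) or 0 (did not), a group's mean score is simply the proportion of its participants who kept playing. A small sketch with hypothetical counts, chosen only to illustrate a mean near the control group's reported .10:

```python
# Hypothetical illustration: 3 of 31 participants continue playing.
# The group mean of the 0/1 scores equals the proportion who continued.
scores = [1] * 3 + [0] * 28  # 31 binary continuing-motivation scores
mean = sum(scores) / len(scores)
print(round(mean, 2))  # 0.1
```

So a mean of .10 for the control group corresponds to roughly one participant in ten choosing to continue.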
Table 37 shows the mean scores for continuing motivation for both the
control group and the navigation map group. The mean score for continuing
motivation for the control group was .10, while the mean score for the navigation
map group was .15. The amount of continuing motivation was not significantly
different between the two groups, t(62) = .65, p = .52, Cohen’s d effect size index =
.17. The effect size of .17 for the difference in continuing motivation between the control and navigation map groups indicated a negligible effect.
Table 37
Descriptive Statistics of the Continuing Motivation Scores of the Control Group,
Navigation Map Group, and Both Groups Combined
Group                                  Mean     SD
Control (n = 31)
  Continuing Motivation                 .10     .30
Navigation Map (n = 33)
  Continuing Motivation                 .15     .36
Total (n = 64)
  Continuing Motivation                 .13     .33
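As a sketch of how the summary statistics in Table 37 relate to the reported inferential statistics, a pooled-standard-deviation form of the t-statistic and Cohen's d can be computed from the rounded group means and SDs. Because the inputs are rounded to two decimals, and the exact pooling convention used in the analysis is not stated here, these values land near, rather than exactly on, the reported t(62) = .65 and d = .17:

```python
# Pooled-SD t and Cohen's d from Table 37 summary statistics.
# Inputs are the rounded values reported in the table, so results
# approximate (do not exactly reproduce) the reported statistics.
from math import sqrt

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two independent groups."""
    return sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

m1, sd1, n1 = 0.10, 0.30, 31   # control group
m2, sd2, n2 = 0.15, 0.36, 33   # navigation map group

sp = pooled_sd(sd1, n1, sd2, n2)
t = (m2 - m1) / (sp * sqrt(1 / n1 + 1 / n2))  # independent-samples t, df = 62
d = (m2 - m1) / sp                            # Cohen's d (pooled-SD form)
print(f"t({n1 + n2 - 2}) = {t:.2f}, d = {d:.2f}")
```

Dividing the mean difference by the control group's SD instead of the pooled SD gives a value closer to the reported .17, which suggests the choice of denominator accounts for the small discrepancy.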
Tests of the Research Hypotheses
Hypothesis 1: Participants who use a navigation map (the treatment group)
will exhibit significantly greater content understanding than participants who do not
use a navigation map (the control group).
Tables 12 and 13 showed that the navigation map group did not have
significantly greater content understanding than the control group as measured by
knowledge map construction. While the amount of improvement for the navigation
map group (M = 1.34) was greater than the amount of improvement for the control
group (M = .98), the difference was not significant. Hypothesis 1 was not supported.
Hypothesis 2: Participants who use a navigation map (the treatment group)
will exhibit greater problem solving strategy retention than participants who do not
use a navigation map (the control group).
Tables 16 and 17 showed that the navigation map group did not retain more
problem solving strategies than the control group. For both groups, retention
decreased from occasion 1 to occasion 2. While the decrease in retention was more
pronounced for the control group (M = -.52) than for the navigation map group (M =
-.42), the difference between the two groups was not significant. Hypothesis 2 was
not supported.
Hypothesis 3: Participants who use a navigation map (the treatment group) will
exhibit greater problem solving strategy transfer than participants who do not use a
navigation map (the control group).
Tables 18 and 19 showed that the navigation map group did not exhibit more
problem solving strategy transfer than the control group. For both groups, transfer
decreased from occasion 1 to occasion 2. While the decrease was more pronounced
for the navigation group (M = -.48) than for the control group (M = -.29), the
difference between the groups was not significant. Hypothesis 3 was not supported.
Hypothesis 4: There will be no significant difference in self-regulation
between the navigation map group (the treatment group) and the control group.
However, it is expected that higher levels of self-regulation will be associated with
better performance.
As indicated by Table 22, there were no significant differences in the self-regulation scores between the control group and the navigation map group. The mean
scores for planning, self-monitoring, effort, and self-efficacy were 24.87, 24.03,
23.74, and 25.55, respectively, for the control group and 25.03, 23.91, 24.12, and
24.73, respectively, for the navigation map group.
The latter part of the hypothesis, that higher levels of self-regulation would be associated with better performance, was only partially supported. With regard to knowledge mapping (Tables 23, 26, and 29), problem solving strategy retention (Tables 24, 27, and 30), and problem solving strategy transfer (Tables 25, 28, and 31), two correlations existed between performance and self-regulation. As shown in Table 25, there was a negative correlation between planning ability and the amount of improvement in the problem solving strategy transfer scores, r = -.43, p < .05. As shown in Table 27, there was a positive correlation between amount of mental effort and the amount of improvement in the problem solving strategy retention scores, r = .38, p < .05.
Another indicator of performance was the number of safes opened (Tables 32
and 33). There were no differences between the mean scores of the two groups in
number of safes opened in occasion 1, occasion 2, or the total number of safes
opened. Several correlations were found between number of safes opened and self-regulation measures. For both groups combined (Table 36), there was a negative correlation between mental effort and the number of safes opened in game 1 (r = -.25, p < .05). As shown in Table 35, the same negative correlation was found in the navigation group (r = -.38, p < .05). Table 36 also shows a positive correlation
between self-efficacy and the number of safes opened in game 2 for both groups
combined (r = .26, p < .05). No significant correlations were found between self-regulation and number of safes opened for the control group (Table 34). Hypothesis 4 was only partially supported.
Hypothesis 5: Participants who use a navigation map (the treatment group) will
exhibit a greater amount of continuing motivation, as indicated by continued optional
game play, than participants who do not use a navigation map (the control group).
Table 37 shows the mean scores for continuing motivation for the control
group (M = .10) and the navigation map group (M = .15). While the continuing
motivation score for the navigation map group was higher than the control group’s
score, the difference between the two groups was not significant, t(62) = .65, p = .52.
Hypothesis 5 was not supported.
CHAPTER 5
SUMMARY OF THE RESULTS AND DISCUSSION
The purpose of this study was to examine the effect of a navigation map on a
complex problem solving task in a 3-D, occluded, computer-based video game. With
one group playing the video game while using a navigation map (the treatment
group) and the other group playing the game without aid of a navigation map (the
control group), this study examined differences in problem solving outcomes as
informed by the O’Neil (1999) Problem Solving model. The O’Neil model delineated problem solving into content understanding, problem solving strategies, and self-regulation. Five hypotheses were generated for this study. The first four addressed
the three components of the O’Neil Problem Solving model, asserting that those who
used a navigation map (the treatment group) would exhibit greater content
understanding (hypothesis 1), greater retention of problem solving strategies
(hypothesis 2), and greater transfer of problem solving strategies (hypothesis 3) than
those who did not use a navigation map (the control group). The fourth hypothesis
asserted that those with higher amounts of trait self-regulation would perform better
than those with lower amounts of trait self-regulation. The fifth hypothesis of the
study asserted that those who used the navigation map (the treatment group) would
exhibit greater continuing motivation than those who did not use the navigation map
(the control group), as exhibited by continued optional play of the game.
Summary of the Results
Results of the data analysis indicated that the use of navigation maps did not affect problem solving as measured by performance based on the O’Neil (1999)
Problem Solving model. Those using the navigation map (the treatment group) did
not score higher than those who did not use the navigation map (the control group) in content understanding, problem solving strategy retention, or problem solving strategy transfer. In addition, with some minor exceptions, higher levels of self-regulation were unrelated to higher levels of performance regardless of whether or
not a map was used. Lastly, those who used the navigation map (the treatment group)
did not exhibit higher continuing motivation than those who did not use the map (the
control group).
While the results of the data analysis in this study did not provide support for any of the study’s five hypotheses, examination of the results may provide insights not only into why these results occurred in this study but also into characteristics of game-based problem solving environments and navigation maps that may inform the field and affect not only future studies, but game design and instructional design as well.
While the purpose of this study was to examine the effect of navigation maps on
performance outcomes, other factors may have contributed to the lack of sufficient
influence by the navigation maps.
To explain the lack of statistical difference between the treatment group
(navigation map) and control group (no map), two effects should be examined: one
that would reduce or suppress the effects of the treatment (the navigation map) and
one that would inflate the outcome measures for the control group. Since the
hypotheses of this study are based on the cognitive load and graphical scaffolding
research and theories of Richard Mayer and John Sweller, plausible explanations
should fit within those frameworks. Suppression of the treatment group’s outcomes might well be explained by extraneous load theory and by the contiguity effect. Inflation of the control group’s outcomes might well be explained by priming.
The contiguity effect proposes that separating items of importance either
spatially or temporally adds cognitive load. Since the navigation map was separated
from the game, spatial contiguity may have contributed to cognitive overload.
Extraneous load theories hold that attending to unnecessary items adds cognitive
load unrelated to the task, reducing the amount of cognitive capacity available for
processing necessary information or even contributing to cognitive overload. The
complex 3-D environment of the game SafeCracker was filled with visual and other
details that would be expected to add extraneous cognitive load. If the task of
navigating to rooms was not as cognitively challenging as expected, the addition of
the navigation map may have been detrimental, rather than beneficial.
Priming asserts that providing cues can help focus attention on important tasks or details, which ultimately helps with the metacognitive processes involved in learning and problem solving. Both groups were primed a number of times with
search and problem solving strategies, which might have aided both groups in
understanding procedures necessary for doing well in SafeCracker. Those
strategies may have influenced both groups enough to offset any differences that
might have been fostered by navigation map usage, ultimately resulting in similar
outcomes for the two groups. This chapter is divided into three sections. First will be
a discussion of possible explanations based on the contiguity effect and extraneous
load. Second will be a discussion of possible explanations based on strategy priming.
Last will be a summary of the discussions.
Discussion
Possible Effects from the Contiguity Effect and Extraneous Load
A major instructional issue in learning by doing within simulated environments concerns the proper type of guidance (i.e., scaffolding; Mayer et al., 2002). Mayer and colleagues (2002) commented that scaffolding is an effective
instructional strategy and that discovery-based learning environments can become
effective learning environments when the nature of the scaffolding is aligned with
the nature of the task, such as pictorial scaffolding for pictorially-based tasks and
textual scaffolding for textually-based tasks.
However, while graphical scaffolding appears to be beneficial, there are
potential problems associated with use of this type of scaffolding. One such problem
is termed the contiguity effect, which refers to the cognitive load imposed when
multiple sources of information are separated (Mayer et al., 1999; Mayer & Moreno, 2003; Mayer & Sims, 1994; Moreno & Mayer, 1999). There are
two forms of the contiguity effect: spatial contiguity and temporal contiguity.
Temporal contiguity occurs when one piece of information is presented prior to other
pieces of information. Spatial contiguity occurs when information is physically
separated (Mayer & Moreno, 2003). The contiguity effect results in split attention
(Moreno & Mayer, 1999).
According to the split attention effect, when information is separated by
space or time, the process of integrating the information may place an unnecessary
strain on limited working memory resources (Atkinson et al., 2000; Mayer, 2001;
Tarmizi & Sweller, 1988). When dealing with two or more related sources of information (e.g., text and diagrams), it is often necessary to mentally integrate the corresponding representations (e.g., verbal and pictorial) to construct a relevant
schema to achieve understanding. When the sources of information are separated in
space or time, this process of integration may place an unnecessary strain on limited
working memory resources, resulting in impairment in learning (Atkinson et al.,
2000; Mayer & Moreno, 1998; Tarmizi & Sweller, 1988). The current study likely
imposed spatial contiguity, since the navigation map was presented on a piece of
paper which, depending on where the participant placed the map, was separated from
the computer screen. This study did not examine the impact of this additional cognitive load or how it might have influenced problem solving outcomes; the added load may have been sufficient to offset the benefits expected from the graphical scaffolding (the navigation map).
Extraneous load refers to the cognitive load imposed by unnecessary
(extraneous) materials (Harp & Mayer, 1998; Mayer, Heiser, & Lonn, 2001; Moreno
& Mayer, 2000; Renkl & Atkinson, 2003; Schraw, 1998). Seductive details, a particular type of extraneous detail, are highly interesting but unimportant elements
or instructional segments that are often used to provide memorable or engaging
experiences (Mayer et al., 2001; Schraw, 1998). The seductive detail effect is the
reduction of retention caused by the inclusion of extraneous details (Harp & Mayer,
1998) and affects both retention and transfer (Moreno & Mayer, 2000).
Extraneous cognitive load (Renkl & Atkinson, 2003) is the most controllable
load, since it is caused by materials that are unnecessary to instruction. However,
those same materials may be important for motivation. Some research has proposed
that learning might benefit from the inclusion of extraneous information. Arousal
theory suggests that adding entertaining auditory adjuncts will make a learning task
more interesting, because it creates a greater level of attention so that more material
is processed by the learner (Moreno & Mayer, 2000). A possible solution to the conflict between the seductive detail effect, which proposes that extraneous details are detrimental, and arousal theory, which proposes that seductive details in the form of interesting auditory adjuncts may be beneficial, is to include the seductive details but guide the learner away from them and toward the relevant information (Harp & Mayer, 1998). SafeCracker is a visually rich and immersive 3-D game environment
that, as with virtually any modern 3-D game, is fraught with extraneous and
seductive details—so much so, that guiding the player away from these details may
be impossible.
The point-of-view of the participants in SafeCracker is that they are standing
in the rooms of a mansion. Participants can “look” around, can “walk” to various
locations in a room, “open” doors, “enter” other rooms, and “pick up” and “look at”
books, pieces of paper, and a variety of other items. Participants attempt to “open”
safes, by interacting with the safes’ locking and opening mechanisms (solving
puzzles), by “looking at” items contained in the participants’ inventories, and by
attempting to “use” objects, such as keys or coins to open the safes. While all of
these details make for a rich, visual interactive experience and for engaging
participants in the environment, they are extraneous to the two major goals of the
problem solving task—“finding” clues, rooms, and safes, and “opening” safes.
Because of the scope of extraneous details in SafeCracker and because of an inability to draw participants away from those extraneous details to focus on relevant details, as Moreno and Mayer (2002) suggested, extraneous detail effects might very well explain the lack of significant differences between the performance of the treatment and control groups; the extraneous and seductive details may have
placed enough extraneous cognitive load to offset the cognitive load benefits of the
navigation map.
In addition to the extraneous nature of the game environment, the navigation
map itself may have been an extraneous detail. The studies conducted by Chou and colleagues (Chou & Lin, 1998; Chou et al., 2000) provided the impetus for this
study. In line with the scaffolding and cognitive load research of Mayer and Sweller
(e.g., Atkinson et al., 2000; Mayer, 2001; Mayer & Moreno, 1998; Tarmizi &
Sweller, 1988), Chou and Lin (1998) examined the use of three map types: global
map, local map, and no map. The global map displayed the entire environment (a
hypertext, node-based environment), while the local map displayed only a portion of
the environment. Chou and Lin (1998) found that knowledge map creation by those
who used a global navigation map in a search related problem solving task was
significantly better than by those who used either a local navigation map or no map.
The navigation map in this study displayed the whole environment (the
entire bottom floor of the mansion) and, thus, would be defined as a global map.
Therefore, the results of this study should have matched the results of the Chou and
Lin (1998) study—but they didn’t. However, results of this study did match the
results of the Chou et al. (2000) study, which found no differences in knowledge
map creation based on map type. Since results from that second Chou and colleagues study differed from the results of other graphical scaffolding studies, including the earlier Chou and Lin study, it had been assumed, prior to this study, that those results had either been an anomaly or that the second study had been flawed. However, it was also possible the second study wasn’t flawed and it
was the nature of the Chou environment that resulted in the mixed results based on
map type (global, local, and no map).
It is possible that the Chou environment (Chou & Lin, 1998; Chou et al., 2000) was not complex enough to need, or benefit from, a navigation map. If that were
the case, then the navigation map would have been an extraneous detail and any
cognitive load benefits of map usage might have been offset by the additional
cognitive load introduced by the presence and use of the map. This could explain the
mixed results of the two Chou and colleagues studies. It could also explain the
results of this study. For this study, it had been believed that the search portion of the
problem solving task of finding and opening safes was sufficiently difficult to require, and to benefit from, use of a navigation map. However, it is quite possible
that was not the case. If the navigation map were unnecessary, then adding the map
would have simply added extraneous cognitive load for the treatment group,
resulting in poorer performance than expected; the benefits from using the map
would have been negated by the additional extraneous cognitive load.
In summary, inclusion of the navigation map may not have provided the
cognitive benefits expected from the inclusion of graphical scaffolding. The
navigation map was separated from the main gaming environment (the computer
screen), which may have resulted in the contiguity effect and the split attention
effect, which would cause additional cognitive load. In addition, the general nature
of the SafeCracker environment might have been sufficiently filled with extraneous
details as to offset any benefits from use of a navigation map. Lastly, the search portion of the problem solving task of finding and opening safes may not have been sufficiently difficult to benefit from use of a navigation map. If so, rather than
providing cognitive benefit, the navigation map may have acted as an extraneous
detail, resulting in reduced performance.
Possible Effects from Strategy Training
Priming is a cognitive phenomenon in which a stimulus (e.g., a word or sound) readies the mind to activate particular relevant schemas. This timely exposure
to stimuli results in enhanced access to stored stimuli or information (retrieved
October 7, 2005, from http://filebox.vt.edu/8080/users/dereese2/module8/module08bkup/IDProjectWebpage/lesson4.htm). According to Dennis and Schmidt
(2003), repetition priming is closely allied to skill acquisition. Moreno and Mayer
(2005) commented that lack of priming (in the form of guidance) will result in
reduction of the metacognitive process of selecting—one of the key components in
meaningful learning.
In this study, all subjects were primed a number of times and in several key knowledge areas. One instance of priming occurred during knowledge map training. A series of primings occurred during SafeCracker training. Another sequence of priming occurred during navigation map training for the treatment group and during navigation training for the control group. Additional priming occurred at
the start of each of the two SafeCracker games. It is believed that these primes might have improved the skills and game play knowledge of both groups enough to offset any gains that were expected due to treatment. These priming events could have inflated the control group’s skills and understanding of the game sufficiently to offset any differences that might have been seen due to treatment (use of a
navigation map). While priming occurred for the treatment group as well, the
priming might have been more important than use of the navigation map, negating
differences due to navigation map use and resulting in equivalent performance by the
two groups (treatment and control).
The main reason priming was included in this study was to emulate priming
provided in earlier game-based studies that utilized a game entitled Space Fortress
(see Day et al., 2001, for a description of Space Fortress). Numerous studies were
conducted using Space Fortress and each study began by teaching participants how
to play the game and included strategy instructions based on expert player
knowledge and experience (e.g., Day et al., 2001; Gopher et al., 1994; Shebilske et
al., 1992). The purpose of the training was to ensure that every participant began the
game with equivalent game knowledge and playing skills. That way, it could be
assumed that any differences in performances would be attributed to treatment, not
prior abilities or knowledge, or other game-related individual differences. The same
was expected to be true for this study. A secondary reason for adding priming in this
study was a reaction to observations during the pilot study, where neither of the
participants searched for clues. It was decided that priming related to searching for
clues was necessary for the main study. The following sections describe priming during the various phases of the study.
Strategy priming during knowledge map training. During knowledge map
training, all participants were told that, since every concept was applicable to
SafeCracker, they should add all the concepts to the screen and then begin making
links. These knowledge mapping instructions provided two key elements of priming.
First, participants were told that “all” concepts were to be used and should be added
to their knowledge map. This meant that, if they followed that strategy, early on
during map development they would see all concepts and know that links were
needed for all concepts. Therefore, it would prompt the participants to think about each combination of concepts, possibly more than they would have otherwise. Second, as participants exhausted the links they were aware of, any concept not yet involved in a link primed them that a link was missing. This might have fostered deeper levels of thinking, simply from knowing that a link had been missed, was not obvious to the participant, and needed to be discovered.
Strategy priming during SafeCracker training. From observations during the
pilot study, a number of verbal prompts were added to the SafeCracker instructions
in the main study to assist participants in remembering to search for clues. Consistent with research showing that repetition promotes retention, participants were reminded several times to search for clues. In addition to those
reminders, there were reminders on the importance of various types of clues, and
reminders for participants to write things down, particularly any diagrams they
found. For example, while looking at a piece of paper sitting on a counter in the
game environment, participants were given the instruction, “Notice the diagrams.
These might be important for opening a safe. You might want to write them down
later, when you start playing the game.” Not only were participants prompted to go
back to the paper when playing the game, they were primed to the concept that
diagrams could be important (even diagrams not on that particular paper) and that
information should be written down, rather than kept in working or long-term
memory.
Priming during navigation map and basic navigation training. The treatment
group received priming during navigation map training. The control group received
priming during navigation training. The treatment group’s priming included multiple
repetitions of the primes for remembering to search for clues and remembering to
write things down. The control group received similar priming during their
navigation training. While not repeated as often as the primes for the treatment
group, the control group’s priming did repeat earlier priming both groups had
received on searching for clues and writing things down, which should have made
the scope of control group priming and the degree of priming repetition similar to
that of the treatment group.
Priming at the start of each game. At the start of the first game, all
participants were reminded once again to look at objects in the various rooms,
including the room they were currently in, the Reception Room. Prior to beginning
the second game, all participants were reminded once more to search for clues and to
write down any information they deemed important. These repetitions helped ensure that participants remembered and, hopefully, acted on these strategies.
Prior to the second game, all participants were also told that the safes from
two of the rooms from the first game had been opened for them and the contents of
the safes in those rooms had been added to their inventories. Participants were then
told they might want to revisit those two rooms from the previous game, to search
for clues they might have missed in the first game. For the control group, this
priming might have caused participants to visit the two rooms from the prior game
which, without the priming, they might not have thought to visit. For the treatment
group, if the navigation map had been effective in reducing cognitive load,
participants might have thought to revisit those rooms, even without the priming. By
contrast, the greater cognitive load the control group was experiencing from lack of a
navigation map may have prevented those participants from making the
determination to revisit those two rooms and search for clues. Therefore, by
providing the strategy, the control group might have been given a strategy they
would not otherwise have had the cognitive capacity to devise.
Overall, a large number of strategy primes were given to both groups
(treatment and control). Priming occurred during knowledge map training and during
SafeCracker training. Priming occurred during navigation map training for the
treatment group and during navigation training for the control group. These
combined primings could have altered the behavior of both groups enough to negate differences by treatment (navigation map usage) and to effectively inflate the performance outcomes of the control group.
Summary of the Discussion
The lack of significance for any of the hypotheses in this study might be
explained by either something negatively influencing (deflating) performance by the
treatment group or by something positively influencing (inflating) performance by
the control group. Two explanations based on the scaffolding and cognitive load
research of Mayer and Sweller have been presented: one to account for deflated
performance by the treatment group and one to account for inflated performance by
the control group. Combined, these two effects provide plausible explanations for the
unexpected results of this study.
For the treatment group, inclusion of the navigation map may not have
provided the cognitive benefits expected from the inclusion of graphical scaffolding.
The navigation map was separated from the main gaming environment (the computer
screen), which may have resulted in the contiguity effect and the split attention effect
for the treatment group. The search portion of the problem solving task of finding and opening safes may not have been sufficiently difficult to benefit from use of
a navigation map. If so, rather than providing cognitive benefit, the navigation map
may have acted as an extraneous detail, resulting in reduced performance by the
treatment group. In addition, the general nature of the SafeCracker environment might have
been sufficiently filled with extraneous details to offset any benefits from use of a
navigation map by the treatment group, even if the map were necessary. For the
control group, the large number of strategy primes given to both groups (treatment
and control) could have positively altered behaviors enough to negate differences by
treatment (navigation map usage).
Four of the five study hypotheses were not supported and one was only
partially supported (hypothesis 4). The contiguity and split attention effects, as well
as the effects of extraneous details, appear to be plausible explanations for the results
of all five hypotheses, with regard to deflated treatment group performance.
Strategy priming appears to be a plausible explanation for inflating the performance
of the control group and ultimately affecting the results for the five hypotheses.
These combined effects provide reasonable explanation for the results of this study.
CHAPTER 6
SUMMARY, CONCLUSIONS, AND IMPLICATIONS
Summary
The purpose of this study was to examine the effect of a navigation map on a
complex problem solving task in a 3-D, occluded, computer-based video game. With
one group playing the video game while using the navigation map (the treatment
group) and the other group playing the game without aid of a navigation map (the
control group), this study examined differences in problem solving outcomes as
informed by the O’Neil (1999) Problem Solving model. The O’Neil model
delineated problem solving into content understanding, problem solving strategies,
and self-regulation.
Five hypotheses were generated for this study. The first four addressed the
three components of the O’Neil (1999) Problem Solving model, asserting that those
who used a navigation map (the treatment group) would exhibit greater content
understanding (hypothesis 1), greater retention of problem solving strategies
(hypothesis 2), and greater transfer of problem solving strategies (hypothesis 3) than
those who did not use a navigation map (the control group). The fourth hypothesis
asserted that those with higher amounts of trait self-regulation would perform better
than those with lower amounts of trait self-regulation. The fifth hypothesis of the
study asserted that those who used the navigation map (the treatment group) would
exhibit greater continuing motivation than those who did not use the navigation map
(the control group), as exhibited by continued optional play of the game.
Despite early expectations (Donchin, 1989; Malone, 1981; Malone & Lepper,
1987; Ramsberger, Hopwood, Hargan, & Underfull, 1983; Thomas & Macredie,
1994), research into the effectiveness of games and simulations as educational media
has been met with mixed reviews (de Jong & van Joolingen, 1998; Garris, Ahlers, &
Driskell, 2002; O’Neil, Baker, & Fisher, 2002). It has been suggested that the lack of
consensus can be attributed to weaknesses in instructional strategies embedded in
the media and to other issues related to cognitive load (Chalmers, 2003; Cutmore,
Hine, Maberly, Langford, & Hawgood, 2000; Lee, 1999; Thiagarajan, 1998; Wolfe,
1997). Cognitive load refers to the amount of mental activity imposed on working
memory at an instance in time (Chalmers, 2003; Sweller & Chandler, 1994, Yeung,
1999). Researchers have proposed that working memory limitations can have an
adverse effect on learning (Sweller & Chandler, 1994; Yeung, 1999). Further,
cognitive load theory suggests that learning involves the development of schemas
(Atkinson, Derry, Renkl, & Wortham, 2000), a process constrained by limited
working memory and separate channels for auditory and visual/spatial stimuli
(Brunken, Plass, & Leutner, 2003).
One way to reduce cognitive load is to use scaffolding, which provides
support during schema development by reducing the load in working memory
(Clark, 2001). For example, graphical scaffolding has been shown to provide
effective support for graphically-based learning environments, including video
games (Benbasat & Todd, 1993; Farrell & Moore, 2000; Mayer, Mautone, &
Prothero, 2002). Navigation maps, a particular form of graphical scaffolding, have
been shown to be an effective scaffold for navigation of a three-dimensional (3-D)
virtual environment (Cutmore et al., 2000). Navigation maps have also been shown
to be an effective support for navigating in a problem solving task in a two-dimensional (2-D) hypermedia environment (Baylor, 2001; Chou, Lin, & Sun, 2000).
What has not been examined, and is the purpose of this study, is the effect of
navigation maps, utilized for navigation in a 3-D, occluded, computer-based video
game, on outcomes of a complex problem solving task.
This study utilized an experimental, posttest-only, 2x2 repeated measures
design with two levels of treatment (map vs. no map) and two levels of occasion
(occasion 1 vs. occasion 2). Participants were randomly assigned to either the
treatment or the control group. The procedure involved administration of the pretest
questionnaires, the first treatment session and its occasion instruments, the second
treatment session and its occasion instruments, and debriefing. After debriefing,
participants were offered up to 30 minutes of additional playing time (to examine
continuing motivation). The data for 64 of the participants were included in the data
analysis.
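The random assignment step described above can be sketched as follows. This is an illustrative reconstruction, not the actual procedure used in the study; the participant IDs and the seed are hypothetical.

```python
import random

def assign_groups(participant_ids, seed=None):
    """Randomly split participants into equal-sized treatment (map)
    and control (no map) groups, as in random assignment designs."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)  # randomize order before splitting
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

# 64 participants, as in the final data analysis
groups = assign_groups(range(64), seed=1)
```

Seeding the generator is done here only so the sketch is reproducible; true random assignment would omit it.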
A number of instruments were included in the study: a demographic, game
play, and game preference questionnaire; two task completion forms that acted as
advance organizers by listing the names of the rooms to be found and brief
descriptions of the safes in each room; a self-regulation questionnaire that examined
the four self-regulation components of the O’Neil (1999) Problem Solving model
(planning, self-monitoring, mental effort, and self-efficacy); the computer-based
video game SafeCracker®; two navigation maps of the game’s environment (each
highlighting the rooms involved in the two games that would be played); a problem
solving strategy retention and transfer questionnaire to be completed after each of the
two SafeCracker games; and knowledge mapping software to be used after each of
the two SafeCracker games.
Results of the study did not support the five hypotheses. With regard to
hypothesis 1, results of the data analysis found that the navigation map group did not
have significantly greater content understanding than the control group as measured
by knowledge map construction. Hypothesis 1 was not supported. With regard to
hypothesis 2, results of the data analysis found that the navigation map group did not
retain significantly more problem solving strategies than the control group.
Hypothesis 2 was not supported. With regard to hypothesis 3, results of the data
analysis found that the navigation map group did not exhibit significantly more
problem solving strategy transfer than the control group. Hypothesis 3 was not
supported.
That higher levels of self-regulation would be associated with better
performance (hypothesis 4) was only partially supported. Across knowledge
mapping, problem solving strategy retention, and problem solving strategy transfer,
two significant correlations were found between performance and self-regulation: a
negative correlation between planning ability and the amount of improvement in the
problem solving strategy transfer scores (r = -.43, p < .05), and a positive correlation
between amount of effort and the amount of improvement in the problem solving
strategy retention scores (r = .38, p < .05). With regard to hypothesis 5, while the
continuing motivation score for the navigation map group (M = 15) was higher than
the control group’s score (M = 10), the difference between the two groups was not
significant. Hypothesis 5 was not supported.
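The correlation statistic reported for hypothesis 4 is the Pearson product-moment coefficient. A minimal sketch of its computation follows; the data here are hypothetical, and the significance testing behind the reported p-values is omitted.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    score lists: covariance divided by the product of the
    standard deviations (computed via sums of squared deviations)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Applied to, say, planning scores and transfer gain scores, a value near -.43 would indicate the moderate negative association reported above.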
In summary, results of the data analysis indicated that the use of navigation
maps did not affect problem solving as measured by performance based on the
O’Neil (1999) Problem Solving model. Those using the navigation map (the
treatment group) did not score higher than those who did not use the navigation map
(the control group) in content understanding, problem solving strategy retention, and
problem solving strategy transfer. In addition, higher levels of self-regulation were
unrelated to higher levels of performance regardless of whether or not a map was
used. Lastly, those who used the navigation map (the treatment group) did not
exhibit higher continuing motivation than those who did not use the map (the control
group).
These results were surprising, as the hypotheses for this study were based on
the work of Richard Mayer (e.g., Mayer et al., 2002) and John Sweller (e.g.,
Tuovinen & Sweller, 1999), which would have predicted support for all hypotheses.
Based on cognitive load theory, an important cognitive goal in design is to control
the amount of load placed on working memory, particularly by items not necessary
for learning. Navigation maps, a graphical form of scaffolding, would serve such a
purpose, by distributing the need to retain location and paths from working memory
to an external graphical support. It appears from this study, though, that such support
may not have been necessary in this game or that the maps did not offer appropriate
or sufficient scaffolding.
Two explanations were examined: one that looked at possible causes of
deflated performance by the treatment group (navigation map) and one that looked at
a possible cause of inflated performance by the control group (no map). Specifically,
the following were examined: the split attention effect (Mayer, 2001; Tarmizi &
Sweller, 1988) and its related contiguity effect (Mayer et al., 1999; Mayer &
Moreno, 1998; Mayer & Sims, 1994; Moreno & Mayer, 1999; Yeung et al., 1997);
the negative cognitive effects of extraneous and seductive details (Mayer, 1998;
Mayer et al., 2001); and theories related to priming (Dennis & Schmidt, 2003).
Conclusions
Several potential causes were examined to explain the unexpected results of
this study. Overall, it is likely that the results were related to cognitive load. The
contiguity and split attention effects, as well as extraneous cognitive load, seem to
provide plausible explanations for why the treatment group may not have benefited
from the navigation map. Strategy priming seems to provide a plausible explanation
for why the control group might have performed at levels equivalent to the treatment
group.
The separation of the map from the playing area would account for increased
cognitive load for the treatment group, since this separation would have introduced
contiguity and split attention effects. However, it is unclear whether these effects
would have been sufficient to negate greater performance by the treatment group.
Extraneous detail effects, including the effects of seductive details, would also
account for increased cognitive load, but it is unclear whether the effect would have
negated the effects of navigation map use by the treatment group enough to cause
both groups to perform equally. Priming is also a viable explanation of study results,
but it is unknown whether priming could have increased the performance of both
groups enough to negate the effects of navigation map use by the treatment group.
Implications
Off-the-shelf games might provide a platform for some research, but the
constraints imposed by off-the-shelf games might preclude many games from being
useful enough as research platforms, due to the lack of control over a number of
variables. One solution would be to use games that include editing abilities
sophisticated enough to modify the game to meet study requirements, including the
ability to add or remove elements, modify existing elements, and collect a variety of
data. A second solution would be to develop a game specifically for a study. While
this method would be the most costly and time consuming, it would offer the
advantage of creating a research environment containing every component needed
for treatment and control groups, enable modification of every element in the game,
from interface, to environment, to controls, to goals, and allow for tracking and
outputting of all desirable data.
The results of this study have shown that use of a navigation map does not
guarantee improvements over not using a navigation map. While scaffolding has
been shown to be a useful and beneficial instructional strategy, graphical scaffolding
has been shown to be an effective aid in graphically-based environments, and
navigation maps have been shown to provide benefits in 3-D space, it is apparent
from the results of this study that factors may exist that preclude benefits in all
situations. Several factors have been presented as possible explanations for the lack
of benefit from navigation map usage. To determine which or which combination of
these factors are the cause or causes of the findings of this study, a series of
experiments should be conducted. A customizable gaming environment must be
used, to control for each variable, to introduce each variable one at a time or in
combinations, and to track participant activities. It is through this careful evaluation
of variables that the benefits or limitations of navigation map usage can be
discovered.
More and more, games are being seen as a viable delivery platform for
educational content. But as has been found through a number of studies, it is not the
game but the instructional methods built into the game that result in learning. A
game may provide motivation, but it does not, of itself, provide learning. Only the
methods, content, and strategies embedded in the game can provide that learning. As
a potentially useful instructional strategy, it is important to discover the
circumstances under which navigation maps are beneficial to learning. Only then can
we begin to prescribe their use.
Since treatment had been expected to result in better performance in this
study, but did not, those factors that might have influenced navigation map
performance should be examined first. Three effects need to be tested, all relating to
unnecessary load caused by the use of the navigation map. The contiguity and related
split attention effects can be examined by having the map appear on screen, either on
the game’s interface or near the player’s focal point on the screen. The extraneous
load caused by the complex 3-D game environment can be examined by having
players use environments of varying complexity, from simply colored objects, bare
walls, and basic, boxy furniture to feature rich environments such as the environment
of SafeCracker. The third study would involve examining the use of the navigation
map in environments of varying scope. Because the search portion of the problem
solving task may not have been complex enough to benefit from a navigation map,
environments from as small as the single floor of a mansion as used in this study to
environments as large as several buildings, each with multiple floors, or even
buildings separated by complex occluded paths, such as swamps, lakes, or roads,
should be examined. In addition, the number of rooms involved in the problem
solving task could be varied from three rooms as in this study to ten or more rooms.
To examine the possible effect of priming on performance, priming could be
examined by varying the degree of strategies offered to players, from no strategies to
numerous strategies. It is also suggested that the amount of repetition be varied, to
determine the impact repetition has on priming for problem solving strategies in a
video game. Priming could be examined as a 2-by-3 study, with two levels of
strategy inclusion (none versus some amount) and three levels of repetition (none
versus a small amount versus a large amount).
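The proposed 2-by-3 priming study amounts to a full factorial crossing of the two factors. As an illustrative sketch, the six conditions can be enumerated as follows; the level labels are paraphrased from the text, not taken from any actual study materials.

```python
from itertools import product

# Factor levels for the proposed follow-up study (labels hypothetical)
strategy_levels = ["none", "some"]            # strategy inclusion
repetition_levels = ["none", "small", "large"]  # amount of repetition

# Cross the factors to produce every cell of the 2x3 design
conditions = [
    {"strategies": s, "repetition": r}
    for s, r in product(strategy_levels, repetition_levels)
]
```

Each dictionary is one cell of the design; participants would be randomly assigned across the six cells.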
More and more, educational institutions seem to be embracing the use of
video game and simulation environments as a way of modernizing teaching. The
primary impetus for this change in learning strategy is the belief that the motivational
aspects of games and simulations will lead to improvements in learning. Yet, as
research has shown, it is the quality and appropriateness of the instructional strategies
embedded in the learning environment that determine whether or not learning will occur.
Little is known about the use of immersive 3-D games for learning, and this study
highlights the fact that what works in one learning environment may not work in
another. To ensure that 3-D games provide the necessary features to foster learning,
instructional strategies that have previously been shown to be effective in other
learning environments must be carefully examined for effectiveness in this new
environment. One such strategy is the use of
navigation maps.
REFERENCES
Adams, P. C. (1998, March/April). Teaching and learning with SimCity 2000
[Electronic Version]. Journal of Geography, 97(2), 47-55.
Alessi, S. M. (2000). Simulation design for training and assessment. In H. F. O’Neil,
Jr. & D. H. Andrews (Eds.), Aircrew training and assessment (pp. 197-222).
Mahwah, NJ: Lawrence Erlbaum Associates.
Alexander, P. A. (1992). Domain knowledge: Evolving themes and emerging
concerns. Educational Psychologist, 27(1), 33-51.
Allen, R. B. (1997). Mental models and user models. In M. Helander, T. K. Landauer
& P. Prabhu (eds.), Handbook of Human Computer Interaction: Second,
Completely Revised Edition (pp. 49-63). Amsterdam: Elsevier
The American Heritage Dictionary of the English Language (fourth edition). (2002).
Boston, MA: Houghton Mifflin Co.
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R.
E., Pintrich, P. R., et al. (Eds.) (2001). A Taxonomy for Learning,
Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational
Objectives (complete edition). New York, NY: Longman.
Anderson, R. C., & Pickett, J. W. (1978). Recall of previously recallable information
following a shift in perspective. Journal of Verbal Learning and Verbal
Behavior, 17, 1-12.
Armer, A. (1988). Writing the Screenplay. Belmont, CA: Wadsworth Publishing.
Arthur, W., Jr., Strong, M. H., Jordan, J. A., Williamson, J. E., Shebilske, W. L., &
Regian, J. W. (1995). Visual attention: Individual differences in training
and predicting complex task performance. Acta Psychologica, 88, 3-23.
Asakawa, T., Gilbert, N. (2003). Synthesizing experiences: Lessons to be learned
from Internet-mediated simulation games. Simulation & Gaming, 34(1),
10-22.
Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from
examples: Instructional principles from the worked examples research.
Review of Educational Research, 70(2), 181-214.
Atkinson, R. K., Renkl, A., Merrill, M. M. (2003). Transitioning from studying
examples to solving problems: Effects of self-explanation prompts and fading
worked-out steps. Journal of Educational Psychology, 95(4), 774-783.
Ausubel, D. P. (1963). The psychology of meaningful verbal learning. New York:
Grune and Stratton.
Ausubel, D. P. (1968). Educational psychology: A cognitive view. New York: Holt,
Reinhart, and Winston.
Baddeley, A. D. (1986). Working memory. Oxford, England: Oxford University
Press.
Baddeley, A. D., & Logie, R. H. (1999). Working memory: The multiple-component
model. In A. Miyake & P. Shah (Eds). Models of working memory:
Mechanisms of active maintenance and executive control (pp. 28-61).
Cambridge, England: Cambridge University Press.
Baker, D., Prince, C., Shrestha, L., Oser, R., & Salas, E. (1993). Aviation computer
games for crew resource management training. The International Journal of
Aviation Psychology, 3(2), 143-156.
Baker, E. L., & Mayer, R. E. (1999). Computer-based assessment of problem
solving. Computers in Human Behavior, 15, 269-282.
Baker, E. L. & O’Neil, H. F., Jr. (2002). Measuring problem solving in computer
environments: current and future states. Computers in Human Behavior,
18, 609-622.
Banbury, S. P., Macken, W. J., Tremblay, S., & Jones, D. M. (2001, Spring).
Auditory distraction and short-term memory: Phenomena and practical
implications. Human Factors, 43(1), 12-29.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.
Barab, S. A., Bowdish, B. E., & Lawless, K. A. (1997). Hypermedia navigation:
Profiles of hypermedia users. Educational Technology Research &
Development, 45(3), 23-41.
Barab, S. A., Young, M. F., & Wang, J. (1999). The effects of navigational and
generative activities in hypertext learning on problem solving and
comprehension. International Journal of Instructional Media, 26(3), 283-309.
Baylor, A. L. (2001). Perceived disorientation and incidental learning in a web-based
environment: Internal and external factors. Journal of Educational
Multimedia and Hypermedia, 10(3), 227-251.
Benbasat, I., & Todd, P. (1993). An experimental investigation of interface design
alternatives: Icon vs. text and direct manipulation vs. menus. International
Journal of Man-Machine Studies, 38, 369-402.
Berson, M. J. (1996, Summer). Effectiveness of computer technology in the social
studies: A review of the literature. Journal of Research on Computing in
Education, 28(4), 486-499.
Berube, M. S., Severynse, M., Jost, D. A., Ellis, K., Pickett, J. P., Previte, R. E., et al.
(Eds.) (2001). Webster’s II: New college dictionary. Boston, MA:
Houghton Mifflin Company.
Berlyne, D. E. (1960). Conflict, arousal, and curiosity. New York: McGraw-Hill.
Betz, J. A. (1995/1996). Computer games: Increase learning in an interactive
multidisciplinary environment. Journal of Educational Technology Systems,
24(2), 195-205.
Bong, M. (2001). Between- and within-domain relationships of academic motivation
among middle and high school students: Self-efficacy, task-value, and
achievement goals. Journal of Educational Psychology, 93(1), 23-34.
Borkowski, J. G., Pintrich, P. R., & Zeidner, M. H. (Eds.). (2000). Handbook of
Self-Regulation. San Diego, CA: Academic Press.
Brougere, G. (1999, June). Some elements relating to children’s play and adult
simulation/gaming. Simulation & Gaming, 30(2), 134-146.
Brown, D. W., & Schneider, S. D. (1992), Young learners’ reactions to problem
solving contrasted by distinctly divergent computer interfaces. Journal of
Computing in Childhood Education, 3(3/4), 335-347.
Brozik, D., & Zapalska, A. (2002, June). The PORTFOLIO GAME: Decision
making in a dynamic environment. Simulation & Gaming, 33(2), 242-255.
Brunken, R., Plass, J. L., & Leutner, D. (2003). Direct measurement of cognitive
load in multimedia learning. Educational Psychologist 38(1), 53-61.
Bruning, R. H., Schraw, G. J., & Ronning, R. R. (1999). Cognitive psychology and
instruction (3rd ed.). Upper Saddle River, NJ: Merrill.
Carr, P. D., & Groves, G. (1998). The Internet-based operations simulation game. In
J. A. Chambers (Ed.), Selected Papers for the 9th International Conference on
College Teaching and Learning (pp. 15-23). Jacksonville, FL: Florida
Community College at Jacksonville.
Carroll, W. M. (1994). Using worked examples as an instructional support in the
algebra classroom. Journal of Educational Psychology, 86(3), 360-367.
Chalmers, P. A. (2003). The role of cognitive theory in human-computer interface.
Computers in Human Behavior, 19, 593-607.
Chen, H. H. (2005) A formative evaluation of the training effectiveness of a
computer game. Unpublished doctoral dissertation. University of Southern
California.
Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations:
How students study and use examples in learning to solve
problems. Cognitive Science, 13, 145-182.
Chou, C., & Lin, H. (1998). The effect of navigation map types and cognitive styles
on learners’ performance in a computer-networked hypertext learning system
[Electronic Version]. Journal of Educational Multimedia and Hypermedia,
7(2/3), 151-176.
Chou, C., Lin, H., & Sun, C.-T. (2000). Navigation maps in hierarchical-structured
hypertext courseware [Electronic Version]. International Journal of
Instructional Media, 27(2), 165-182.
Chuang, S.-h. (2003). The role of search strategies and feedback on a computer-based
problem solving task. Unpublished doctoral dissertation. University of
Southern California.
Clark, R. E. (1999). The CANE model of motivation to learn and to work: A two-stage
process of goal commitment and effort [Electronic Version]. In J.
Lowyck (Ed.), Trends in Corporate Training. Leuven, Belgium: University
of Leuven Press.
Clark, R. E. (Ed.) (2001). Learning from Media: Arguments, analysis, and evidence.
Greenwich, CT: Information Age Publishing.
Clark, R. E. (2003a, February). Strategies based on effective feedback during
learning. In H. F. O’Neil (Ed.), What Works in Distance Education. Report to
the Office of Naval Research by the National Center for Research on
Evaluation, Standards, and Student Testing (pp. 18-19).
Clark, R. E. (2003b, February). Strategies based on increasing student motivation:
Encouraging active engagement and persistence. In H. F. O’Neil (Ed.), What
Works in Distance Education. Report to the Office of Naval Research by the
National Center for Research on Evaluation, Standards, and Student Testing
(pp. 20-21).
Clark, R. E. (2003c, February). Strategies based on providing learner control of
instructional navigation. In H. F. O’Neil (Ed.), What Works in Distance
Education. Report to the Office of Naval Research by the National Center for
Research on Evaluation, Standards, and Student Testing (pp. 14-15).
Clark, R. E. (2003d, March). Fostering the work motivation of teams and
individuals. Performance Improvement, 42(3), 21-29.
Clark, R. E., & Sugrue, B. M. (2001). International views of the media debate. In R. E.
Clark (Ed.), Learning from Media: Arguments, Analysis, and Evidence (pp.
71-88). Greenwich, CT: Information Age Publishing.
Cobb, T. (1997). Cognitive efficiency: Toward a revised theory of media.
Educational Technology Research and Development, 45(4), 21-35.
Coffin, R. J., & MacIntyre, P. D. (1999). Motivational influences on computer-related
affective states. Computers in Human Behavior, 15, 549-569.
Corno, L., & Mandinach, E. B. (1983). The role of cognitive engagement in
classroom learning and motivation. Educational Psychologist, 18(2), 88-108.
Crookall, D., & Aria, K. (Eds.). (1995). Simulation and gaming across disciplines
and cultures: ISAGA at a watershed. Thousand Oaks, CA: Sage.
Crookall, D., Oxford, R. L., & Saunders, D. (1987). Towards a reconceptualization
of simulation. From representation to reality. Simulation/Games for
Learning, 17, 147-171.
Cross, T. L. (1993, Fall). AgVenture: A farming strategy computer game. Journal of
Natural Resources and Life Sciences Education, 22, 103-107.
Csikszentmihalyi, M. (1975). Beyond boredom and anxiety. San Francisco: Jossey
Bass.
Csikszentmihalyi, M. (1990). Flow: The psychology of optimal performance. New
York: Cambridge University Press.
Cutmore, T. R. H., Hine, T. J., Maberly, K. J., Langford, N. M., & Hawgood, G.
(2000). Cognitive and gender factors influencing navigation in a virtual
environment. International Journal of Human-Computer Studies, 53, 223-249.
Daniels, H. L., & Moore, D. M. (2000). Interaction of cognitive style and learner
control in a hypermedia environment. International Journal of Instructional
Media, 27(4), 369-383.
Davis, S., & Wiedenbeck, S. (2001). The mediating effects of intrinsic motivation,
ease of use and usefulness perceptions on performance in first-time and
subsequent computer users. Interacting with Computers, 13, 549-580.
Day, E. A., Arthur, W., Jr., & Gettman, D. (2001). Knowledge structures and the
acquisition of a complex skill. Journal of Applied Psychology, 86(5), 1022-1033.
de Jong, T., de Hoog, R., & de Vries, F. (1993). Coping with complex environments:
The effects of providing overviews and a transparent interface on learning
with a computer simulation. International Journal of Man-Machine Studies,
39, 621-639.
de Jong, T., & van Joolingen, W. R. (1998). Scientific discovery learning with
computer simulations of conceptual domains. Review of Educational
Research, 68, 179-202.
deCharms, R. (1986). Personal Causation. New York: Academic Press.
Deci, E. L. (1975). Intrinsic Motivation. New York: Plenum Press.
Dekkers, J., & Donati, S. (1981). The interpretation of research studies on the use of
simulation as an instructional strategy. Journal of Educational Research,
74(6), 64-79.
Dempsey, J. V., Haynes, L. L., Lucassen, B. A., & Casey, M. S. (2002). Forty simple
computer games and what they could mean to educators. Simulation &
Gaming, 33(2), 157-168.
Dias, P., Gomes, M. J., & Correia, A. P. (1999). Disorientation in hypermedia
environments: Mechanisms to support navigation. Journal of Educational
Computing Research, 20(2), 93-117.
Dillon, A., & Gabbard, R. (1998, Fall). Hypermedia as an educational technology: A
review of the quantitative research literature on learner comprehension,
control, and style. Review of Educational Research, 63(3), 322-349.
Donchin, E. (1989). The learning strategies project. Acta Psychologica, 71, 1-15.
Druckman, D. (1995). The educational effectiveness of interactive games. In D.
Crookall & K. Aria (Eds.), Simulation and gaming across disciplines and
cultures: ISAGA at a watershed (pp. 178-187). Thousand Oaks, CA: Sage
Eberts, R. E., & Bittianda, K. P. (1993). Preferred mental models for direct-manipulation
and command-based interfaces. International Journal of Man-Machine
Studies, 38, 769-785.
Eccles, J. S., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual
Review of Psychology, 53, 109-132.
Farrell, I. H., & Moore, D. M. (2000). The effect of navigation tools on learners’
achievement and attitude in a hypermedia environment. Journal of
Educational Technology Systems, 29(2), 169-181.
Frohlich, D. M. (1997). Direct manipulation and other lessons. In M. Helander, T. K.
Landauer & P. Prabhu (eds.), Handbook of Human Computer Interaction:
Second, Completely Revised Edition (pp. 463-488). Amsterdam: Elsevier
Galimberti, C., Ignazi, S., Vercesi, P., & Riva, G. (2001). Communication and
cooperation in networked environment: An experimental analysis.
CyberPsychology & Behavior, 4(1), 131-146.
Garris, R., Ahlers, R., & Driskell, J. E. (2002). Games, motivation, and learning: A
research and practice model. Simulation & Gaming, 33(4), 441-467.
Gevins, A., Smith, M. E., Leong, H., McEvoy, L., Whitfield, S., Du, R., & Rush, G.
(1998). Monitoring working memory load during computer-based tasks
with EEG pattern recognition methods. Human Factors, 40(1), 79-91.
Gopher, D., Weil, M., & Bareket, T. (1994). Transfer of skill from a computer game
trainer to flight. Human Factors, 36(3), 387-405.
Gredler, M.E. (1996). Educational games and simulations: a technology in search of
a research paradigm. In D. H. Jonassen (Ed.). Handbook of Research for
Educational Communications and Technology. (pp 521-540). New York:
Simon & Schuster Macmillan.
Green, C. S., & Bavelier, D. (2003, May 29). Action video game modifies visual
selective attention. Nature, 423, 534-537.
Green, T. D., & Flowers, J. H. (2003). Comparison of implicit and explicit learning
processes in a probabilistic task. Perceptual and Motor Skills, 97, 299-314.
Greenfield, P. M., deWinstanley, P., Kilpatrick, H., & Kaye, D. (1996). Action video
games and informal education: Effects on strategies for dividing visual
attention. In P. M. Greenfield & R. R. Cocking (Eds.), Interacting with Video
(pp. 187-205). Norwood, NJ: Ablex Publishing Corporation.
Hannafin, R. D., & Sullivan, H. J. (1996). Preferences and learner control over
amount of instruction. Journal of Educational Psychology, 88, 162-173.
Harp, S. F., & Mayer, R. E. (1998). How seductive details do their damage: A theory
of cognitive interest in science learning. Journal of Educational Psychology,
90(3), 414-434.
Harter, S. (1978). Effectance motivation reconsidered: Toward a developmental
model. Human Development, 1, 34-64.
Henderson, L., Klemes, J., & Eshet, Y. (2000). Just playing a game? Educational
simulation software and cognitive outcomes. Journal of Educational
Computing Research, 22(1), 105-129.
Herl, H. E., Baker, E. L., & Niemi, D. (1996). Construct validation of an approach to
modeling cognitive structure of U.S. history knowledge. Journal of
Educational Psychology, 89(4), 206-218.
Herl, H. E., O’Neil, H. F., Jr., Chung, G., & Schacter, J. (1999). Reliability and
validity of a computer-based knowledge mapping system to measure content
understanding. Computers in Human Behavior, 15, 315-333.
Hong, E., & O’Neil, H. F. Jr. (2001). Construct validation of a trait self-regulation
model. International Journal of Psychology, 36(3), 186-194.
Howland, J., Laffey, J., & Espinosa, L. M. (1997). A computing experience to
motivate children to complex performances [Electronic Version]. Journal of
Computing in Childhood Education, 8(4), 291-311.
Hubbard, P. (1991, June). Evaluating computer games for language learning.
Simulation & Gaming, 22(2), 220-223.
Jones, M. G., Farquhar, J. D., & Surry, D. W. (1995, July/August). Using
metacognitive theories to design user interfaces for computer-based learning.
Educational Technology, 35(4), 12-22.
Kaber, D. B., Riley, J. M., & Tan, K.-W. (2002). Improved usability of aviation
automation through direct manipulation and graphical user interface design.
The International Journal of Aviation Psychology, 12(2), 153-178.
Kagan, J. (1972). Motives and development. Journal of Personality and Social
Psychology, 22, 51-66.
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal
effect. Educational Psychologist, 38(1), 23-31.
Kalyuga, S., Chandler, P., & Sweller, J. (1998). Levels of expertise and instructional
design. Human Factors, 40(1), 1-17.
Kee, D. W., & Davies, L. (1988). Mental effort and elaboration: A developmental
analysis. Contemporary Educational Psychology, 13, 221-228.
Kee, D. W., & Davies, L. (1990). Mental effort and elaboration: Effects of
accessibility and instruction. Journal of Experimental Child Psychology, 49,
264-274.
Khoo, G.-S., & Koh, T.-S. (1998). Using visualization and simulation tools in tertiary
science education [Electronic Version]. The Journal of Computers in
Mathematics and Science Teaching, 17(1), 5-20.
King, K. W., & Morrison, M. (1998, Autumn). A media buying simulation game
using the Internet. Journalism and Mass Communication Educator, 53(3),
28-36.
Kirriemuir, J. (2002). The relevance of video games and gaming consoles to the
higher and further education learning experience. Retrieved February 3, 2004, from
http://www.jisc.ac.uk/general/index.cfm?name=techwatch_report_0201
Kirriemuir, J. (2002b, February). Video gaming, education, and digital learning
technologies: Relevance and opportunities [Electronic Version]. D-Lib
Magazine, 8(2), 1-8.
Lee, J. (1999). Effectiveness of computer-based instructional simulation: A meta-analysis.
International Journal of Instructional Media, 26(1), 71-85.
Leemkuil, H., de Jong, T., de Hoog, R., & Christoph, N. (2003). KM Quest: A
collaborative Internet-based simulation game. Simulation & Gaming,
34(1), 89-111.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task
performance. Englewood Cliffs, NJ: Prentice Hall.
Locke, E. A., & Latham, G. P. (2002, Summer). Building a practically useful theory of
goal setting and task motivation: A 35-year odyssey. American Psychologist,
57(9), 705-717.
Malone, T. W. (1981). What makes computer games fun? Byte, 6(12), 258-277.
Malone, T. W., & Lepper, M. R. (1987). Making learning fun: A taxonomy of
intrinsic motivations for learning. In R. E. Snow & M. J. Farr (Eds.),
Aptitude, learning, and instruction: Vol. 3. Conative and affective process
analyses (pp. 223-253). Hillsdale, NJ: Lawrence Erlbaum.
Malouf, D. (1987-1988). The effect of instructional computer games on continuing
student motivation. The Journal of Special Education, 21(4), 27-38.
Mayer, R. E. (1981). A psychology of how novices learn computer programming.
Computing Surveys, 13, 121-141.
Mayer, R. E. (1998). Cognitive, metacognitive, and motivational aspects of problem
solving. Instructional Science, 26, 49-63.
Mayer, R. E. (2001). Multimedia Learning. Cambridge, UK: Cambridge University
Press.
Mayer, R. E. (2003). Learning and Instruction. Upper Saddle River, NJ: Pearson
Education.
Mayer, R. E., & Chandler, P. (2001). When learning is just a click away: Does
simple user interaction foster deeper understanding of multimedia messages?
Journal of Educational Psychology, 93(2), 390-397.
Mayer, R. E., Heiser, J., & Lonn, S. (2001). Cognitive constraints on multimedia
learning: When presenting more material results in less understanding.
Journal of Educational Psychology, 93(1), 187-198.
Mayer, R. E., Mautone, P., & Prothero, W. (2002). Pictorial aids for learning by
doing in a multimedia geology simulation game. Journal of Educational
Psychology, 94(1), 171-185.
Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning:
Evidence of dual processing systems in working memory. Journal of
Educational Psychology, 90(2), 312-320.
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in
multimedia learning. Educational Psychologist, 38(1), 43-52.
Mayer, R. E., Moreno, R., Boire, M., & Vagge, S. (1999). Maximizing constructivist
learning from multimedia communications by minimizing cognitive load.
Journal of Educational Psychology, 91(4), 638-643.
Mayer, R. E., Sobko, K., & Mautone, P. D. (2003). Social cues in multimedia
learning: Role of speaker’s voice. Journal of Educational Psychology,
95(2), 419-425.
Mayer, R. E., & Wittrock, M. C. (1996). Problem solving transfer. In D. C. Berliner
& R. C. Calfee (Eds.), Handbook of educational psychology (pp. 47-62).
New York: Simon & Schuster Macmillan.
McGrenere, J. (1996). Design: Educational electronic multi-player games—A
literature review (Technical Report No. 96-12, the University of British
Columbia). Retrieved from http://taz.cs.ubc.ca/egems/papers/desmugs.pdf
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on
our capacity for processing information. Psychological Review, 63, 81-97.
Moreno, R., & Mayer, R. E. (1999). Cognitive principles of multimedia learning:
The role of modality and contiguity. Journal of Educational Psychology,
91(2), 358-368.
Moreno, R., & Mayer, R. E. (2000a). A coherence effect in multimedia learning: The
case of minimizing irrelevant sounds in the design of multimedia
instructional messages. Journal of Educational Psychology, 92(1), 117-125.
Moreno, R., & Mayer, R. E. (2000b). Engaging students in active learning: The case
for personalized multimedia messages. Journal of Educational Psychology,
92(4), 724-733.
Moreno, R., & Mayer, R. E. (2002). Learning science in virtual reality multimedia
environments: Role of methods and media. Journal of Educational
Psychology, 94(3), 598-610.
Morris, C. S., Hancock, P. A., & Shirkey, E. C. (2004). Motivational effects of
adding context relevant stress in PC-based game training. Military
Psychology, 16, 135-147.
Mousavi, S. Y., Low, R., & Sweller, J. (1995). Reducing cognitive load by mixing
auditory and visual presentation modes. Journal of Educational Psychology,
87(2), 319-334.
Mwangi, W., & Sweller, J. (1998). Learning to solve compare word problems: The
effect of example format and generating self-explanations. Cognition and
Instruction, 16(2), 173-199.
Niemiec, R. P., Sikorski, C., & Walberg, H. J. (1996). Learner-control effects: A
review of reviews and a meta-analysis. Journal of Educational Computing
Research, 15(2), 157-174.
Novak, J. (2005). Game Development Essentials. Clifton Park, NY: Thomson
Delmar Learning.
Noyes, J. M., & Garland, K. J. (2003). Solving the Tower of Hanoi: Does mode of
presentation matter? Computers in Human Behavior, 19, 579-592.
O’Neil, H. F., Jr. (1999). Perspectives on computer-based performance assessment of
problem solving: Editor’s introduction. Computers in Human Behavior, 15,
255-268.
O'Neil, H. F., Jr. (2002). Perspective on computer-based assessment of problem
solving [Special Issue]. Computers in Human Behavior, 18(6), 605-607.
O’Neil, H. F., Jr., & Abedi, J. (1996). Reliability and validity of a state
metacognitive inventory: Potential for alternative assessment. Journal of
Educational Research, 89, 234-245.
O’Neil, H. F., Jr., Baker, E. L., & Fisher, J. Y.-C. (2002, August 31). A formative
evaluation of ICT games. Manuscript: University of California, Los
Angeles, National Center for Research on Evaluation, Standards, and
Student Testing (CRESST).
O’Neil, H. F., Jr., & Herl, H. E. (1998). Reliability and validity of a trait measure of
self-regulation. Manuscript: University of California, Los Angeles, National
Center for Research on Evaluation, Standards, and Student Testing
(CRESST).
O’Neil, H. F., & Wainess, R. (in press). Classification of learning outcomes:
Evidence from the computer games literature. The Curriculum Journal.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional
design: Recent developments. Educational Psychologist, 38(1), 1-4.
Paas, F., Tuovinen, J. E., Tabbers, H., & Van Gerven, P. W. M. (2003). Cognitive
load measurement as a means to advance cognitive load theory. Educational
Psychologist, 38(1), 63-71.
Parchman, S. W., Ellis, J. A., Christinaz, D., & Vogel, M. (2000). An evaluation of
three computer-based instructional strategies in basic electricity and
electronics training. Military Psychology, 12(1), 73-87.
Park, O.-C., & Gittelman, S. S. (1995). Dynamic characteristics of mental models
and dynamic visual displays. Instructional Science, 23, 303-320.
Perkins, D. N., & Salomon, G. (1989). Are cognitive skills context bound?
Educational Researcher, 18, 16-25.
Pintrich, P. R., & DeGroot, E. V. (1990). Motivational and self-regulated learning
components of classroom academic performance. Journal of Educational
Psychology, 82, 33-40.
Pintrich, P. R., & Schunk, D. H. (2002). Motivation in education: Theory, research,
and applications. Upper Saddle River, NJ: Pearson Education.
Plotnick, E. (1999). Concept mapping: A graphical system for understanding the
relationship between concepts. Educational Media & Technology Yearbook,
24, 81-84.
Porter, D. B., Bird, M. E., & Wunder, A. (1990-1991). Competition, cooperation,
satisfaction, and the performance of complex tasks among Air Force cadets.
Current Psychology: Research & Reviews, 9(4), 347-354.
Prislin, R., Jordan, J. A., Worchel, S., Semmer, F. T., & Shebilske, W. L. (1996,
September). Effects of group discussion on acquisition of complex skills.
Human Factors, 38(3), 404-416.
Quilici, J. L., & Mayer, R. E. (1996). Role of examples in how students learn to
categorize statistics word problems. Journal of Educational Psychology,
88(1), 144-161.
Ramsberger, P. F., Hopwood, D., Hargan, C. S., & Underhill, W. G. (1983).
Evaluation of a spatial data management system for basic skills education:
Final Phase I report for period 7 October 1980 - 30 April 1983 (HumRRO
FR-PRD83-23). Alexandria, VA: Human Resources Research Organization.
Randel, J. M., Morris, B. A., Wetzel, C. D., & Whitehill, B. V. (1992). The
effectiveness of games for educational purposes: A review of recent
research. Simulation & Gaming, 23, 261-276.
Renkl, A., & Atkinson, R. K. (2003). Structuring the transition from example study
to problem solving in cognitive skill acquisition: A cognitive load
perspective. Educational Psychologist, 38(1), 13-22.
Renkl, A., Atkinson, R. K., Maier, U. H., & Staley, R. (2002). From example study
to problem solving: Smooth transitions help learning. The Journal of
Experimental Education, 70(4), 293-315.
Resnick, H., & Sherer, M. (1994). Computerized games in the human services--An
introduction. In H. Resnick (Ed.), Electronic Tools for Social Work Practice
and Education (pp. 5-16). Binghamton, NY: The Haworth Press.
Rhodenizer, L., Bowers, C. A., & Bergondy, M. (1998). Team practice schedules:
What do we know? Perceptual and Motor Skills, 87, 31-34.
Ricci, K. E. (1994, Summer). The use of computer-based videogames in knowledge
acquisition and retention. Journal of Interactive Instruction Development,
7(1), 17-22.
Ricci, K. E., Salas, E., & Cannon-Bowers, J. A. (1996). Do computer-based games
facilitate knowledge acquisition and retention? Military Psychology, 8(4),
295-307.
Rieber, L. P. (1996). Seriously considering play: Designing interactive learning
environments based on the blending of microworlds, simulations, and games.
Educational Technology Research and Development, 44(2), 43-58.
Rieber, L. P., & Matzko, M. J. (Jan/Feb 2001). Serious design for serious play in
physics. Educational Technology, 41(1), 14-24.
Rieber, L. P., Smith, L., & Noah, D. (1998, November/December). The value of
serious play. Educational Technology, 38(6), 29-37.
Rosenorn, T., & Kofoed, L. B. (1998). Reflection in learning processes through
simulation/gaming. Simulation & Gaming, 29(4), 432-440.
Ruben, B. D. (1999, December). Simulations, games, and experience-based
learning: The quest for a new paradigm for teaching and learning.
Simulation & Gaming, 30(4), 498-505.
Ruddle, R. A., Howes, A., Payne, S. J., & Jones, D. M. (2000). The effects of
hyperlinks on navigation in virtual environments. International Journal of
Human-Computer Studies, 53, 551-581.
Ruiz-Primo, M. A., Schultz, S. E., & Shavelson, R. J. (1997). Knowledge map-based
assessment in science: Two exploratory studies (CSE Tech. Rep. No. 436).
Los Angeles: University of California, Center for Research on Evaluation,
Standards, and Student Testing (CRESST).
Salas, E., Bowers, C. A., & Rhodenizer, L. (1998). It is not how much you have but
how you use it: Toward a rational use of simulation to support aviation
training. The International Journal of Aviation Psychology, 8(3), 197-208.
Salomon, G. (1983). The differential investment of mental effort in learning from
different sources. Educational Psychology, 18(1), 42-50.
Santos, J. (2002, Winter). Developing and implementing an Internet-based financial
system simulation game. The Journal of Economic Education, 33(1), 31-40.
Schau, C., & Mattern, N. (1997). Use of map techniques in teaching applied statistics
courses. The American Statistician, 51, 171-175.
Schraw, G. (1998). Processing and recall differences among seductive details.
Journal of Educational Psychology, 90(1), 3-12.
Shen, C.-Y. (in preparation). The Effectiveness of Worked Examples in a Game-Based
Problem-Solving Task. Unpublished doctoral dissertation, University of
Southern California.
Shebilske, W. L., Regian, W., Arthur, W., Jr., & Jordan, J. A. (1992). A dyadic
protocol for training complex skills. Human Factors, 34(3), 369-374.
Shewokis, P. A. (2004). Memory consolidation and contextual interference effects
with computer games. Perceptual and Motor Skills, 97, 381-389.
Shyu, H.-y., & Brown, S. W. (1995). Learner-control: The effects of learning a
procedural task during computer-based videodisc instruction. International
Journal of Instructional Media, 22(3), 217-230.
Soanes, C. (Ed.). (2003). Compact Oxford English Dictionary of Current English
(2nd ed.). Oxford, UK: Oxford University Press.
Spiker, V. A., & Nullmeyer, R. T. (n.d.). Benefits and limitations of simulation-based
mission planning and rehearsal. Unpublished manuscript.
Stewart, K. M. (1997, Spring). Beyond entertainment: Using interactive games in
web-based instruction. Journal of Instructional Delivery, 11(2), 18-20.
Stolk, D., Alexandrian, D., Gros, B., & Paggio, R. (2001). Gaming and multimedia
applications for environmental crisis management training. Computers in
Human Behavior, 17, 627-642.
Story, N., & Sullivan, H. J. (1986, November/December). Factors that influence
continuing motivation. Journal of Educational Research, 80(2), 86-92.
Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition
and Instruction, 12, 185-233.
Tarmizi, R. A., & Sweller, J. (1988). Guidance during mathematical problem
solving. Journal of Educational Psychology, 80(4), 424-436.
Tennyson, R. D., & Breuer, K. (2002). Improving problem solving and creativity
through use of complex-dynamic simulations. Computers in Human
Behavior, 18(6), 650-668.
Thiagarajan, S. (1998, September/October). The myths and realities of simulations in
performance technology. Educational Technology, 38(4), 35-41.
Thomas, P., & Macredie, R. (1994). Games and the design of human-computer
interfaces. Educational Technology, 31, 134-142.
Thompson, L. F., Meriac, J. P., & Cope, J. G. (2002, Summer). Motivating online
performance: The influence of goal setting and Internet self-efficacy. Social
Science Computer Review, 20(2), 149-160.
Thorndyke, P. W., & Hayes-Roth, B. (1982). Differences in spatial knowledge
acquired from maps and navigation. Cognitive Psychology, 14, 560-589.
Tkacz, S. (1998, September). Learning map interpretation: Skill acquisition and
underlying abilities. Journal of Environmental Psychology, 18(3), 237-249.
Tuovinen, J. E., & Sweller, J. (1999). A comparison of cognitive load associated
with discovery learning and worked examples. Journal of Educational
Psychology, 91(2), 334-341.
van Merrienboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints for
complex learning: The 4C/ID-model. Educational Technology Research &
Development, 50(2), 39-64.
van Merrienboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking a load off a
learner’s mind: Instructional design for complex learning. Educational
Psychologist, 38(1), 5-13.
Wainess, R., & O’Neil, H. F. Jr. (2003, August). Feasibility study: Video game
research platform. Manuscript: University of Southern California.
Washbush, J., & Gosen, J. (2001, September). An exploration of game-derived
learning in total enterprise simulations. Simulation & Gaming, 32(3), 281-296.
West, D. C., Pomeroy, J. R., Park, J. K., Gerstenberger, E. A., & Sandoval, J. (2000).
Critical thinking in graduate medical education. Journal of the American
Medical Association, 284(9), 1105-1110.
Westbrook, J. I., & Braithwaite, J. (2001). The Health Care Game: An evaluation of
a heuristic, web-based simulation. Journal of Interactive Learning Research,
12(1), 89-104.
Westerman, S. J. (1997). Individual differences in the use of command line and
menu computer interfaces. International Journal of Human-Computer
Interaction, 9(2), 183-198.
White, R. W. (1959). Motivation reconsidered: The concept of competence.
Psychological Review, 66, 297-333.
Wiedenbeck, S., & Davis, S. (1997). The influence of interaction style and
experience on user perceptions of software packages. International Journal
of Human-Computer Studies, 46, 563-588.
Wolfe, J. (1997, December). The effectiveness of business games in strategic
management course work [Electronic Version]. Simulation & Gaming
Special Issue: Teaching Strategic Management, 28(4), 360-376.
Wolfe, J., & Roge, J. N. (1997, December). Computerized general management
games as strategic management learning environments [Electronic Version].
Simulation & Gaming Special Issue: Teaching Strategic Management, 28(4),
423-441.
Yair, Y., Mintz, R., & Litvak, S. (2001). 3D-virtual reality in science education: An
implication for astronomy teaching. Journal of Computers in Mathematics
and Science Teaching, 20(3), 293-305.
Yeung, A. S., Jin, P., & Sweller, J. (1997). Cognitive load and learner expertise:
Split-attention and redundancy effects in reading with explanatory notes.
Contemporary Educational Psychology, 23, 1-21.
Yu, F.-Y. (2001). Competition within computer-assisted cooperative learning
environments: Cognitive, affective, and social outcomes. Journal of
Educational Computing Research, 24(2), 99-117.
Yung, A. S. (1999). Cognitive load and learner expertise: Split attention and
redundancy effects in reading comprehension tasks with vocabulary
definitions. Journal of Educational Media, 24(2), 87-102.
Zimmerman, B. J. (1994). Dimensions of academic self-regulation: A conceptual
framework for education. In D. H. Schunk & B. J. Zimmerman (Eds.),
Self-regulation of learning and performance (pp. 3-21). Hillsdale, NJ: Erlbaum.
Zimmerman, B. J. (2000). Self-efficacy: An essential motive to learn. Contemporary
Educational Psychology, 25(1), 82-91.
APPENDIX A
Self-Regulation Questionnaire
Name (please print): __________________________________________________
Directions: A number of statements which people have used to describe themselves
are given below. Read each statement and indicate how you generally think or feel
on learning tasks by marking your answer sheet. There are no right or wrong
answers. Do not spend too much time on any one statement. Remember, give the
answer that seems to describe how you generally think or feel.
Rating scale: 1 = Almost Never   2 = Sometimes   3 = Often   4 = Almost Always

1. I determine how to solve a task before I begin. 1 2 3 4
2. I check how well I am doing when I solve a task. 1 2 3 4
3. I work hard to do well even if I don't like a task. 1 2 3 4
4. I believe I will receive an excellent grade in courses. 1 2 3 4
5. I carefully plan my course of action. 1 2 3 4
6. I ask myself questions to stay on track as I do a task. 1 2 3 4
7. I put forth my best effort on tasks. 1 2 3 4
8. I’m certain I can understand the most difficult material presented in the readings for courses. 1 2 3 4
9. I try to understand tasks before I attempt to solve them. 1 2 3 4
10. I check my work while I am doing it. 1 2 3 4
11. I work as hard as possible on tasks. 1 2 3 4
12. I’m confident I can understand the basic concepts taught in courses. 1 2 3 4
13. I try to understand the goal of a task before I attempt to answer. 1 2 3 4
14. I almost always know how much of a task I have to complete. 1 2 3 4
15. I am willing to do extra work on tasks to improve my knowledge. 1 2 3 4
16. I’m confident I can understand the most complex material presented by the teacher in courses. 1 2 3 4
17. I figure out my goals and what I need to do to accomplish them. 1 2 3 4
18. I judge the correctness of my work. 1 2 3 4
19. I concentrate as hard as I can when doing a task. 1 2 3 4
20. I’m confident I can do an excellent job on the assignments and tests in courses. 1 2 3 4
21. I imagine the parts of a task I have to complete. 1 2 3 4
22. I correct my errors. 1 2 3 4
23. I work hard on a task even if it does not count. 1 2 3 4
24. I expect to do well in this course. 1 2 3 4
25. I make sure I understand just what has to be done and how to do it. 1 2 3 4
26. I check my accuracy as I progress through a task. 1 2 3 4
27. A task is useful to check my knowledge. 1 2 3 4
28. I’m certain I can master the skills being taught in courses. 1 2 3 4
29. I try to determine what the task requires. 1 2 3 4
30. I ask myself, how well am I doing, as I proceed through tasks. 1 2 3 4
31. Practice makes perfect. 1 2 3 4
32. Considering the difficulty of courses, teachers, and my skills, I think I will do well in courses. 1 2 3 4
Copyright © 1995, 1997, 1998, 2000 by Harold F. O’Neil, Jr.
revised 11/20/05
271
APPENDIX B
Knowledge Map Specifications
General Domain Specification / This Software

Scenario: Create a knowledge map of the content understanding of SafeCracker, a
computer puzzle-solving game.

Participants: College students, graduates, or graduate students.

Knowledge map concepts/nodes: Fifteen predefined key concepts identified in the
content of SafeCracker by multiple experts: book, catalog, clue, code,
combination, compass, desk, direction, floor plan, key, room, safe, searching,
trial-and-error, and tool.

Knowledge map links: Seven predefined relational links identified in the content of
SafeCracker by multiple experts: causes, contains, leads to, part of, prior to,
requires, and used for.

Knowledge map domain/content (SafeCracker): SafeCracker is a computer
puzzle-solving game. There are over 50 rooms containing approximately 30
safes; each safe is a puzzle to solve. Five rooms were used in the study: three
for each game, with one room used in both games. To solve the puzzles,
participants must find clues and tools hidden in the rooms, reason through the
logic and sequence of a safe, and attempt to apply items and clues they have
found. In some instances, participants must also apply prior domain knowledge.

Training of the computer knowledge mapping system: All participants went through
the same training session, with one exception: those in the treatment group
learned to use the navigation map, and the treatment and control groups were
given different path-finding strategies. The training included the following
elements:
• How to construct a knowledge map using the computer mapping system
• How to play SafeCracker

Type of knowledge to be learned: Problem solving

Three problem solving measures:
1. Knowledge map used to measure content understanding and structure,
including (a) semantic content score; (b) the number of concepts; and (c) the
number of links
2. Domain-specific problem solving strategy questionnaire, including questions
to measure problem solving retention and transfer
3. Trait self-regulation questionnaire used to measure the four elements of trait
self-regulation: planning, self-checking, self-efficacy, and mental effort
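The knowledge map measures listed above can be pictured as a small data structure: a map is a set of (concept, link, concept) propositions drawn from the fixed concept and link vocabularies, and the concept and link counts fall out as simple tallies. The sketch below is illustrative only; the function name `score_map` and the overlap-based semantic content score (counting propositions shared with an expert map) are assumptions, not the scoring procedure used in the study.

```python
# Illustrative sketch of the three knowledge-map measures.
# The 15 concepts and 7 links are taken from the table above;
# everything else (names, scoring rule) is hypothetical.

CONCEPTS = {
    "book", "catalog", "clue", "code", "combination", "compass", "desk",
    "direction", "floor plan", "key", "room", "safe", "searching",
    "trial-and-error", "tool",
}
LINKS = {"causes", "contains", "leads to", "part of", "prior to",
         "requires", "used for"}

def score_map(propositions, expert_map):
    """Return (semantic content score, number of concepts, number of links).

    A proposition is a (concept, link, concept) triple. Here the semantic
    content score is simply the count of propositions also present in the
    expert map -- one plausible rule, not necessarily the study's.
    """
    # Keep only triples built from the predefined vocabularies.
    valid = {p for p in propositions
             if p[0] in CONCEPTS and p[2] in CONCEPTS and p[1] in LINKS}
    semantic = len(valid & set(expert_map))
    concepts_used = {c for (a, _, b) in valid for c in (a, b)}
    return semantic, len(concepts_used), len(valid)

student = [("clue", "leads to", "combination"),
           ("combination", "used for", "safe"),
           ("key", "part of", "desk")]
expert = [("clue", "leads to", "combination"),
          ("combination", "used for", "safe"),
          ("tool", "used for", "safe")]
print(score_map(student, expert))  # (2, 5, 3)
```

Representing the map as a set of triples keeps the two structural counts (concepts, links) independent of the semantic score, mirroring how the table lists them as separate sub-measures.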