<Draft v.3 04/09/04>
The Effectiveness of Worked Examples on a Game-Based Problem-Solving Task
A Proposal
Submitted to: Dr. Harold O’Neil (Chair)
Dr. Edward Kazlauskas
Dr. Robert Rueda
by
Chun-Yi (Danny) Shen
University of Southern California
3401 S. Sentous Ave., # 148
West Covina, CA 91792
(626) 715-6130
cshen@usc.edu
In Partial Fulfillment of the Requirements for the Ed.D. in Learning and Instruction
Table of Contents

ABSTRACT
CHAPTER I: INTRODUCTION
  Background of the Problem
  Purpose of the Study
  Significance of the Study
CHAPTER II: LITERATURE REVIEW
  Relevant Studies
    Games and Simulations
      Theories of Games and Simulations
      Training Effectiveness of Games
        Promotion of Motivation
        Enhancement of Thinking Skills
        Facilitation of Metacognition
        Improvement of Knowledge
        Building Attitudes
      Design of Games and Simulations
      Summary
    Problem Solving
      Significance of Problem-Solving Skills
      Assessment of Problem Solving
        Measurement of Content Understanding
        Measurement of Problem-Solving Strategies
        Measurement of Self-Regulation
      Summary
    Worked Examples
      Theories of Worked Examples
        Cognitive Load Theory
        Schema
        Scaffolding
        ACT-R
      Design of Worked Examples
        Before vs. After
        Part vs. Whole
        Backward Fading vs. Forward Fading
        Text vs. Diagrams
        Visual vs. Verbal
        Steps vs. Subgoals
      Summary
  Summary of the Literature
CHAPTER III: METHODOLOGY
  Research Design
  Research Hypotheses
  Pilot Study
    Participants
    Puzzle-Solving Game
    Knowledge Map
    Measures
      Content Understanding Measure
      Domain-Specific Problem-Solving Strategies Measure
      Self-Regulation Questionnaire
    Procedure
      Worked Examples
      Time Chart of the Study
    Data Analysis
  Main Study
    Method of the Main Study
      Participants
      Game
    Measures
      Knowledge Map
      Domain-Specific Problem-Solving Strategies Measure
      Self-Regulation Questionnaire
    Procedure
      Computer-Based Knowledge Map Training
      Game Playing
      Worked Examples
    Data Analysis
REFERENCES
Appendix A: Expert Map
Appendix B: Problem Solving Strategy Questionnaire
Appendix C: Self-Regulation Questionnaire
ABSTRACT
Training with computer games is an important activity in many industries (Adams,
1998; Chambers, Sherlock, & Kucik, 2002; Faria, 1998; Lane, 1995; O’Neil & Andrews, 2000;
Ruben, 1999; Washbush & Gosen, 2001). Researchers point out that simulations and games are
widely accepted as a powerful alternative to traditional ways of teaching and learning because
they facilitate learning by doing (Mayer, Mautone, & Prothero, 2002; Rosenorn & Kofoed,
1998; Schank, 1999). Problem-solving skill may be effectively improved by computer games
(Mayer, 2002). Problem solving is one of the most significant competencies in both job
settings and schools; as a result, teaching and assessing problem solving have become among the
most significant educational objectives (Mayer, 2002). In addition, according to previous studies
(Cooper & Sweller, 1987; Sweller & Cooper, 1985), worked examples can effectively facilitate
problem-solving skill by enhancing schema construction and automation, reducing cognitive load,
and providing assistance during learning. Therefore, the researcher plans to conduct an
experimental study to examine the effectiveness of worked examples on problem-solving skill,
including content understanding, domain-specific problem-solving strategies, and self-regulation,
in a game-based environment.
In the first part of this proposal, the author will review the relevant literature on computer
games and simulations, problem solving, and worked examples. The second part of this proposal
will include a pilot and a main study. The pilot study will focus on a formative evaluation. The
main study will examine the effectiveness of worked examples on a game-based problem-solving
task.
CHAPTER I
INTRODUCTION
Background of the Problem
As Rieber (1996) pointed out, play performs an important role in psychological, social,
and intellectual development. A study by Betz (1995) shows that computer games facilitate
learning through visualization, experimentation, and the creativity of play. Also, Huntington (1984)
indicates that computer games usually include problems that enhance critical thinking. Computer
games have been used in many different industries, such as academic (Adams, 2000), business
(Adams, 1998; Faria, 1998; Lane, 1995; Washbush & Gosen, 2001), military (O’Neil & Andrews,
2000; Chambers, Sherlock, & Kucik, 2002), and medical (Ruben, 1999). Quinn (1991, 1996)
indicates that computer games, especially adventure games, are effective tools for problem-solving
training. O’Neil and Fisher (2002) suggest that computer games have four major
characteristics: (a) complex and diverse approaches to learning processes and outcomes; (b)
interactivity; (c) ability to address cognitive as well as affective learning issues; and, most
importantly, (d) motivation for learning. As O’Neil and Fisher (2002) pointed out, the effectiveness
of instructional games falls into five major categories: promotion of motivation,
enhancement of thinking skills, facilitation of metacognition, improvement of knowledge and
skills, and building of attitudes.
The two major alternative techniques for learning problem solving that have been studied are
the use of worked examples and goal-free problems (Owen & Sweller, 1985; Sweller, Mawer, &
Ward, 1983; Sweller, 1990). In the last 20 years, many researchers have paid considerable
attention to worked examples and concluded that worked-examples instruction is superior to
conventional problem-solving instruction, especially in the fields of music, chess, athletics,
mathematics, computer programming, and physics (Carroll, 1994; Tarmizi & Sweller, 1988;
Sweller, Chandler, Tierney, & Cooper, 1990; Chi, Bassok, Lewis, Reimann, & Glaser, 1989;
Ward & Sweller, 1990; Renkl, Atkinson, Maier, & Staley, 2002; Reimann, 1997; VanLehn,
1996). A number of researchers investigated the efficacy of using worked examples in classroom
instruction and provided evidence in favor of worked examples instruction rather than problem
solving practice (Zhu & Simon, 1987; Carroll, 1994; Ward & Sweller, 1990; Cooper & Sweller,
1987). As Zhu and Simon (1987) pointed out, worked examples can be an appropriate and
acceptable substitute for conventional classroom instruction.
According to Gagne (1977), “educational programs have the important ultimate purpose
of teaching students to solve problems – mathematical and physical problems, health problems,
social problems, and problems of personal adjustment.” Some psychologists conclude that most
human learning engages problem-solving activities (Anderson, 1993). Problem solving is one of
the most significant competencies whether in job settings or in schools, and, as a result, teaching
and assessing problem solving become one of the most significant educational objectives (Mayer,
2002). As Mayer (2002) pointed out, teaching problem-solving transfer has become one of the
most critical educational objectives. As O'Neil (1999) pointed out, problem-solving skill is a critical
competency requirement for college students and employees. The National Center for Research
on Evaluation, Standards, and Student Testing (CRESST) has conducted much research on
problem solving (O’Neil, 2002; O'Neil, Ni, Baker, & Wittrock, 2002; O’Neil & Herl, 1998;
O’Neil, Baker, & Fisher, 1998; Baker & O'Neil, 2002; Baker & Mayer, 1999). The CRESST
adapts the problem-solving models of Baxter, Elder, and Glaser (1996), Glaser, Raghavan, and
Baxter (1992), Mayer and Wittrock (1996), and Sugrue (1995). The CRESST model of
problem-solving consists of three components: (a) content understanding, (b) problem-solving
strategies, and (c) self-regulation (O’Neil, 1999; Baker & Mayer, 1999).
Purpose of the Study
The main purpose of this study is to examine the effectiveness of worked examples on
problem solving in a game-based environment. The researcher will use the problem-solving
assessment model developed by the National Center for Research on Evaluation, Standards, and
Student Testing (CRESST) to measure the three components of problem solving skill, which are
content understanding, problem-solving strategies, and self-regulation (Herl, O’Neil, Chung, &
Schacter, 1999; Mayer, 2002; Baker & Mayer, 1999). The dissertation will consist of a pilot and
a main study. The purpose of the pilot study is to establish the feasibility of the computer
game program, the format of the worked examples, and the assessment tools for problem-solving
skill.
Significance of the Study
There are very few studies comparing learning from worked examples only with
learning from problem solving. Instead, Sweller and his colleagues (e.g., Mwangi & Sweller,
1998; Sweller & Cooper, 1985) have conducted several classic studies consisting of examples
followed by similar problems (example-problem pairs). Studies on worked examples conducted
by other researchers (e.g., Renkl, 1997) have focused on learning from examples only (Renkl,
Atkinson, Maier, & Staley, 2002). In addition, although there are many worked-examples studies in the
fields of mathematics, computer programming, and physics (Tarmizi & Sweller, 1988; Sweller,
Chandler, Tierney, & Cooper, 1990; Chi, Bassok, Lewis, Reimann, & Glaser, 1989; Ward &
Sweller, 1990), no study has investigated the effectiveness of worked examples on a
game-based problem-solving task.
CHAPTER II
LITERATURE REVIEW
Relevant Studies
Games and Simulations
Gredler (1996) defined games as consisting of “rules that describe allowable player
moves, game constraints and privileges (such as ways of earning extra turns), and penalties for
illegal (nonpermissable) actions. Further, the rules may be imaginative in that they need not
relate to real-world events.”
As Christopher and Smith (1999) pointed out, a simulation game has four major elements: (a)
settings, (b) roles, (c) rules, and (d) scoring, recording, or monitoring. Driskell and Dwyer (1984)
define a game as a rule-governed, goal-focused, microworld-driven activity incorporating
principles of gaming and computer-assisted instruction. On the other hand, Gredler (1996)
describes a game as an environment with allowable player moves, constraints and privileges, and
penalties for illegal actions. In addition, the rules of games do not have to follow those of real life
and can be imaginative.
Theories of Games and Simulations
In learning by doing, in which students work on realistic tasks, a major instructional issue
within simulation environments is the proper type of guidance (Mayer, Mautone, &
Prothero, 2002). Researchers point out that simulations and games are widely accepted as a
powerful alternative to traditional ways of teaching and learning because they facilitate
learning by doing (Mayer et al., 2002; Rosenorn & Kofoed, 1998; Schank, 1999). In addition,
problem-solving skill may be effectively improved by computer games (Mayer, 2002). The
potential of learning by doing, such as using simulations and games, has been highlighted by
educational reformers as an alternative to learning by being told, in which students listen to what
teachers have to say (Schank, 1999).
Adams (1998) points out that a game engages learners’ visual and auditory senses and
provides flexibility in learning, which makes it an attractive tool for teaching and learning from
the perspectives of constructivism (Amory, 2001) and dual-coding theory (Mayer & Sims,
1998). However, students perform differently when different scaffolding is provided in
simulations and games (Mayer, 2002). For example, students learn better from a computer-based
geology simulation when they are given some support in how to visualize geological features:
the worst-performing group received the least support beyond basic instructions, and the
best-performing group received the most support (Mayer et al., 2002).
Training Effectiveness of Games
As Rieber (1996) pointed out, play performs an important role in psychological,
social, and intellectual development. Play is not the opposite of work and seems to be an acceptable
method of learning (Blanchard & Cheska, 1985). A study by Betz (1995) shows that computer
games facilitate learning through visualization, experimentation, and the creativity of play. Also,
Huntington (1984) indicates that computer games usually include problems that enhance critical
thinking. This comes from the analysis and evaluation of information in computer games in order
to decide on logical steps toward a goal (Huntington, 1984). Computer games have been used in
many different industries, such as academic (Adams, 2000), business (Adams, 1998; Faria, 1998;
Lane, 1995; Washbush & Gosen, 2001), military (O’Neil & Andrews, 2000; Chambers, Sherlock,
& Kucik, 2002), and medical (Ruben, 1999).
Quinn (1991, 1996) indicates that computer games, especially adventure games, are effective
tools for problem-solving training. O’Neil and Fisher (2002) suggest that computer games
have four major characteristics: (a) complex and diverse approaches to learning processes and
outcomes; (b) interactivity; (c) ability to address cognitive as well as affective learning issues; and,
most importantly, (d) motivation for learning (p. 6).
As O’Neil and Fisher (2002) pointed out, the effectiveness of instructional games falls
into five major categories: promotion of motivation, enhancement of thinking skills,
facilitation of metacognition, improvement of knowledge and skills, and building of attitudes.
Promotion of Motivation
Play performs important roles in psychological, social, and intellectual development
(Quinn, 1994; Rieber, 1996). As Quinn (1994) defined it, play is a voluntary activity that is
intrinsically motivating. Motivation can be viewed from two aspects: intrinsic and extrinsic.
Intrinsic motivation refers to behaviors that people engage in for their
own internal reasons, such as joy and satisfaction (Woolfolk, 2001). On the other hand, extrinsic
motivation refers to behaviors that are engaged in for external reasons, such as obligation and reward
(Deci, Vallerand, Pelletier, & Ryan, 1991). Although McKee (1992) and Billen (1993) argue that
games affect cognitive functions and motivation and take players away from the real world, Thomas
and Macredie (1994) indicate that games appear to motivate players intrinsically by evoking
curiosity. This motivation may be due to the challenge, elements of fantasy, novelty, and
complexity of games (Malone, 1980, 1981, 1984; Carroll, 1982; Malone & Lepper, 1987;
Rivers, 1990).
As Woolfolk (2001) defined it, motivation is an internal state that arouses, directs,
and maintains a person’s behavior. Quinn (1994, 1997) indicates that games need to combine fun
factors with aspects of instructional design and systems that include motivational, learning, and
interactive elements in order to benefit educational practice and learning. In addition, learning that is
fun seems to be more effective (Lepper & Cordova, 1992). As Bisson and Luckner (1996) say:
The role that fun plays with regard to intrinsic motivation in education is twofold. First
intrinsic motivation promotes the desire for recurrence of the experience….Second, fun
can motivate learners to engage themselves in activities with which they have little or no
previous experience.
Amory, Naicker, Vincent, and Adams (1999) indicate that playing computer games
intrinsically motivated the college students participating in their study. Furthermore, they
conclude that, for educators, learning materials based on adventure games could
be superior tools for drawing learners into environments where knowledge is acquired through
intrinsic motivation (Amory, Naicker, Vincent, & Adams, 1999). Ehman and Glenn (1991)
indicate that simulations can increase students’ motivation, intellectual curiosity, sense of
personal control, and perseverance. A study conducted by Ricci, Salas, and Cannon-Bowers
(1996) shows a significant positive correlation between the level of students’ enjoyment during
training with computer games and their test scores.
Malone (1981) points out that intrinsic motivation is an important factor in problem solving.
According to Dawes and Dumbleton (2001), computer games as an instructional tool facilitate both
intrinsic and extrinsic motivation in students.
There is considerable empirical evidence that motivation has a positive influence
on performance (Clark, 1998; Emmons, 2000; Ponsford & Lapadat, 2001; Rieber, 1996; Urdan &
Midgley, 2001; Ziegler & Heller, 2000; Berson, 1996). Computer games have been used in many
different settings, such as the military (O’Neil & Andrews, 2000) and schools (Adams, 1998;
Amory, 2001; Amory, Naicker, Vincent, & Adams, 1999; Barnett, Vitaglione, Harper,
Quackenbush, Steadman, & Valdez, 1997), for instructional purposes, because computer games
can provide diversity, interactivity, and motivation for learning (O’Neil & Fisher, 2002).
It is a common belief that motivation affects performance. In school, especially,
motivation has been considered basic for making various pedagogical decisions, and using
rewards to motivate school children has become a common practice. Children's interest in
exploration and learning starts at an early age but begins to fade as these children progress
through the grades. According to Lepper and Hodell (1988), "Motivation can become a problem
for many students in our educational system." They addressed the inability of school systems to
enhance and maintain motivation. It is a normal practice for schoolchildren to be extrinsically
rewarded with stars and ribbons, with forms of public recognition (stapling good work on
bulletin boards), and with a variety of other interventions primarily aimed at controlling behavior.
Westrom and Shaban (1992) conducted a study examining intrinsic motivation
in computer games. The intrinsic motivational effects of an instructional computer game (Mission:
Algebra) were compared with those of a non-instructional computer game (Lode Runner). The
study examined the Challenge, Curiosity, Control, and Fantasy aspects of the two games as factors
of intrinsic motivation. It also examined differences in intrinsic motivation between boys and
girls, among players with different levels of Perceived Creativity (Williams, 1980), and among
subgroups formed by these factors. At the outset, motivation for the non-instructional game was
higher than for the instructional game and consisted mainly of Challenge and Curiosity, but it
dropped significantly as students gained experience. Motivation for the instructional game, on the
other hand, did not drop with experience but increased marginally and consisted mainly of the
factor labeled Control. There were no significant differences between boys and girls in their level
of motivation. There were no significant interactions with levels of Perceived Creativity. Each of
the Challenge, Curiosity, Control, and Fantasy factors varied in ways that seemed reasonable and
contributed to students' overall expression of motivation.
Anderson and Lawton (1992b) have documented that 92.5% of instructors using total
enterprise (TE) simulations in college capstone courses grade on simulation performance. It
seems axiomatic that people who perform best in a game have learned what the game has to
teach in addition to being able to apply previously gained business knowledge to the situation
posed by the simulation. However, alternative explanations for performance are possible. If
researchers can discover why some learn more than others from the simulation experience,
teachers could use the information to enhance or supplement their learning environments with
other materials or pedagogical methods.
Washbush and Gosen (2001) conducted a series of exploratory studies dealing with
learning in total enterprise simulations. These studies had three purposes: (a) to examine the
validity of simulations as learning tools, (b) to measure any relationships between learning about
the simulation and economic performance in the game, and (c) to discover if some players learn
more than others from the same business gaming experience. For this research, they
hypothesized that seven types of variables might help explain why some students learn more than
others: (a) academic skill, (b) attitudes toward the simulation, (c) cohesion, (d) goals, (e)
motivation, (f) organization, and (g) struggle. These variables were selected for a variety of
reasons, including common sense; because educational, management, or simulation scholars
have suggested they influence learning; because of observations by the researchers themselves;
and because of previous research. In some cases, variables were chosen because they have either
been predicted or found to influence performance in a simulation and might also be significant
predictors of learning in the simulation. Motivation was chosen because it has been found to
affect performance in academic (Sjoberg, 1984) and in simulation (Gosenpud & Washbush,
1996b) environments. To pursue the research purposes, 11 studies were conducted between the
spring of 1992 and the fall of 1997. All participants were undergraduate students enrolled in a
required administrative policy course at the University of Wisconsin–Whitewater. The
simulation used was MICROMATIC (1992), a moderately complex top-management game. With
two exceptions, students played the simulation in teams of 2 to 4 members, with the vast
majority in 3-member teams. Learning was measured using parallel forms of a multiple-choice,
short-essay examination. The results showed that learning occurred from simulation play but did
not vary with performance. Twenty-four indices of motivation were measured, twice in each of
three data sets. The results revealed that eight individual measures of motivation predicted learning,
but six of these relationships were negative. The researchers therefore concluded that there is no relation
between simulation-based learning and motivation.
Clifford (1988) conducted a study in mathematics, spelling, and vocabulary learning for
grade school students and found that students at all grade levels choose problems considerably
below their skill levels. Moreover, Clifford found that low risk taking tends to increase with
grade level. She found that knowledge of correct responses (i.e., immediate item feedback) and
variable payoffs (e.g., difficult items are assigned greater worth than easier items) encouraged
students to choose the more difficult, higher payoff problems, which increased failure tolerance
and risk taking. Another study, by Whitehill and McDonald (1993),
investigated the effect of motivational variables on task persistence and performance in Navy
technical training. The subjects who participated in this study were 88 male and 6 female enlisted
personnel attending a basic electricity and electronics training program at the Service School
Command, Naval Training Center, San Diego, California. The subjects' level of education ranged
from the ninth grade through college and their ages ranged from 17 to 33 years. Studies of
motivation in learning that manipulate both feedback and the levels of difficulty of problems
show increases in the amount learned, level of difficulty of problems chosen, and persistence.
The motivational variables consisted of a context (either a game or a drill) and payoff points that
were either fixed or variable. This resulted in four groups: (a) game context with fixed payoff, (b)
game context with variable payoff, (c) drill context with fixed payoff, and (d) drill context with
variable payoff. In all experimental conditions, students were asked to solve 10 circuit problems.
In the game context, students simulated the role of an electrician repairing circuits. Students
moved a cursor around a maze representing a Navy ship’s floor plan to locate and repair faulty
circuits to prevent the ship from sinking. If water started to rise in the ship, students knew they
needed to improve their performance. In the drill context, students were presented with the same
problems one at a time without the simulated game embellishments. In the fixed-payoff
condition, each problem was worth a fixed number of points, regardless of the difficulty level
selected by students. For each problem in the variable-payoff condition, points varied according
to the difficulty level, types of help, and quits selected. Dependent variables were accuracy,
persistence, time, level of difficulty, attempts, helps, and quits selected. The results show that a
task-based simulation game combined with a variable payoff increased student persistence in
selecting high levels of difficulty and induced a conservative behavior in relying on feedback
helps. Further, the results were in the direction of improved performance for the game with variable
payoff. Apparently, level of difficulty became the more motivating factor. These findings
suggest that motivation plays a substantial role in the computer-based program for learning.
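To make the payoff manipulation concrete, the following sketch contrasts the two scoring schemes in code. This is a minimal illustration only: the point values, penalty weights, and function names are hypothetical and are not figures reported by Whitehill and McDonald (1993).

    # Illustrative sketch of the fixed vs. variable payoff conditions.
    # All point values and weights are hypothetical, not taken from the study.

    FIXED_POINTS = 10  # fixed-payoff condition: every problem is worth the same

    def fixed_payoff() -> int:
        """Award a constant number of points, regardless of difficulty."""
        return FIXED_POINTS

    def variable_payoff(difficulty: int, helps_used: int, quit_early: bool) -> int:
        """Scale points with the chosen difficulty level, reduce them for
        help requests, and forfeit them on a quit."""
        if quit_early:
            return 0
        base = 5 * difficulty     # harder problems are worth more
        penalty = 2 * helps_used  # each feedback help costs points
        return max(base - penalty, 0)

    # A student choosing a hard problem (level 4) and using one help:
    print(fixed_payoff())                # 10
    print(variable_payoff(4, 1, False))  # 18: choosing difficulty is rewarded

Under the variable scheme, selecting harder problems raises the attainable payoff, which is consistent with the reported finding that students in that condition persisted in selecting high levels of difficulty.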
Enhancement of Thinking Skills
A study conducted by Henderson, Klemes, and Eshet (2000) investigated whether young
students internalized content and concepts embedded in a computer simulation, as opposed to
treating it as merely a game to be played. This study reported changes in Grade Two students'
cognitive outcomes and processes after learning with the software, integrated within a thematic
curriculum in a classroom, over a period of six weeks. The researchers examined six aspects of cognitive
capability: (a) identification, (b) logical sequencing, (c) scientific and logical
classification, (d) inference, (e) transfer, and (f) internalization of content and use of the
scientific method of language. Results indicate improvement in various thinking skills and
strategies, from basic recall to higher-level skills such as classification and inference, as well
as in the children's usage of scientific language. Transfer occurred but was not significant,
thereby emphasizing the importance of providing numerous practice opportunities instead of relying on the
software to teach higher-order cognitive skills. They suggested that daily usage and a flexible
paired working environment with the computer were pedagogical variables in the cognitive
outcomes.
The development of higher-order cognitive skills such as decision-making is a critical
component of science education. Taylor, Renshaw, and Jensen (1997) conducted two studies
assessing decision-making skills using common cognitive errors and evaluating
the impact of computer-based laboratories on the development of these skills. The first study
established the prevalence of cognitive errors among high school students, undergraduates, and
Earth Science professionals. The second study examined the role of computer-aided instruction
(CAI) in the Earth Science domain on subsequent decision making. High school students took
part in either a computer-based or an equivalent paper-and-pencil role-playing exercise
requiring them to evaluate the possible eruption of a volcano. In conclusion, the students
who used the computer exercise made more consistent decisions than those who used the
traditional paper-and-pencil exercise, suggesting that well-designed computer-based laboratories
can positively affect higher-order cognitive skills.
Another study, by Logie, Baddeley, Mane, Donchin, and Sheptak (1989),
reported three experiments using the secondary-task methodology of working memory in a
task analysis of a complex computer game, Space Fortress. Unlike traditional studies of
working memory, the primary task relies on perceptual-motor skills and accurate timing of
responses as well as short- and long-term strategic decisions. In experiment 1, highly trained
game performance was affected by the requirement to generate concurrent, paced responses and
by concurrent loads on working memory, but not by the requirement to produce a vocal or a
tapping response to a secondary stimulus. In experiment 2, expert performance was substantially
affected by secondary tasks, which had high visual-spatial or verbal cognitive processing loads,
but was not contingent upon the nature (verbal or visual-spatial) of the processing requirement.
In experiment 3, subjects were tested on dual-task performance after only 3 hours practice on
Space Fortress, and again after a further five hours practice on the game. Early in training, paced
generation of responses had very little effect on game performance. Game performance was
affected by general working memory load, but an analysis of component measures showed that a
wider range and rather different aspects of performance were disrupted by a visual-spatial
memory load than were affected by a secondary verbal load. With further training this pattern
changed such that the differential nature of the disruption by a secondary visual-spatial task was
much reduced. Also, paced generation of responses had a small effect on game performance.
However, the disruption was not as dramatic as that shown for expert players. Subjective ratings
of task difficulty were poor predictors of performance in all of the three experiments. These
results suggested that general working memory load was an important aspect of performance at
all levels of training. The greater disruption by paced responses in experts was interpreted as
suggesting that response timing is important for expert performance. The change with training in
the different interference from a visual-spatial versus a verbal secondary task was interpreted as
suggesting that perceptual-motor tracking control is an important and demanding aspect of
novice performance but that it is a highly automated skill in the performance of experts.
A study by Ricci, Salas, and Cannon-Bowers (1996) investigated the
effects of a gaming approach on knowledge acquisition and retention in military trainees. Three
groups of trainees were presented with subject matter in paper-based prose form (text),
paper-based question-and-answer form (test), or computer-based gaming form (game). These
conditions were selected to investigate potential benefits of computer-based gaming over
traditional paper-and-pencil media in terms of trainee performance and reaction. The results
showed that participants assigned to the game condition scored significantly higher on a
retention test compared to pretest performance. Furthermore, participants assigned to the game
condition scored significantly higher on a retention test than did participants assigned to the text
condition. Participants assigned to the test and text conditions showed no benefit from training in
performance at the retention test. In addition, participants assigned to the game condition rated
the training they received as more enjoyable and more effective than did those assigned to the
other two conditions. These results provide evidence that computer-based gaming can enhance
learning and retention of knowledge.
Playing video games requires concentration, memory, coordination, and quick reactions.
To the extent that they strengthen these skills, video games have potential benefits for the elderly
(Hollander & Plummer, 1986; Schueren, 1986; Weisman, 1983). They may affect attention,
hand-eye coordination, fine motor skills, short-term memory, problem solving, and speeded
reactions to novel situations. Cognitive effects of video games have not been consistently
obtained in studies of elderly players. Improved knowledge acquisition and retention among
videogame-playing adults was reported by Ricci, Salas, and Cannon-Bowers (1996), while Drew
and Waters (1986) found higher WAIS IQs among the elderly after they played video games for
8-9 hours over a two-month period. However, Dustman et al. (1992) found no effects of play on
any of six tests of free recall, cued recall, and recognition memory for word lists. As in other
research, Dustman et al. reported a significant improvement in reaction time, but no other effects
of video game play. Clark et al. (1987) attribute the reaction-time improvements in their study with the
elderly to different information processing strategies used by those who played video games. For
example, one strategy that improves response selection is to store stimuli in a short-term memory
buffer, where retrieval time is quicker. Birren (1995) also attributes reaction time findings to
enhanced information retrieval rather than quicker reflexes. Another study, by
Goldstein, Cajko, Oosterbroek, Michielsen, Houten, and Salverda (1997), examined the
effects of playing video games (Super Tetris) on the reaction time, cognitive/perceptual
adaptability, and emotional well-being of 22 non-institutionalized elderly people aged 69 to 90.
Volunteers in an elderly community in the Netherlands were randomly assigned to a
videogame-playing experimental group or a non-playing control group. The televisions of the 10
videogame players were provided with Nintendo Super NES systems. Participants played Super
Tetris 5 hours a week for 5 weeks and maintained a log of their play. Before and after this play
period, measures of reaction time (Sternberg Test), cognitive/perceptual adaptability (Stroop Color
Word Test), and emotional well-being (self-report questionnaire) were administered. In
conclusion, playing video games was related to a significant improvement on the Sternberg
reaction-time task and to a relative increase in self-reported well-being. On the Stroop Color
Word Test, both the experimental and control groups improved significantly, but the difference
between groups was not statistically significant. The videogame-playing group had faster
reaction times and felt a more positive sense of well-being compared to their non-playing
counterparts. In general, these results suggested that playing video games for 25 hours is related
to improved reaction time and, relative to a non-playing control group, a more positive sense of
well-being. Videogame play was not associated with improved cognitive/perceptual agility as
measured by the Stroop Color Word Test.
Facilitation of Metacognition
O’Neil and Abedi (1996) suggest that metacognition includes planning and
self-checking and that it enables people to utilize various strategies to accomplish a goal. As
Woolfolk (2001) defined it, metacognition is knowledge about our own thinking processes, which
includes three kinds of knowledge: (a) declarative knowledge about strategies to learn, to
memorize, and to perform well; (b) procedural knowledge about how to use the strategies; and (c)
conditional knowledge about when and why to apply the former two kinds of knowledge.
According to Pirolli and Recker (1994), metacognition could facilitate skill acquisition.
Computer games have the potential benefit of enhancing metacognitive skills (Bruning et
al., 1999; O’Neil & Fisher, 2002). Pillay, Brownlee, and Wilss (1999) found that game playing
offers players an opportunity to apply metacognitive skills: checking their own actions,
activating their schemata, finding relations and connections, and forming hypotheses. The
researchers concluded that the frequent monitoring of thinking by game players is an application
of a metacognitive approach (Pillay, Brownlee, & Wilss, 1999).
Improvement of Knowledge
A study by Shapiro and Raymond (1989) investigated the hypothesis that
efficient oculomotor behaviors can be acquired through practice on a series of simple tasks and
can be transferred subsequently to a complex visual-motor task, such as a video game. Each of
two groups of subjects was exposed to a different set of simple tasks, or drills. One group, the
efficient eye movement experimental group, received training designed to minimize eye
movements and optimize scan path behaviors, whereas a second group of subjects, the inefficient
eye movement experimental group, received training designed to increase the frequency of eye
movements. Oculomotor training was interspersed with practice on the video game, Space
Fortress. Performance of these two experimental groups in the video game was compared to a
control group playing the video game but receiving no specific training and matched for total
time in the experiment. Overall there was a significant inverse correlation between the number of
foveations in the game and game score. The most important conclusion to draw from the results
of this experiment concerns the influence of the two distinct oculomotor drills on canonical game
score. The group receiving oculomotor training for the purpose of promoting efficient eye
movement behaviors exhibited a significant 30 percent higher asymptotic game score than either
the inefficient or control groups, which did not differ from each other.
Another study, by Fabiani, Buckley, Gratton, Coles, and Donchin (1989),
sought to establish the rules that govern the effectiveness of part-task training in the learning of
complex perceptual-motor tasks such as the Space Fortress game. They compared two part-task
training regimes with a control training regime based on whole-task practice. In previous
research, the part-task training regimes had been shown to improve subjects' performance. They
compared the rate of learning and the final performance of subjects trained with these regimes, as
well as the extent to which the acquired skills were susceptible to interference by a battery of
concurrently performed tasks. Thirty-three subjects participated in the study. The quality of
subject performance varied with the training regime. The best performance was achieved by
subjects trained with the hierarchical approach, which devotes part of the training time to
practice on a series of sub-tasks presumed to develop the elements of the subjects' optimal
strategy. Subjects trained with the integrated approach were continually exposed to the whole
game, while components of the game were emphasized by means of instructions and feedback.
The data presented in this paper can be summarized as follows. First, on most performance
variables, subjects trained with either of the part-task training methods achieved higher scores
than did subjects trained on the whole task. In addition, both part-task training methods led
subjects to develop strategies that were qualitatively different from those developed by subjects
in the control group. There were also differences between the two part-task training groups. In
particular, the hierarchical group achieved superior performance when the game was performed
alone. The hierarchical group also developed a slower and more controlled circular flight pattern
than did the subjects in the integrated group. On the other hand, the integrated group's
performance was more resistant to disruption by concurrently performed secondary tasks. Finally,
the training regime and the initial skills of the subject as measured by the aiming screening task
interacted in determining the effectiveness of training. Subjects who scored high in the screening
task taken before training began did well regardless of the part-task training method to which
they were subjected. However, the method of training did make a difference for subjects with
low and medium screening scores. The hierarchical method was particularly beneficial, and the
integrated method particularly detrimental, for those subjects who scored poorly on the screening
task. These integrated subjects obtained lower scores than the subjects did in the hierarchical
group, although they were superior to control subjects. Thus, part-task training was superior to
whole-task training.
The benefits of two instructional strategies, adaptive training and part-task training, in teaching
complex perceptual-motor skills were evaluated in another study, by Mane, Adams, and
Donchin (1989). One method was part-task training, in which a subject practiced
essential subtasks before performing the whole task. The other was adaptive training,
in which the time pressure of the task was continually adjusted to the subject's
performance level until performance was sufficient to handle the ultimate task difficulty. The
task was a computer-controlled video game developed for research purposes. The task was
challenging, and proficiency could be achieved only through a significant amount of practice. The
results showed the part-task training regime to be superior to all others. The two adaptive training
regimes brought mixed results, with one group superior and one equal to a control group.
Negative transfer from the slow to the fast version of the game was evident and may be the
reason for the lack of a clear advantage for the adaptive regimes. This study indicated that part-task
training was the better choice for complex perceptual-motor tasks such as computer
games.
Gopher, Weil, and Siegel (1989) examined practice
effects on complex tasks such as computer games. Their paper discussed the theoretical
foundations and methodological rationale of an approach to the training of complex skills.
approach is based on the introduction of multiple emphasis changes on subcomponents of a
complex task. The approach links together arguments from models of skills and theories of
attention. It is construed that the core of expert performance is an organized set of response
strategies that can be employed flexibly to meet task demands. Strategies can be constructed
through emphasis change. This approach was applied to the training of 4 groups of players in a
highly demanding computer game. Subjects trained under emphasis manipulations performed
significantly better than control players who played the game for the same duration without
instruction. Moreover, the advantage of the trained groups continued to increase after formal
training had been terminated. Low-ability subjects benefited more than high-ability subjects from
multiple emphasis changes. The results indicated that complex skills can be enhanced by
organized practice.
Foss, Fabiani, Mane, and Donchin (1989) conducted another study, examining the effect of
unsupervised practice on the computer game Space Fortress. A control group of 40 subjects practiced
the Space Fortress game for 10 one-hour sessions. They were given
standard game instructions, but were not aided in their training in any other fashion. Subjects in
this group showed a general improvement, throughout training, in the total game score as well as
in many other aspects of game performance. However, individual differences were found in the
subjects' initial capabilities, in their rate of learning and in the strategies they adopted to achieve
their final performance. In order to summarize the many aspects of this complex database, two
multivariate techniques were used: Three-Mode Principal Component Analysis and Cluster
Analysis. These techniques proved useful in that they provided a coherent and relatively simple
description of the subjects' behavior. The model derived from these multivariate procedures was
applied to an independent group of subjects. This cross-validation accounted for some of the
differences observed between the two groups. The results indicated that organized practice is
better than unsupervised practice on computer game tasks.
Greenfield, Brannon, and Lohr (1994) conducted two studies testing
whether video games could contribute to the development of the spatial representational skills
required for humans to interface effectively with computer technology. The authors studied 82
undergraduates to assess the relationship between expertise in a 3-dimensional action arcade
video game and the skills of dynamic 3-dimensional spatial representation (SR), as assessed in a
mental paper-folding test. Study 1 established a correlation between video game expertise and
skill in SR. Study 2 established a causal relationship between video game skill and spatial skill
through an experimental paradigm. The results showed that short-term video game practice had
no effect on mental paper folding. However, structural equation modeling provided strong
evidence that video game expertise, developed over the long-term, had a beneficial effect on the
spatial skill of mental paper folding. That is, long-term video game expertise can predict
spatial skills.
Shebilske, Regian, Arthur, and Jordan (1992) conducted another study,
testing a dyadic training protocol derived from cognitive and social theories of complex skill
acquisition. Forty undergraduates practiced the computer game Space Fortress for 10 sessions
of eight practices and two test games. Half of them practiced and tested alone; the others had
identical tests but dyadic practice, in which they controlled part of each practice while being
interlocked with a partner who controlled the rest. Subjects practiced both parts and their
connections by alternating roles and by modeling their partners. Trainer time and resources were
halved for the dyadic group, and performance was equivalent. This study indicated that
cooperative learning in a computer-based setting was better than individual learning.
Building Attitudes
Attitudes are commonly viewed as summary evaluations of objects (e.g., oneself, other
people, issues, etc.) along a dimension ranging from positive to negative (Petty, Priester, &
Wegener, 1994). For the evaluation of attitudes toward computer games, the Computer Game
Attitude Scale (CGAS), which evaluates student attitudes toward educational computer games,
has been created to assist computer game designers and teachers in the evaluation of educational
software games. Chappell and Taylor (1997) further found the evidence of its reliability and
validity. In two studies, conducted by Westbrook and Braithwaite (2001) and Amory et al.
(1999), learner attitudes were measured with questionnaires. Comparing pre- and
post-questionnaires, Westbrook and Braithwaite (2001) found that learners' interest in the health care
system was significantly enhanced after completing the game.
In the study conducted by Adams (1998), the participants' attitudes were measured by
open-ended questions and were found to have changed positively and significantly. For example,
students' answers to the questions revealed that their interest in, appreciation of, and respect for urban
planning and planners were promoted. Adams (1998) indicates that the most important learning
associated with using computer games is not the learning of facts but rather the development of
certain attitudes acquired through interaction with software. In addition, Wellington and Faria
(1996) found that when playing LAPTOP, a marketing simulation game specifically designed for
use in introductory marketing courses, participants’ attitudes were changed significantly.
Design of Games and Simulations
Creation of a sophisticated educational game requires many different activities (Amory,
2001). These activities include developing the resources, creating and authoring software, and
applying appropriate educational theories. The major components in game design are purpose,
reality, timeline, feedback, decisions, participants, role, and learning objectives (Martin, 2000).
In addition, Gredler (1996) suggests that the design of an effective instructional game or simulation
should attend to three elements: students' prior knowledge, the complexity of the problem
solving, and a structure that can facilitate students' knowledge and skills.
As Quinn (1994) pointed out, game design based on educational theories involves many
different aspects, such as elements of fun, the steps for instructional game design, and system-
versus user-centered design. Building on the research by Quinn (1994), Amory,
Naicker, Vincent, and Adams (1999) developed the Game Object Model. This model tries to
build a relationship between educational dimensions and game elements.
Summary
Play performs an important role in psychological, social, and intellectual
development. Training with computer games is an important activity in many industries.
A game can be defined as a rule-governed, goal-focused, microworld-driven activity
incorporating principles of gaming and computer-assisted instruction. A simulation game is a
game that has four major elements: settings, roles, rules, and scoring, recording, or
monitoring.
Researchers point out that simulations and games are widely accepted as a powerful
alternative to traditional ways of teaching and learning, with the merits of facilitating learning by
doing. The potential of learning by doing, such as using simulations and games, has been highlighted
by educational reformers as an alternative to learning by being told, in which students listen to
what teachers have to say. This approach is also called experience-based learning theory.
Creation of a sophisticated educational game requires many different activities.
Designing an effective instructional game or simulation should attend to three elements:
students' prior knowledge, the complexity of the problem solving, and a structure that can
facilitate students' knowledge and skills. In addition, the major components in game design are:
(a) purpose, (b) reality, (c) timeline, (d) feedback, (e) decisions, (f) participants, (g) role, and (h)
learning objectives. The development of an instructional game is composed of three perspectives:
(a) the research it is based on, (b) the development of resources, and (c) the
development of software. There are four other essential factors to consider in order to design
effective instructional games and simulations: (a) a structure designed to reinforce learners'
objective knowledge and skills, (b) learners' prior knowledge, (c) the complexity of the problem
solving, and (d) the types and amount of guidance given to learners.
Computer games have been used for learning and training in many different fields, such
as academic, business, military, and medical settings. The effects of instructional games and simulations
can be generally divided into five categories: (a) promotion of motivation, (b) enhancement of
thinking skills, (c) facilitation of metacognition, (d) enhancement of knowledge, and (e) building
of attitudes. The first is the promotion of motivation: playing computer games can
enhance learning motivation and self-regulated learning. The second category is the
enhancement of thinking skills. Thinking skills consist of information processing, reasoning,
enquiry, creative thinking, learning strategies, and evaluation skills. The third category is the
facilitation of metacognition, which includes planning and self-monitoring. The fourth category
is the enhancement of knowledge; previous research shows that computer games are
effective in facilitating knowledge acquisition, retention, and transfer. The fifth category of
training effectiveness of games is the building of attitudes, which can positively influence the
results of learning.
Problem Solving
According to Gagne (1977), “educational programs have the important ultimate purpose
of teaching students to solve problems – mathematical and physical problems, health problems,
social problems, and problems of personal adjustment.” Psychologists and educators have been
interested in problem solving for the last century (Bruning, Schraw, & Ronning, 1999). The first
experiment on problem solving was conducted by a psychologist, E. L. Thorndike (1911), who
observed the behaviors of cats escaping from a cage by pressing a lever. He concluded that most
problem-solving activities involve trial-and-error behaviors that lead to the solutions. However,
Dewey (1910) argued that problem solving is more like a conscious, deliberate process governed
by a naturally occurring sequence of steps. A third view comes from the Gestalt psychologists, who
observed the behaviors of chimpanzees using tools to reach bananas. They suggest that problem
solving is a complex cognitive process rather than simple trial-and-error behavior (Bruning et al.,
1999). As Woolfolk (2001) defined it, a problem includes: (a) a current situation, (b) a desired
outcome, and (c) a path for achieving the goal. In addition, problems can be categorized as
well-structured or ill-structured, depending on how much structure is presented
for problem solving and how clear the goal is (Woolfolk, 2001), or as routine or nonroutine
(Mayer, 1995). As Mayer (1990, 1996) and Simon (1973) defined it, problem solving is
cognitive processing aimed at accomplishing a goal for which no obvious solution is available. According to this
definition, problem solving includes four characteristics, which are cognitive, process-based,
directed, and personal (Baker & Mayer, 1999). Some psychologists conclude that most human
learning engages problem-solving activities (Anderson, 1993). Some researchers (Derry, 1991;
Derry & Murphy, 1986; Gallini, 1991; Gick, 1986; Bransford & Stein, 1993) suggest that
problem solving involves a five-stage sequence: (a) identifying problems and opportunities, (b)
defining goals and representing the problem, (c) exploring possible strategies, (d) anticipating
outcomes and acting, and (e) evaluating the solutions. Baker
and Mayer (1999) further indicated that problem solving has four steps. The first step is
"problem translation," in which the problem solver identifies the information available in
the situation where the problem occurs and translates it into his or her mental model. The second step is
"problem integration," in which the problem solver puts the pieces of information together into a structure. The last
two steps of problem solving are "solution planning" and "solution execution": developing a
feasible plan and implementing it to solve the problem. The first two components constitute the
problem representation phase of problem solving, while the latter two constitute the
problem solution phase (Baker & Mayer, 1999).
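The phase structure of this four-step model can be summarized in a short sketch. The step and phase names follow Baker and Mayer (1999), but the mapping representation itself is only an illustration, not something the source prescribes.

    # Baker and Mayer's (1999) four steps of problem solving, grouped into
    # the two phases described above. The data structure is illustrative.

    from enum import Enum

    class Phase(Enum):
        REPRESENTATION = "problem representation"
        SOLUTION = "problem solution"

    STEPS = {
        "problem translation": Phase.REPRESENTATION,
        "problem integration": Phase.REPRESENTATION,
        "solution planning": Phase.SOLUTION,
        "solution execution": Phase.SOLUTION,
    }

    for step, phase in STEPS.items():
        print(f"{step} -> {phase.value}")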
The National Center for Research on Evaluation, Standards, and Student Testing
(CRESST) has conducted much research on problem solving (O’Neil, 2002; O'Neil, Ni, Baker,
& Wittrock, 2002; O’Neil & Herl, 1998; O’Neil, Baker, & Fisher, 1998; Baker & O'Neil, 2002;
Baker & Mayer, 1999). CRESST adapted the problem-solving models of Baxter, Elder, and
Glaser (1996), Glaser, Raghavan, and Baxter (1992), Mayer and Wittrock (1996), and Sugrue
(1995). The CRESST model of problem solving (Figure 1) consists of three components: (a)
content understanding, (b) problem-solving strategies, and (c) self-regulation (O’Neil, 1999;
Baker & Mayer, 1999).
[Figure 1 depicts a hierarchy: problem solving comprises content understanding,
problem-solving strategies (domain specific and domain independent), and self-regulation;
self-regulation comprises metacognition (planning and self-monitoring) and motivation (effort
and self-efficacy).]
Figure 1. National Center for Research on Evaluation, Standards, and Student Testing
(CRESST) model of problem solving.
According to O’Neil (1999) and Baker and Mayer (1999), content understanding is
the domain knowledge of the problem. Domain knowledge is defined as the realm of
knowledge that individuals have about a particular field of study (Alexander, 1992). Knowledge
domains typically are subject areas, such as mathematics, music, and psychology, but they can
also represent areas of activity, such as car driving, mechanics, and accounting (Bruning,
Schraw, & Ronning, 1999). Problem-solving strategies include domain-specific and
domain-independent strategies. Domain-specific problem-solving strategies are task-dependent
strategies unique to specific subjects, such as geometry, science, or geology, that require specific
content and procedural knowledge (O’Neil, 1999; Baker & Mayer, 1999). Domain-independent
problem-solving strategies are general strategies such as analogy and mental simulation.
Self-regulation consists of metacognition, which includes planning and self-monitoring (or
self-checking), and motivation, which includes effort and self-efficacy (O’Neil, 1999).
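To make the structure of the model concrete, the sketch below represents the CRESST
components as a small Python data structure. This is purely illustrative: the class and field names
are one reading of Figure 1, not part of any CRESST software.

    from dataclasses import dataclass, field

    @dataclass
    class Metacognition:
        planning: float = 0.0         # selecting strategies, allocating resources
        self_monitoring: float = 0.0  # self-checking while solving

    @dataclass
    class Motivation:
        effort: float = 0.0           # willingness to work at the task
        self_efficacy: float = 0.0    # belief in one's capability

    @dataclass
    class SelfRegulation:
        metacognition: Metacognition = field(default_factory=Metacognition)
        motivation: Motivation = field(default_factory=Motivation)

    @dataclass
    class ProblemSolvingProfile:
        """One learner's scores on the three CRESST components."""
        content_understanding: float = 0.0          # e.g., knowledge map score
        domain_specific_strategies: float = 0.0     # task-dependent strategies
        domain_independent_strategies: float = 0.0  # e.g., analogy, mental simulation
        self_regulation: SelfRegulation = field(default_factory=SelfRegulation)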
Significance of Problem Solving
Many researchers (e. g., Bransford, Brown, & Cocking, 2000; Bransford & Stein, 1993;
Jonassen, 1997) have emphasized the importance of engaging students in complex, ill-structured
problem-solving tasks, which are intended to help students see the meaningfulness and relevance
of what they learn and to facilitate transfer by contextualizing knowledge in authentic situations
(Ge & Land, 2003). But previous research has pointed to student deficiencies in problem solving,
for instance, a failure to apply knowledge from one context to another (Gick, 1986; Gick &
Holyoak, 1980), especially when solving ill-structured problems (Feltovich, Spiro, Coulson, &
Feltovich, 1996). Students' difficulties in problem solving have been attributed to both limited
domain and metacognitive knowledge (Brown, 1987).
Problem solving is one of the most significant competencies, whether in job settings or
in schools, and, as a result, teaching and assessing problem solving have become among the most
significant educational objectives (Mayer, 2002). As Mayer (2002) pointed out, teaching
problem-solving transfer has become one of the most critical educational objectives. As O'Neil
(1999) pointed out, problem-solving skill is a critical competency requirement for college
students and employees. Bassi, Benson, and Cheney (1996) indicate that there are three trends in
the workplace related to individuals' problem-solving skills: (a) fast response to technological
change will be a critical skill requirement, (b) the learning organization will be a major direction
of transformation for companies, and (c) high-performance work systems will be integrated. As
O'Neil (1999) pointed out, higher-order thinking skills are among the most important needs in
the workplace, and problem-solving skill is usually treated as a higher-order thinking skill.
Assessment of Problem Solving
Substantial previous research reveals the significance of problem solving across studies,
institutions, workforces, and tasks, and teaching for problem-solving transfer is therefore an
important educational objective. However, a valid and efficient assessment framework still needs
to be established, and the methods used to assess problem-solving skills still need to be refined
(Mayer, 2002; O’Neil & Fisher, 2002; O’Neil et al., 2002). For example, when teachers assess
students with tests of separate, unconnected multiple-choice questions, they are not accurately
assessing students’ problem-solving abilities, and traditional standardized tests do not tell
teachers or students which problem-solving and thinking processes should be emphasized.
Although useful measures of problem-solving competence, such as think-aloud protocols, can be
found in the cognitive science literature (e.g., Day, Arthur, & Gettman, 2001), those measures
are inefficient performance assessments, requiring extensive human
scoring and a great amount of time (O’Neil, 1999).
According to Baker and Mayer (1999), two aspects of problem-solving ability need to be
tested as a whole: retention and transfer. Retention involves what learners have retained or
remembered from what they have been presented, while transfer involves how much learners can
apply the learned knowledge or skills in a brand-new situation. Retention is tested with routine
problems, which learners have already learned to solve, and transfer is tested with non-routine
problems, which learners have not solved in the past (Mayer, 1998). According to these
researchers, the assessment of problem-solving transfer should be the current emphasis of
education, since learners need not only to memorize the materials but also to apply them in a
novel situation or in the real world. In addition, problem-solving ability may be assessed by
examining either the entire process while a task is being solved or the final outcome, for example
by contrasting expert and novice performance. Day et al. (2001) pointed out that the more
similar novices’ knowledge structures are to experts’ structures, the higher the level of novices’
skill acquisition.
The National Center for Research on Evaluation, Standards, and Student Testing
(CRESST) has developed a problem-solving assessment model composed of content
understanding, problem-solving strategies, and self-regulation, the three elements of
problem-solving ability.
Measurement of Content Understanding
Knowledge mapping has been applied as an effective tool to facilitate understanding of
complex subject matter (Herl et al., 1996; Herl et al., 1999), to promote critical thinking (West,
Pomeroy, Park, Gerstenberger, & Sandoval, 2000), to facilitate learning strategies (O’Neil, 1978;
O’Neil & Spielberger, 1979), and to assist learning in science (Stewart, 1982; Novak, 1983). In
addition, knowledge mapping has been used as a reliable and efficient assessment tool for
measuring content understanding (Herl et al., 1999; Ruiz-Primo, Schultz, & Shavelson, 1997).
As an assessment tool, knowledge mapping includes three elements: (a) a task that requires
participants to demonstrate their content understanding in a specific domain, (b) a format that
can record participants’ responses, and (c) a scoring system with reliability and validity
(Ruiz-Primo et al., 1997).
CRESST developed a computer-based knowledge mapping system for measuring
individual and team content understanding. The knowledge-mapping task was designed to
measure learners’ content understanding by asking them to construct semantic relationships
among key concepts and facts (Schacter, Herl, Chung, Dennis, & O’Neil, 1999). An example of
a concept mapping system is shown in Figures 2 and 3.
Mayer and Moreno (1998) assessed content understanding with retention and transfer
questions. In their study of the split-attention effect in multimedia learning, they gave
participants a retention test and a matching test containing questions designed to assess the
extent to which participants remembered the knowledge delivered by multimedia with animation
and narration, or animation and on-screen text.
An alternative way to measure content understanding is the knowledge map. Knowledge
maps have been used as an effective tool for learning complex subjects (Herl et al., 1996) and for
facilitating critical thinking (West, Pomeroy, Park, Gerstenberger, & Sandoval, 2000). Several
studies have also revealed that knowledge maps are not only useful for learning but also a
reliable and efficient measurement of content understanding (Herl et al., 1999; Ruiz-Primo,
Schultz, & Shavelson, 1997). For example, Ruiz-Primo et al. (1997) proposed a framework for
conceptualizing knowledge maps as a potential assessment tool in science. Students need to learn
how to locate, organize, and discriminate between concepts, and how to use stored information
to make decisions, solve problems, and continue their learning when formal instruction is no
longer provided.
A knowledge map is a structural representation that consists of nodes and links. Each
node represents a concept in the domain of knowledge. Each link, which connects two nodes,
is used to represent the relationship between them; that is, the relationship between the two
concepts. As Schau and Mattern (1997) point out, learners should not only be aware of the
concepts but also of the connections among them. A set of two nodes and their link is called a
proposition, which is the basic and the smallest unit in a knowledge map.
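To make this definition concrete, a knowledge map can be represented simply as a set of
propositions. The following minimal Python sketch assumes concepts and link labels are plain
strings; the names are illustrative and do not come from the CRESST software.

    from typing import NamedTuple, Set

    class Proposition(NamedTuple):
        """The smallest unit of a knowledge map: concept -> link -> concept."""
        source: str  # first concept (node)
        link: str    # labeled relationship
        target: str  # second concept (node)

    # A knowledge map is the set of propositions a learner constructed.
    KnowledgeMap = Set[Proposition]

    student_map: KnowledgeMap = {
        Proposition("key", "used for", "safe"),
        Proposition("safe", "requires", "key"),
        Proposition("catalog", "contains", "clue"),
    }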
Ruiz-Primo et al. (1997) claimed that, as an assessment tool, a knowledge map is a
combination of three components: (a) a task that allows a student to demonstrate his or her
content understanding in a specific domain, (b) a format for the student’s response, and (c) a
scoring system by which the student’s knowledge map can be accurately evaluated. Chuang
(2003) modified their framework to serve as an assessment specification using a concept map.
Researchers have successfully applied knowledge maps to measure students’ content
understanding in science, for both high school students and adults (e.g., Chuang, 2003; Herl et
al., 1999; Schacter et al., 1999; Schau et al., 2001). Schau et al. (2001) used select-and-fill-in
knowledge maps to measure secondary and postsecondary students’ content understanding of
science in two respective studies.
In the first study, students’ performance on the knowledge map correlated significantly with
their performance on a multiple-choice test, a traditional measure (r = .77 for eighth grade and
r = .74 for seventh grade), supporting the validity of the knowledge map as an assessment tool.
In the other study, Schau et al. compared the results of knowledge maps with both traditional
multiple-choice tests and relatedness ratings.
Further, the mean map score increased significantly, from 30% correct at the beginning of the
semester (SD = 11%) to 50% correct at the end (SD = 19%). Finally, the correlation between
knowledge map scores and multiple-choice test scores, and the correlation between concept
scores and relatedness ratings, were both high.
Recently, CRESST has developed a computer-based knowledge mapping system that
measures the deeper understanding of individual students and teams, reflects thinking processes
in real time, and economically reports student thinking-process data back to teachers and
students (Chung et al., 1999; O’Neil, 1999; Schacter et al., 1999). The computer-based
knowledge map has been used in at least four studies (Chuang, in preparation; Chung et al.,
1999; Hsieh, 2001; Schacter et al., 1999). In these four studies, the map contained 18 concepts of
environmental science and seven links representing relationships, such as causes, influences, and
used for. Students were asked to create a knowledge map in a computer-based environment. In
the study conducted by Schacter et al. (1999), students were evaluated by creating individual
knowledge maps after searching a simulated World Wide Web. In the studies conducted by
Chung et al. (1999), Hsieh (2001), and Chuang (2003), two students constructed a group map
cooperatively through networked computers, and the results showed that using networked
computers to measure group processes was feasible.
An example of a knowledge map is shown in Figures 2 and 3. As seen in Figure 2, the
computer screen was divided into three major parts. The numbered buttons located at the lower
right of the screen are message buttons for communication between group members; all
predefined messages were numbered and listed on handouts distributed to participants. When a
participant clicked a button on the screen, the corresponding message would appear instantly and
simultaneously on his or her own and his or her partner’s computers. The lower left part of the
screen was where messages were displayed in the order sent by members. The top left part was
the area where the map was constructed.
Figure 2. User Interface for the System
As seen in Figure 3, in this system (e.g., Chuang, in preparation; Hsieh, 2001) only a
leader can add concepts to the knowledge map and make connections among concepts, by
clicking the "Add Concept" icon on the menu bar and pressing the "Link" button, respectively.
There are 18 environmental science concepts under "Add Concept", such as "atmosphere",
"bacteria", "carbon dioxide", and "water cycle", and seven link labels (i.e., causes, influence,
part of, produces, requires, used for, uses). A leader was asked to use these terms and links to
construct a concept map using the computer mapping system. In addition, the leader could
move concepts and links to make changes to the map. In contrast, the searcher in each group
could seek information from the simulated World Wide Web environment and access feedback
regarding the results of the concept map. Therefore, to construct a concept map successfully,
both the searcher and the leader of a group must collaborate well.
Figure 3. Add Concepts and Links
Measurement of Problem-Solving Strategies
Problem-solving strategies can be divided into two categories: domain-specific and
domain-independent (Alexander, 1992; Bruning, Schraw, & Ronning, 1999; O'Neil, 1999;
Perkins & Salomon, 1989). Domain-specific knowledge is information that is useful in a
particular situation or that applies only to one specific topic, while domain-independent or
general knowledge is information that is useful in many different kinds of tasks and applies to
many situations (Woolfolk, 2001). Similarly, domain-specific strategies are consciously applied
skills used to reach goals in a particular subject or problem area (Gagne, Yekovich, & Yekovich,
1993), whereas domain-independent or domain-general strategies are general skills that apply
across different subjects (Chuang, 2003).
CRESST developed a simulated web-based assessment tool to evaluate problem-solving
strategies such as feedback inquiry and information-search strategies (Herl et al., 1999;
Schacter et al., 1999). Mayer and Moreno (1998) conducted a study assessing problem-solving
strategies with a list of transfer questions. According to Mayer and Wittrock (1996), questions
for problem-solving assessment can be categorized as retention or transfer. Retention questions
examine whether participants remember what was presented during learning, while transfer
questions assess whether participants can use what they learned in new situations (Baker &
Mayer, 1999). Problem-solving transfer questions require participants to use what they have
learned to create novel solutions in new situations (Mayer & Wittrock, 1996), and meaningful
learning is reflected in high transfer performance (Baker & Mayer, 1999). As pointed out by
researchers (e.g., Mayer, 2002; Moreno & Mayer, 2000), promoting problem-solving transfer
has become one of the most important educational objectives.
Measurement of Self-Regulation
Self-regulation refers to the degree to which students are able to become
metacognitively, motivationally, and behaviorally active participants of their own learning
process (Zimmerman, 1989). Among the key self-regulatory processes affecting student
achievement and motivational beliefs are goal setting, self-monitoring, self-evaluating, use of
task strategies (e.g., rehearsing and memorizing and organizing and transforming), help seeking,
and time planning and management (Dabbagh & Kitsantas, 2002). Self-regulated learning theory
was developed by a group of researchers (Schunk, 1991; Schunk & Zimmerman, 1994; Winne,
1995; Zimmerman, 1990). As Bruning, Schraw, and Ronning (1999) defined, “self-regulated
learning is the skill to control all aspects of one’s learning, from advance planning to how one
evaluates performance afterward (p.135).” Most theories of self-regulation include three major
elements: (a) metacognitive awareness, (b) strategy use, and (c) motivational control (Bruning,
Schraw, & Ronning, 1999). As Ridley, Schutz, Glanz, and Weinstein (1992) pointed out,
self-regulated learning consists of three components: (a) metacognition, (b) goal setting, and
(c) self-monitoring. In previous studies, the think-aloud assessment method has often been used
to measure domain-specific metacognition (Manning, Glasner, & Smith, 1996; O’Neil & Abedi,
1996). In recent studies, CRESST developed an alternative framework of self-regulation that is
composed of metacognition and motivation (O’Neil, 1999; Hong & O’Neil, 2001). According to
some researchers (O’Neil & Herl, 1998; Pintrich & DeGroot, 1990), metacognition includes two
subcategories: planning and self-monitoring. Zimmerman (1994) indicates that motivation
consists of effort and self-efficacy. Planning engages the selection of appropriate strategies and
the allocation of resources (Bruning, Schraw, & Ronning, 1999). Self-monitoring involves
activities such as making predictions, pausing, sequencing strategies, and selecting appropriate
alternative strategies (Bruning, Schraw, & Ronning, 1999). Strong relationships among
motivation, self-efficacy, and self-regulation have been reported in many studies (Bandura,
1993; Zimmerman, 1990; McCombs, 1984; Garcia & Pintrich, 1991; Malpass, O’Neil, &
Hocevar, 1999; Pintrich & Schrauben, 1992; Schunk, 1992; Zimmerman, 1995; Zimmerman,
Bandura, & Martinez-Pons, 1992). In addition, effort has been identified in many self-regulation
studies as an element of motivation (Hong & O’Neil, 2001; Zimmerman, 1990; Bandura, 1986,
1993). O’Neil and Herl (1998) developed a trait self-regulation questionnaire assessing these
four elements of self-regulation with thirty-two questions.
Summary
Psychologists and educators have been interested in problem solving for the last century
(Bruning, Schraw, & Ronning, 1999). In the history of problem-solving research, early
researchers concluded that most problem-solving activity involves trial-and-error behaviors that
lead to solutions. Later researchers suggested that problem solving is more like a conscious,
deliberate process governed by a naturally occurring sequence of steps. More recently,
researchers have agreed that problem solving is cognitive processing directed at achieving a goal
when no solution method is obvious to the problem solver. There is abundant evidence that
problem solving is one of the most important abilities in workplaces and schools. Problem
solving has four characteristics: it is cognitive, process-based, directed, and personal. In addition,
problem solving has four steps: (a) problem translation, (b) problem integration, (c) solution
planning, and (d) solution execution.
Recently, the National Center for Research on Evaluation, Standards, and Student Testing
(CRESST) developed a problem-solving model. The CRESST model of problem solving
consists of three components: (a) content understanding, (b) problem-solving strategies, and (c)
self-regulation. Problem-solving strategies can be divided into two categories: domain
independent and domain specific. Furthermore, self-regulation consists of two subcategories,
metacognition and motivation; the former is composed of self-monitoring and planning, and the
latter is composed of effort and self-efficacy.
Educators need an assessment program that validly and efficiently tests how much
students have learned (retention) and how well they are able to apply it (transfer). These two
aspects of problem-solving ability need to be tested as a whole. Previous researchers point out
that computer-based assessment has the merit of combining valid test items with the efficiency
of computer technology as a means of presenting and scoring tests. CRESST has developed a
problem-solving assessment model composed of content understanding,
problem-solving strategies, and self-regulation.
CRESST developed a computer-based knowledge mapping system for measuring
individual and team content understanding. Researchers have successfully applied knowledge
maps to measure content understanding, for both high school students and adults, with high
reliability and validity. The researchers have also created a simulated environment to
evaluate problem-solving strategies. In addition, CRESST developed a self-regulation
questionnaire that measures learners' self-regulation with high validity and reliability.
Worked Examples
A number of researchers have investigated the efficacy of using worked examples in
classroom instruction and provided evidence in favor of worked-example instruction over
problem-solving practice (Zhu & Simon, 1987; Carroll, 1994; Ward & Sweller, 1990; Cooper &
Sweller, 1987). Although there is no exact definition, worked examples share a particular family
resemblance (Wittgenstein, 1953). As defined by Atkinson (2000), worked examples are
instructional devices that provide an expert's problem solution for a learner to study. As
instructional devices, typical worked examples include a problem statement and a step-by-step
procedure for solving the problem; both elements are meant to show how similar problems
might be solved. In addition, worked examples provide an expert's problem-solving model for
the learner to study and follow (Atkinson, 2000). By Renkl's (2002) definition, worked examples
consist of a problem formulation, solution steps, and the final solution itself.
In the last 20 years, many researchers have paid considerable attention to worked
examples and concluded that worked-example instruction is superior to conventional
problem-solving instruction, especially in the fields of music, chess, athletics, mathematics,
computer programming, and physics (Carroll, 1994; Tarmizi & Sweller, 1988; Sweller, Chandler,
Tierney, & Cooper, 1990; Chi, Bassok, Lewis, Reimann, & Glaser, 1989; Ward & Sweller, 1990;
Renkl, Atkinson, Maier, & Staley, 2002; Reimann, 1997; VanLehn, 1996). Even though learning
from worked examples has attracted much attention recently, the idea of learning by example is
not new (Atkinson, 2000).
While a large amount of research in the 1970s sought to identify ways to facilitate
concept learning, a growing number of cognitively oriented researchers began to focus on more
complex forms of knowledge and learning (Brewer & Nakamura, 1984), comparing the
knowledge structures of experts and novices (Atkinson, 2000). Research indicates that
experts focus more on the deeper structural aspects of problems, while novices are usually
misled by surface features (Chi, Feltovich, & Glaser, 1981; Chi, Glaser, & Rees, 1982; Silver,
1979). The concept of the schema was usually used to account for performance differences
between experts and novices (VanLehn, 1990; Chi et al., 1981; Chi et al., 1982; Hinsley, Hayes,
& Simon, 1977; Silver, 1979). Sweller and his colleagues conducted many studies examining
how students learn schemas, the patterns that facilitate problem solving, via conventional,
practice-oriented instruction when practice was the preferred instructional approach (Mawer &
Sweller, 1982; Owen & Sweller, 1985; Sweller & Levine, 1982; Sweller, Mawer, & Howe,
1982; Sweller, Mawer, & Ward, 1983). However, these studies soon produced empirical
evidence that traditional, practice-oriented problem solving was not an ideal method for
improving problem-solving skill when compared to instruction that paired practice with worked
examples (Cooper & Sweller, 1987; Sweller & Cooper, 1985). Gerven, Paas, and Schmidt
(2000) suggest that worked examples can promote the acquisition of complex cognitive skills in
elderly learners by reducing their cognitive load and the irrelevant information they must
process. As Zhu and Simon (1987) pointed out, worked examples can be an appropriate and
acceptable substitute instructional method compared to
conventional classroom activity. In their study, students completed a three-year course in two
years by using worked examples as instructional material (Zhu & Simon, 1987).
<Insert How to Improving Problem Solving>
Theories of Worked Examples
Cognitive Load Theory
The capacity and duration limitations of working memory have been documented by many
researchers (Miller, 1956; Simon, 1974; De Groot, 1965; Chase & Simon, 1973). Miller (1956)
suggests that the capacity of working memory is about seven items, while Simon (1974) argues
that it is about five. This limitation has been considered the most important factor in
instructional design (Carlson, Chandler, & Sweller, 2003; Chandler & Sweller, 1991, 1992, 1996;
Jeung, Chandler, & Sweller, 1997; Mayer, 2001; Paas, 1992; Paas & Van Merrienboer, 1994;
Sweller, 1999; Sweller, Chandler, Tierney, & Cooper, 1990; Sweller, van Merrienboer, & Paas,
1998; Tindall-Ford, Chandler, & Sweller, 1997). Sweller (1988, 1989, 1991) developed an
information-processing model, cognitive load theory, concerned with how cognitive resources
are used during problem solving and the subsequent effects on learning (Ayres, 1996).
Cognitive load theory (Sweller, 1999; Sweller et al., 1998) assumes that all learning occurs
through a very limited working memory and an effectively unlimited long-term memory, and
suggests that the limitation of working memory is the most critical factor when students study
instructional material. According to this theory, cognitive load can be divided into two
categories: intrinsic cognitive load and extraneous cognitive load (Carlson, Chandler, & Sweller,
2003). Intrinsic cognitive load is determined by the degree of interaction among the elements in
working memory during learning (Sweller, 1994; Sweller & Chandler, 1994). If too many
interactive elements are presented to learners at the same time, the cognitive load may exceed
the limits of working memory, and the effectiveness of learning will be restricted (Carlson,
Chandler, & Sweller, 2003). Extraneous cognitive load, on the other hand, is determined by the
organization of the instructional material. Instructional materials can be presented in different
ways, such as diagrams, text, and worked examples, and different presentations impose different
amounts of extraneous cognitive load (Carlson et al., 2003). For example, the integration of
diagrams and text enhances learning compared with diagram-only or text-only instructional
materials (Mayer & Moreno, 2003; Mayer, 1995, 1996, 1998, 1999; Moreno & Mayer, 2002;
Mayer & Sims, 1994; Mayer & Gallini, 1990; Mayer, Mautone, & Prothero, 2002). Instructional
design therefore needs to reduce extraneous cognitive load by using appropriate instructional
methods and formats.
Although many studies show that worked examples are superior to traditional
problem-solving instruction, not all worked examples are effective. Instruction needs to be
formatted so that cognitive resources are focused on facilitating schema acquisition rather than
directed to irrelevant activities (Sweller, 1990). Many researchers have suggested that the
efficacy of worked examples as an instructional technique is limited by the quantity of
information processing required (Sweller, Chandler, Tierney, & Cooper, 1990; Tarmizi &
Sweller, 1988; Ward & Sweller, 1990); that is, worked examples are only effective under certain
cognitive conditions. If worked examples involve a heavy cognitive load, the cognitive resources
directed to problem states and the steps leading to the solution are reduced, to the detriment of
schema acquisition (Lim & Moore, 2002).
The two major alternative techniques for learning problem solving that have been studied are
the use of worked examples and goal-free problems (Owen & Sweller, 1985; Sweller, Mawer, &
Ward, 1983; Sweller, 1990), for example "Find the value of as many angles as possible" in geometry
rather than "Calculate the value of angle x". When worked examples involved a heavy cognitive
load, researchers obtained evidence that goal-free instruction was superior to worked examples
(Owen & Sweller, 1985; Sweller, 1989; Lim & Moore, 2002; Sweller, van Merrienboer, &
Paas, 1998). Sweller (1993) suggests that extraneous cognitive load is not the only source of
cognitive load; element interactivity also contributes, as a source of intrinsic cognitive load.
Schema Theory
The concept of the schema was developed by psychologists to account for the fact that much of
our knowledge seems integrated (Gagne, Yekovich, & Yekovich, 1993). A schema is a cognitive
construct that categorizes elements of information according to the manner in which they will be
dealt with (Woolfolk, 2001; Sweller, 1994; Chi, Glaser, & Rees, 1982). Furthermore, Sweller
(1990) defines a schema as a cognitive construct that allows problem solvers to recognize
problems and problem states as belonging to a particular category requiring particular moves for
solution. According to schema theory, knowledge is stored in long-term memory in the form of
schemas (Sweller, van Merrienboer, & Paas, 1998). Schema construction occurs when learners
determine which elements need to be merged into a schema (Ginns, Chandler, & Sweller, 2003).
Schema automation is a critical factor in schema construction: cognitive load can be reduced
effectively after schemas become automated, which occurs when learners practice sufficiently
(Sweller, van Merrienboer, & Paas, 1998). Many studies have confirmed the importance of
schemas in algebraic problem solving (Kotovsky, Hayes, & Simon, 1985; Chi, Glaser, & Rees,
1982; Larkin, McDermott, Simon, & Simon, 1980; Sweller, 1990), algebraic word problem
solving (Low & Over, 1990, 1992), and geometry problem solving (Koedinger &
Anderson, 1990). As pointed out by researchers (van Merrienboer, Clark, & de Croock, 2002),
cognitive schemata enable problem solvers to solve a new problem by serving as an analogy.
Ward and Sweller (1990) suggest that worked examples can facilitate schema
acquisition and rule automation. According to cognitive load theory, large numbers of elements
cannot be manipulated in working memory without assistance. However, studying instructional
material such as worked examples promotes the incorporation of procedures and elements into
schemas held in long-term memory. Learners can then retrieve a schema as a single element and
process it in working memory without further instructional assistance (Ginns, Chandler, &
Sweller, 2003). In a series of worked-example studies, Cooper and Sweller (1987) and Sweller
and Cooper (1985) concluded that worked examples can enhance learning by supporting schema
acquisition, reducing extraneous cognitive load, and focusing attention properly.
<Insert Prior Knowledge>
Scaffolding
Helping learners develop their own evolving knowledge through interaction is best
achieved by using Vygotsky's social constructivist model (Vygotsky, 1978; Hogan & Pressley,
1997). Vygotsky believed that learners construct their knowledge by interacting with other
members of society, and that learning cannot be understood apart from its cultural setting. This
theory suggests that teachers need to do more than just arrange the environment so that students
can discover on their own; teachers should assist or guide students' learning rather than transmit
knowledge to them. Assisted learning requires scaffolding, which is support for learning and
problem solving, and worked examples can be one kind of scaffolding (Rosenshine & Meister,
1992; Woolfolk, 2001). According to Rosenshine and Meister (1992), assisted learning
includes: (a) adapting materials or problems to students' current levels, (b) demonstrating skills
or thought processes, (c) walking students through the steps of a complex problem, (d) doing part
of the problem, (e) giving detailed feedback and allowing revisions, and (f) asking questions that
refocus students' attention. According to Vygotsky's theory, there is a zone between what
learners can do by themselves and what they can do with assistance. The zone of proximal
development (ZPD) is the phase at which a learner can master a task if given appropriate help or
support (Woolfolk, 2001). By Wertsch's (1991) definition, the zone of proximal development is
the area where the learner cannot solve problems alone but can succeed under adult guidance.
This is the area where instruction can succeed, because real learning is possible. Effective
worked-example instruction should therefore provide appropriate help and operate within the
zone of proximal development.
ACT-R
Although many studies indicate that worked examples are superior to traditional
practice instruction, there are some exceptions. Researchers have shown that worked examples
are of major importance for the initial acquisition of cognitive skills in well-structured domains
(Renkl, Atkinson, Maier, & Staley, 2002). Initial acquisition can be defined more precisely by a
four-stage model, the ACT-R framework proposed by Anderson, Fincham, and Douglass (1997).
These authors argued that skill acquisition involves four overlapping stages. In the first stage,
learners solve problems by analogy: they study worked examples and emulate them. In the
second stage, learners develop abstract declarative rules. After practice, they move to the third
stage, in which performance becomes smooth and rapid without using many cognitive resources.
In the fourth stage, learners retrieve a solution quickly and directly from memory after practicing
many different kinds of problems and studying different types of worked examples (Anderson,
Corbett, Koedinger, & Pelletier, 1995; Anderson, Fincham, & Douglass, 1997). Many studies
indicate that worked examples are only effective in the first stage (analogy) or the beginning of
the second stage (abstract rules), and are no longer the preferred method in the third stage
(automatic performance) (Atkinson, Derry, Renkl, & Wortham, 2000; Renkl, Atkinson, Maier,
& Staley, 2002; Sweller & Cooper, 1985; Sweller, van Merrienboer, & Paas, 1998). In the fourth
stage, however, experts may learn stylistic techniques or tune their own complex skills by
studying other experts' complex performance (Atkinson, Derry, Renkl, & Wortham, 2000).
Design of Worked Examples
The design or structure of worked examples plays an important role in their effectiveness
(Catrambone, 1994; Catrambone & Holyoak, 1990; Mwangi & Sweller, 1998; Ward & Sweller,
1990; Zhu & Simon, 1987). In many cases, worked examples include auxiliary representations
of a problem, such as diagrams. Although the examples used by different researchers were not
similar, they shared the same fundamental purpose: to demonstrate a pattern or principle
(Atkinson, 2000).
Before vs. After
Laboratory studies indicated that when presented with traditional practice exercises,
students tended to use typical novice strategies, such as trial and error, while students presented
with worked examples before solving often used more efficient problem-solving strategies and
focused on the structural aspects of problems (Atkinson, 2000). A similar finding asserted by
Trafton and Reiser (1993) is that the most efficient way to present material for acquiring a skill
"is to present an example, then a similar problem to solve immediately following" (p. 1022).
However, a recent study concluded that presenting problems first, followed by similar worked
examples (problem-example), is significantly more effective than exposing learners to either
problem-only
or worked-example-only instruction (Stark, Gruber, Renkl, & Mandl, 2000). Although no study
has directly compared problem-example with example-problem sequences, we can conclude that
integrated instruction (example-problem or problem-example) is superior to example-only or
problem-only instruction.
Part vs. Whole
There is empirical evidence showing that under some conditions complete worked
examples are not as effective as example-problem integrated instruction (Renkl, Atkinson,
Maier, & Staley, 2002). Many researchers argue that incomplete examples effectively support
the acquisition of cognitive skills (Paas, 1992; Stark, 1999; van Merrienboer, 1990; van
Merrienboer & de Croock, 1992). Stark (1999) found that studying incomplete examples
fostered the quality of self-explanations and the near and medium transfer of learned solution
methods significantly more than studying complete examples. Stark's near-transfer problems are
defined as problems that are similar in structure (same solution rationale) to the worked
examples but contain different surface features (cover story, objects, numbers). Medium-transfer
problems have a different structure (a modified solution procedure) but similar surface features.
Far-transfer problems differ with respect to both structure and surface features (Stark, 1999).
Backward Fading vs. Forward Fading
Renkl, Atkinson, Maier, and Staley (2002) introduced a new feature for example-based
learning, fading, which integrates worked examples and problem solving and builds a bridge
between example study in early phases of cognitive skill acquisition and problem solving in later
stages. The authors successively integrated problem-solving elements into worked
examples until the learners solved problems on their own (i.e., a complete worked example only,
then increasingly incomplete examples, and finally a problem only). They found that the fading
procedure fostered performance on near- and medium-transfer problems only, and that it is more
favorable to fade out the solution steps of the worked examples in a backward manner (leaving
out the last solution step first) than in a forward manner (omitting the first solution step first)
(Renkl et al., 2002). Because that study could not provide enough evidence of the effect of
fading on far-transfer problems, Atkinson, Renkl, and Merrill (2003) conducted another study
combining fading with self-explanation prompts designed to encourage learners to identify the
underlying principle demonstrated in each worked-example solution step. This study
demonstrated that a backward fading procedure combined with self-explanation prompting
significantly fosters not only near and medium transfer but also far transfer learning (Atkinson,
Renkl, & Merrill, 2003).
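To make the two fading directions concrete, the sketch below generates the study sequence
implied by a fading procedure from a fully worked example. It is a minimal illustration,
assuming each solution step can be represented as a string; the function name and representation
are illustrative, not taken from Renkl et al.'s materials.

    from typing import List

    def fading_sequence(steps: List[str], backward: bool = True) -> List[List[str]]:
        """Return the series of study items for a faded worked example:
        the complete example first, then one more step omitted each time,
        ending with the bare problem. Backward fading omits the last step
        first; forward fading omits the first step first."""
        sequence = []
        for n_omitted in range(len(steps) + 1):
            shown = steps[:len(steps) - n_omitted] if backward else steps[n_omitted:]
            sequence.append(shown)  # the learner supplies the omitted steps
        return sequence

    steps = ["set up the equation", "isolate x", "compute x"]
    print(fading_sequence(steps, backward=True))   # omits the last step first
    print(fading_sequence(steps, backward=False))  # omits the first step first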
Text vs. Diagrams
In a series of worked-example studies, researchers (Mwangi & Sweller, 1998; Tarmizi &
Sweller, 1988; Ward & Sweller, 1990) found that requiring participants to attend to information
from both a diagram and separate text for the same concept decreased the effectiveness of
learning. Tarmizi and Sweller (1988) labeled this phenomenon the split-attention effect. They
also found that restructuring worked examples by integrating the diagrams and textual
explanations of problems facilitates learning.
Visual vs. Verbal
Just as the integration of diagrams and text can enhance learning, the integration of visual and
verbal information can also promote the effectiveness of worked-example instruction (Ward &
Sweller, 1990; Tarmizi & Sweller, 1988; Mousavi, Low, & Sweller, 1995). For example,
Mousavi et al. (1995) conducted a series of worked-example studies comparing the effectiveness
of worked examples
with different formats: (a) visual-visual, where integrated diagrams and text were presented;
(b) visual-auditory, where a diagram was presented with associated aural statements; and
(c) simultaneous, where diagrams were presented with both text and aural statements. The
results showed that the visual-auditory and simultaneous methods were superior to the
visual-visual method. Mayer and his colleagues (Mayer, 1997; Mayer, Moreno, Boire, & Vagge,
1999) consistently found, in a series of multimedia studies, that integrated dual-mode
presentation can enhance learning. However, Jeung, Chandler, and Sweller (1997) found that the
visual-verbal worked-example method is not superior to the visual-visual method under complex
conditions. They conducted a series of studies comparing visual-verbal and visual-visual
worked-example instruction in complex and simple problem settings, and concluded that,
although participants in the visual-verbal condition solved the problems faster than those in the
visual-visual condition, visual-verbal instruction appears to be superior only in simple problem
tasks (Jeung et al., 1997).
Steps vs. Subgoals
Catrambone (1990, 1994, 1995, 1996) and Catrambone and Holyoak (1990) suggest that
subgoals appear to enhance learning by helping learners modify old solution steps rather than
apply them without adaptation. This benefit might come from: (a) emphasizing meaningful
chunks of a problem's solution, (b) adding labels to the solutions, (c) inducing the worked
examples' underlying goal structures, and (d) leading learners to discover meaningful
generalizations (Catrambone & Holyoak, 1990; Atkinson, Derry, Renkl, & Wortham, 2000).
Catrambone (1994, 1995, 1996) indicates that learners are encouraged to distinguish the function
of the subgoals and then self-explain why the steps are grouped together. These cognitive
activities help learners induce the principles and schemas used in the worked examples
(Atkinson, Derry, Renkl, &
Wortham, 2000). In a series of transfer studies, the researchers (Catrambone, 1990, 1994, 1995,
1996, 1998; Catrambone & Holyoak, 1990) consistently found that participants who received
subgoal-oriented examples performed significantly better than participants whose examples were
presented without subgoals.
Summary
The worked example, which is one kind of scaffolding, can effectively facilitate schema
construction and automation, reduce cognitive load, and provide assistance during learning.
Researchers have provided evidence in favor of worked-example instruction over
problem-solving practice. Worked examples consist of: (a) a problem formulation, (b) solution
steps, and (c) the final solution itself. Many researchers have paid considerable attention to
worked examples and concluded that worked-example instruction is superior to conventional
problem-solving instruction, especially in the fields of music, chess, athletics, mathematics,
computer programming, and physics. Although many studies show that worked examples are
superior to traditional problem-solving instruction, not all worked examples are effective;
instruction needs to be formatted so that cognitive resources are focused on facilitating schema
acquisition rather than directed to irrelevant activities.
Summary of the Literature
(Under Construction)
CHAPTER III
METHODOLOGY
Research Design
The research will consist of a pilot study and a main study. The pilot study will focus on
a formative evaluation. The main study will focus on the impact of work example instruction on
problem solving skill in a game-based task. The research design of the main study is an
experimental one. Students will be randomly assigned into two groups as either worked example
instruction group or control group. Each group will be asked to complete a game-based
problem-solving task.
Research Hypotheses
Research Question: Will participants in the worked-example instruction group increase their
problem-solving skill in a game-based task (i.e., SafeCracker) after studying worked examples,
compared to the control group?
Hypothesis 1: Worked-example instruction will produce a significant increase in content
understanding compared to the control group.
Hypothesis 2: Worked-example instruction will produce a significant increase in
problem-solving strategy compared to the control group.
Hypothesis 3: There will be no significant difference in self-regulation between the
worked-example instruction group and the control group.
Pilot Study
According to Gall, Gall, and Borg (2003), a pilot study is a small-scale trial conducted
before the main research in order to develop and test the measures or procedures that will be
used in the main study. As Isaac and Michael (1997) pointed out, a pilot study can: (a) permit a
preliminary testing of the hypotheses that leads to more precise hypotheses in the main study,
(b) bring researchers new ideas or alternative measures not anticipated before the pilot study,
and (c) permit a thorough examination of the planned research procedures, reducing the number
of treatment errors.
The pilot study in this proposal has four purposes: (a) to determine whether the
environment is feasible and understandable for the participants, (b) to examine the functionality
of the measurement instruments (the knowledge mapping system, the problem-solving strategy
questionnaire, and the self-regulation questionnaire), (c) to assess whether the predicted times
are suitable for participants to use the knowledge mapping system, study the worked examples,
and complete the tasks, and (d) to collect the participants' feedback on the game and the whole
process.
Participants
The participants in the pilot study will be five college or graduate students at the
University of Southern California, aged 20 to 35. All participants will be screened to ensure that
they have no experience playing SafeCracker. Following USC Human Subjects approval, data
for the pilot study will be collected in the summer of 2004.
Puzzle-Solving Game
After evaluating the research feasibility of 525 potential video games, Wainess and
O'Neil (2003) indicated that the appropriate genre for studying the effectiveness of games in
enhancing problem-solving skill was the puzzle game.
SafeCracker, a puzzle-solving game, was the final choice of Wainess and O'Neil (2003)
because it facilitates problem solving with right and wrong solutions and does not require special
background knowledge or extraordinary visual-spatial skill. In addition, the pacing of
SafeCracker, which is designed mainly for adults, is controlled by players. Another significant
reason is that SafeCracker is not as popular as many other candidate games. However, according
to Wainess and O'Neil (2003), even as the most suitable game for this study, SafeCracker has
three main drawbacks: (a) it may not be appropriate for testing transfer and retention outside the
game, (b) players' actions within the program cannot be tracked, and (c) it is impossible to
modify the game scenarios. The lack of source code or an editor for SafeCracker is a major
reason for these drawbacks. SafeCracker's specifications, following Wainess and O'Neil's game
specification scheme, are shown in Table 1 (adapted from Chen, in preparation).
Table 1
Game Evaluation Specifications

Purpose/domain                   Puzzle solving with a focus on logical inference and
                                 trial-and-error
Type of game platform            PC/CD-ROM, Mac
Commercialization intent         Primary
Contractor                       Dreamcatcher
Genre (a)                        Puzzle
Training use                     Recreational use
Length of game                   Unlimited (except the very beginning of the game)
Players/Learners                 Candidate for the position of security leader of a major
                                 company
Type of learning (b)             Problem solving
Domain knowledge                 Math, history, physics, information searching,
                                 location/direction, science
Time to learn (c)                5 minutes
Time frame                       Modern
Plan of instruction              No
Nature of practice               One scenario per game play
Single user vs. multiple user    Single user

(a) Action, role playing, adventure, strategy games, goal games, team sports, individual sports
(Laird & VanLent, 2001).
(b) Domain knowledge, problem solving, collaboration or teamwork, self-regulation,
communication (Baker & Mayer, 1999).
(c) Basic game play (i.e., an educated user, not winning strategies).
A player in SafeCracker is a candidate for a position as head of security development at
a world-famous firm of security systems and therefore needs to accomplish a task given by the
boss. The task is to open the safes in a mansion within 12 hours without any help from others.
There are 35 safes scattered across about 60 rooms of the mansion. To open all of the safes, the
player not only needs to perform mathematical calculation, logical reasoning, and trial-and-error
guessing, but also must have a good sense of direction and memory.
For this study, five rooms of SafeCracker were selected: Reception/room 1, Small
Showroom/room 2, Constructor Office/room 5, Chief Engineer/room 6, and Technical
Design/room 27. These five rooms were selected because of their close connection and
puzzle-solving sequence. However, one of the safes in Reception/room 1, whose puzzle involves
sliding, will be cracked and saved for the participant before the study. The reason for omitting
this puzzle is that, based on previous experience, the time needed to solve it varies substantially
from player to player. In total, seven safes, one box, and an archive are to be cracked and opened
by the participants in the two 20-minute game-playing sessions. Another reason these five rooms
were selected is that they are to be cracked at the beginning of the game, and tools and hints are
cumulative (Chen, in preparation).
Small Showroom/room 2 was selected for the worked examples for two reasons: (a) it
contains the three major kinds of safes (indicator-light, puzzle, and switch-light safes), and
(b) players do not need tools or hints accumulated or obtained from other rooms to open the
safes in this room.
Knowledge Map
A knowledge map is a structural representation that consists of nodes and links, in which
each node represents a concept in the domain of knowledge. Previous literature has shown the
validity and reliability of knowledge maps for assessing content understanding (Herl et al.,
1999; Mayer, 2002; Ruiz-Primo, Schultz, & Shavelson, 1997). In this research, participants will
be asked to create a knowledge map in a computer-based environment, and their content
understanding will be evaluated before and after playing SafeCracker. Table 2 lists the
knowledge map specifications that will be used in this study (adapted from Chen, in
preparation).
Table 2
Knowledge Map Specifications

Scenario: Create a knowledge map on the content understanding of SafeCracker, a computer
puzzle-solving game.

Participants: College students or graduate students. Each works on his/her own knowledge map
about SafeCracker, after both the first and the second game-play sessions.

Knowledge map concepts/nodes: Fifteen predefined key concepts identified in the content of
SafeCracker by experts of the game and knowledge map professionals: book, catalog, clue, code,
combination, compass, desk, direction, floor plan, key, room, safe, searching, trial-and-error,
and tool.

Knowledge map links: Seven predefined links of relationships identified in the content of
SafeCracker by experts of the game and knowledge map professionals: causes, contains, leads
to, part of, prior to, requires, and used for.

Knowledge map domain/content: SafeCracker is a computer puzzle-solving game. There are
over 50 rooms with about 30 safes; each safe is a puzzle to solve. To solve the puzzles, players
need to find clues and tools hidden in the rooms, reason out the logic and sequence, and try to
apply what they have found.

Training on the computer knowledge mapping system: All students will go through the same
training session, which includes (a) how to construct a knowledge map using the computer
mapping system and (b) how to play SafeCracker, the target domain/content of the programmed
knowledge mapper.

Type of knowledge to be learned: Problem solving

Three problem-solving measures:
1. Knowledge map, used to measure content understanding and structure, including (a) semantic
content score, (b) the number of concepts, and (c) the number of links.
2. Domain-specific problem-solving strategy questionnaire, including questions measuring
problem-solving retention and transfer.
3. Trait self-regulation questionnaire, used to measure the four elements of trait self-regulation:
planning, self-checking, self-efficacy, and effort.
Measures
Content Understanding Measure
Content understanding will be measured by comparing the semantic content score of
a participant's knowledge map to the semantic scores of a set of three expert maps (see Appendix
A for an example) created by five experts. All five experts hold graduate degrees from USC,
are familiar with SafeCracker, and are experts at solving the puzzles of at least the five selected
rooms in SafeCracker. An example of how to score a knowledge map of SafeCracker is shown
in Figure 4; the following description shows how the outcome would be scored. The semantic
score is calculated from the semantic propositions (two concepts connected by one link) in the
expert knowledge maps. Every proposition in a participant's knowledge map is compared
against each proposition in the three SafeCracker expert maps, and each match is scored as one
point. The semantic score of the participant's map is the average score across all three expert
maps. For example, as seen in Table 3, if a participant makes a proposition such as "key used for
safe", this proposition is compared with the propositions in all three expert maps. A score of one
means the proposition matches a proposition in that expert map; a score of zero means it does
not match any proposition in that map. Table 3 shows that "key used for safe" received one point
from each of the three expert maps; "safe requires key" received one point from the first two
expert maps but zero from the third; "catalog contains clue" received one point from the first and
third expert maps but zero from the second; and "safe contains clue" received one point from the
third expert map but zero from the first and second. This participant's semantic score would then
be the average: 8 (the total score received from the three expert maps) divided by 3 (the number
of expert maps), yielding 2.67 (Chen, in preparation).
Figure 4. Sample knowledge map. [The sample map contains four concepts (key, clue, safe,
catalog) connected by the propositions: key is used for safe, safe requires key, catalog contains
clue, and safe contains clue.]
Table 3
An Example of Scoring a Map

Concept 1    Link        Concept 2    Expert 1    Expert 2    Expert 3
Key          used for    safe             1           1           1
Safe         requires    key              1           1           0
Catalog      contains    clue             1           0           1
Safe         contains    clue             0           0           1

Final score = total score ÷ number of experts = 8 ÷ 3 = 2.67
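For readers who want to automate this scoring rule, the sketch below is a minimal Python
implementation of the procedure described above. It assumes propositions are represented as
(concept, link, concept) triples; the function name and data layout are illustrative, not the actual
CRESST scoring software.

    from typing import List, Set, Tuple

    Proposition = Tuple[str, str, str]  # (concept 1, link, concept 2)

    def semantic_score(student: Set[Proposition],
                       expert_maps: List[Set[Proposition]]) -> float:
        """Each student proposition earns one point for every expert map
        containing the identical proposition; the semantic score is the
        total number of points divided by the number of expert maps."""
        total = sum(1 for prop in student
                    for expert in expert_maps
                    if prop in expert)
        return total / len(expert_maps)

    student = {("key", "used for", "safe"), ("safe", "requires", "key"),
               ("catalog", "contains", "clue"), ("safe", "contains", "clue")}
    expert1 = {("key", "used for", "safe"), ("safe", "requires", "key"),
               ("catalog", "contains", "clue")}
    expert2 = {("key", "used for", "safe"), ("safe", "requires", "key")}
    expert3 = {("key", "used for", "safe"), ("catalog", "contains", "clue"),
               ("safe", "contains", "clue")}
    print(semantic_score(student, [expert1, expert2, expert3]))  # 8/3 = 2.67

Run on the Table 3 example, the function reproduces the score of 2.67.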
Domain-Specific Problem-Solving Strategies Measure
In this study, the researcher will use the problem-solving strategy questionnaire
modified by Chen (in preparation) (see Appendix B). Chen (in preparation) adapted Mayer and
Moreno's (1998) problem-solving question list to measure domain-specific problem-solving
strategies. The problem-solving strategy questions designed for this dissertation research will
be relevant to the selected safes/problems in SafeCracker, the chosen puzzle-solving game.
Furthermore, those questions will concern the application of the
puzzle-solving/safe-cracking strategies participants may acquire while trying to solve the
problems in the rooms pre-selected by the researcher from the game's 60 rooms. The following
problem-solving strategy questions on retention and transfer will be used in this study (Chen, in
preparation):
Retention question:
• Write an explanation of how you solve the puzzles in the rooms:
Transfer questions:
• List some ways to improve the fun or challenge of the game:
Participants’ retention scores will be counted by the number of predefined major idea
units correctly stated by the participant regardless of wording. The example of the answers for
the retention question are “follow map”, “find clues”, “find key”, and “tools are cumulative”
(Chen, in preparation; Mayer and Moreno, 1998). In addition, participants’ transfer questions
will be scored by counting the number of acceptable answers that the participant produced across
all of the transfer problems. For example, the acceptable answers for the first transfer question
about ways to improve the play in room 1 includes “jot down notes”, and one of the acceptable
answers for question three ways to improve the fun or challenge of playing the game in room 1 is
that “increase clues needed to crack a safe” (Chen, in preparation; Mayer and Moreno, 1998).
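To illustrate the counting rule only (the study relies on human raters to judge whether an idea unit was stated), a minimal Python sketch follows; the keyword sets standing in for rater judgment are hypothetical.

    # Hedged sketch: one point per predefined idea unit stated, regardless of wording.
    # The keyword sets below are hypothetical stand-ins for rater judgment.

    IDEA_UNITS = {
        "follow map": {"map"},
        "find clues": {"clue", "clues"},
        "find key": {"key", "keys"},
        "tools are cumulative": {"tools", "cumulative"},
    }

    def retention_score(answer):
        words = set(answer.lower().split())
        # Credit an idea unit if any of its keywords appears in the answer.
        return sum(1 for keywords in IDEA_UNITS.values() if words & keywords)

    print(retention_score("I followed the map and found the key and some clues"))  # 3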
Self-Regulation Questionnaire
The trait self-regulation questionnaire designed by O’Neil and Herl (1998) will be used in this study to assess participants’ degree of self-regulation, one of the components of problem-solving skill (see Appendix C). Sufficient reliability of the self-regulation questionnaire, ranging from .89 to .94, was reported in a previous study (O’Neil & Herl, 1998). The 32 items comprise eight items for each of the four factors: planning, self-checking, self-efficacy, and effort. For example, item 1, “I determine how to solve a task before I begin,” is designed to assess participants’ planning, and item 2, “I check how well I am doing when I solve a task,” assesses participants’ self-checking. Responses to each item range from almost never and sometimes to often and almost always.
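As an illustration (not the study’s scoring procedure), the following Python sketch computes subscale totals from the 32 responses, assuming the item ordering shown in Appendix C, where planning, self-checking, effort, and self-efficacy items repeat every four items, and the 1-4 response coding.

    # Sketch of subscale scoring for the 32-item questionnaire, assuming the
    # Appendix C item order (planning, self-checking, effort, self-efficacy
    # cycling every four items) and responses coded 1-4.

    FACTORS = ("planning", "self-checking", "effort", "self-efficacy")

    def subscale_scores(responses):
        """responses: list of 32 integers in 1..4, in item order."""
        assert len(responses) == 32
        scores = {factor: 0 for factor in FACTORS}
        for item, response in enumerate(responses):
            scores[FACTORS[item % 4]] += response  # eight items per factor
        return scores

    print(subscale_scores([3] * 32))  # each factor sums to 24 in this example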
Procedure
Worked Examples
After the participants in the worked example instruction group complete the problem-solving strategy questions pretest, they will study the worked examples for three tasks: (a) opening the indicator-light safe, (b) opening the puzzle safe, and (c) opening the switch-light safe. The worked-examples session will take 10 minutes, and participants may still refer to the worked examples during the second game-play session.
(Under Construction)
Time Chart of the Study

Activity                                     Time
Introduction                                 2-3 minutes
Self-regulation questionnaire                6-8 minutes
Game introduction                            2 minutes
Introduction to knowledge mapping            8 minutes
Knowledge map (pre)                          5 minutes
Game playing 1 (rooms 5, 6, & 27)            20 minutes
Problem-solving strategy questions (pre)     4 minutes
Introduction to worked examples              2 minutes
Worked example (room 2)                      10 minutes
Game playing 2 (rooms 1, 2, & 27)            20 minutes
Knowledge map (post)                         5 minutes
Problem-solving strategy questions (post)    4 minutes
Debriefing                                   2 minutes
Total                                        90-93 minutes
Data Analysis
Based on the outcomes of the pilot study, some modifications will be made for the main study. For example, the researcher will identify problems with the computer system, such as crashes, that might occur during the main study. The pilot study will also examine whether the measurement instruments work as intended. In addition, the researcher may adjust the time allotted for using the knowledge mapping system, studying the worked examples, and completing the tasks.
Main Study
Method of the Main Study
Participants
Sixty college or graduate students at USC, aged 20-35, will participate in the main study. They will be randomly assigned to two groups of 30 participants each: a worked example instruction group and a control group. The main study will be conducted in a lab at USC in Summer 2004, after approval by the USC Review of Human Subjects. All participants will be screened to have no experience playing SafeCracker or other puzzle-solving games.
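A minimal sketch of the planned random assignment follows (illustrative only; the participant identifiers and seed are hypothetical).

    # Sketch of randomly assigning 60 participants to two groups of 30.
    import random

    def assign_groups(participant_ids, seed=None):
        rng = random.Random(seed)
        ids = list(participant_ids)
        rng.shuffle(ids)          # randomize order
        half = len(ids) // 2
        return {"worked_example": ids[:half], "control": ids[half:]}

    groups = assign_groups(range(1, 61), seed=2004)
    print(len(groups["worked_example"]), len(groups["control"]))  # 30 30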
Game
The same puzzle-solving game, SafeCracker, will be used in the main study, although some adjustments may be made according to the results of the pilot study. In addition, the same five rooms selected for the pilot study (Reception/room 1, Small Showroom/room 2, Constructor Office/room 5, Chief Engineer/room 6, and Technical Design/room 27) will be used in the main study.
Measures
Knowledge Map
The same knowledge mapping system used in the pilot study will be used in the main study. However, based on the outcome of the pilot study, the researcher may adjust the time participants have to draw their knowledge maps after the two game-playing sessions. As in the pilot study, participants will create a knowledge map in the computer-based mapping system after each of the two game-playing sessions. The measure will be computed by comparing the semantic content score of a participant’s knowledge maps to the semantic scores of the set of three expert maps (Schacter et al., 1999), as in the pilot study.
Domain-Specific Problem-Solving Strategies Measure
The same retention and transfer problem-solving strategy questions modified by Chen (in preparation) and used in the pilot study will be used in the main study. However, based on the outcome of the pilot study, the researcher may adjust the time participants have to complete this questionnaire. The problem-solving questions relate to the puzzle-solving strategies for the selected rooms in SafeCracker, and the same scoring system of counting acceptable answers will be used in the main study.
Self-Regulation Questionnaire
The same 32-item self-regulation questionnaire (O’Neil & Herl, 1998) used in the pilot study will be used in the main study. The items comprise eight items for each of the four factors: planning, self-checking, self-efficacy, and effort.
Procedure
The worked example instruction group will follow the pilot-study procedure, as modified for the main study. The control group will follow the same procedure as the worked example group except for the 10-minute worked-example session. The researcher may adjust the time allotted for studying worked examples based on the outcome of the pilot study.
Computer-Based Knowledge Map Training
Participants will be trained to use the computer-based knowledge mapping system, including adding and erasing concepts and creating and deleting links between concepts.
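As an illustration of these mapping operations (a toy model in Python, not the actual mapping software used in the study), the following sketch supports adding and erasing concepts and creating and deleting labeled links.

    # Toy model of the mapping operations participants practice.
    class KnowledgeMap:
        def __init__(self):
            self.concepts = set()
            self.links = set()  # (concept 1, label, concept 2)

        def add_concept(self, concept):
            self.concepts.add(concept)

        def erase_concept(self, concept):
            self.concepts.discard(concept)
            # Remove any links touching the erased concept.
            self.links = {l for l in self.links if concept not in (l[0], l[2])}

        def create_link(self, source, label, target):
            if source in self.concepts and target in self.concepts:
                self.links.add((source, label, target))

        def delete_link(self, source, label, target):
            self.links.discard((source, label, target))

    m = KnowledgeMap()
    for c in ("key", "safe"):
        m.add_concept(c)
    m.create_link("key", "is used for", "safe")
    print(m.links)  # {('key', 'is used for', 'safe')}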
Game Playing
SafeCracker will be used in the main study. As in the pilot study, participants will be asked to play in specific rooms of SafeCracker. Each game-playing session will last 20 minutes.
Worked Examples
The same worked examples used in the pilot study will be provided to the participants in the worked example instruction group. However, based on the outcome of the pilot study, the researcher may adjust the time allotted for studying the worked examples.
Data Analysis
Descriptive statistics will include means, standard deviations, and correlation coefficients. t tests will be used to compare outcomes before and after game playing within each group. The analysis will determine whether worked examples enhance participants’ problem-solving skills in a game-based environment.
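For example, assuming a dependent-samples (paired) t test is used for each group’s pre/post comparison, the analysis could look like the following sketch (scipy is assumed to be available; the scores below are made-up placeholders, not study data).

    # Sketch of a pre/post comparison with a paired t test.
    from scipy import stats

    pre  = [2.1, 1.8, 2.7, 2.0, 2.4]   # hypothetical pretest map scores
    post = [2.9, 2.4, 3.1, 2.6, 2.8]   # hypothetical posttest map scores

    t, p = stats.ttest_rel(post, pre)
    print(f"t = {t:.2f}, p = {p:.3f}")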
REFERENCES
Adams, P. C. (1998). Teaching and learning with SimCity 2000 [Electronic Version]. Journal of
Geography, 97(2), 47-55.
Albertson, L. M. (1986). Personalized feedback and cognitive achievement in computer-assisted
instruction. Journal of Instructional Psychology, 13(2), 55-57.
Alessi, S. M. (2000a). Building versus using simulations. In J. M. Spector & T. M. Anderson (Eds.), Integrated and holistic perspectives on learning, instruction, and technology: Improving understanding in complex domains (pp. 175-196). Dordrecht, The Netherlands: Kluwer.
Alessi, S. M. (2000b). Simulation design for training and assessment. In H. F. O’Neil, JR. & D.
H. Andrews(Eds.), Aircrew training and assessment (pp. 197-222). Mahwah, NJ:
Lawrence Erlbaum Associates.
Alexander, P. A. (1992). Domain knowledge: Evolving themes and emerging concerns.
Educational Psychologist, 27(1), 33-51.
Amory, A. (2001). Building an educational adventure game: Theory, design, and lessons. Journal
of Interactive Learning Research, 12(2/3), 249-263.
Amory, A., Naicker, K., Vincent, J., & Adams, C. (1999). The use of computer games as an
educational tool: Identification of appropriate game types and game elements. British
Journal of Educational Technology, 30(4), 311-321.
Anderson, C. A., & Bushman, B. J. (2001). Effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, and prosocial behavior: A meta-analytic review of the scientific literature. Psychological Science, 12(5), 353-358.
Arthur, W., Jr., Strong, M. H., Jordan, J. A., Williamson, J. E., Shebilske, W. L., & Regian, J. W. (1995). Visual attention: Individual differences in training and predicting complex task performance. Acta Psychologica, 88, 3-23.
Arthur, W., Jr., Tubre, T., Paul, D. S., & Edens, P. S. (2003). Teaching effectiveness: The relationship between reaction and learning evaluation criteria. Educational Psychology, 23(3), 275-285.
Atkinson, R. K., Derry, S. J., Renkl, A., & Wortham, D. (2000). Learning from examples: Instructional principles from the worked examples research. Review of Educational Research, 70(2), 181-214.
Atkinson, R. K., Renkl, A., & Merrill, M. M. (2003). Transitioning from studying examples to solving problems: Effects of self-explanation prompts and fading worked-out steps. Journal of Educational Psychology, 95(4), 774-783.
Baird, W. E., & Silvern, S. B. (1999). Electronic games: Children controlling the cognitive environment. Early Child Development & Care, 61, 43-49.
Baker, E. L., & Alkin, M. C. (1973). Formative evaluation of instructional development. AV
Communication Review, 21(4), 389-418. (ERIC Document Reproduction Service No.
EJ091462)
Baker, E. L., & Herman, J. L. (1985). Educational evaluation: Emergent needs for research.
Evaluation Comment, 7(2), 1-12.
Baker, E. L. & Mayer, R. E. (1999). Computer-based assessment of problem solving. Computers
in Human Behavior, 15, 269-282.
Baker, E. L., & O’Neil, H. F. Jr. (2002). Measuring problem solving in computer environments:
Current and future states. Computers in Human Behavior, 18(6), 609-622.
Bandura, A. (2001). Impact of guided exploration and enactive exploration on self-regulatory mechanisms and information acquisition through electronic search. Journal of Applied Psychology, 86(6), 1129-1141.
Barnett, M. A., Vitaglione, G. D., Harper, K. K. G., Quackenbush, S. W., Steadman, L. A., &
Valdez, B. S. (1997). Late adolescents’ experiences with and attitudes toward videogames.
Journal of Applied Social Psychology, 27(15), 1316-1334.
Barootchi, N., & Keshavarz, M. H. (2002). Assessment of achievement through portfolio and teacher-made tests. Educational Research, 44(3), 279-288.
Berson, M. J. (1996). Effectiveness of computer technology in the social studies: A review of the literature. Journal of Research on Computing in Education, 28, 486-499.
Betz, J. A. (1995-96). Computer games: Increase learning in an interactive multidisciplinary
environment. Journal of Educational Technology Systems, 24, 195-205.
Blanchard, P. N., Thacker, J. W., & Way, S A. (2000). Training evaluation: Perspectives and
evidence from Canada. International Journal of Training and Development, 4(4),
295-304.
Bong, M., & Clark, R. E. (1999). Comparison between self-concept and self-efficacy in
academic motivation research. Educational Psychologist, 34(3), 139-153.
British Educational Communications and Technology Agency. (n.d.). Computer Games in Education Project. Retrieved from http://www.becta.org.uk
Bruning, R. H., Schraw, G. J., & Ronning, R. R. (1999). Cognitive psychology and instruction (3rd ed.). Upper Saddle River, NJ: Merrill.
Carroll, W. M. (1994). Using worked examples as an instructional support in the algebra classroom. Journal of Educational Psychology, 86(3), 360-367.
Catrambone, R. (1994). Improving examples to improve transfer to novel problems. Memory & Cognition, 22, 606-615.
Chambers, C., Sherlock, T. D., & Kucik III, P. (2002). The Army Game Project. Army, 52(6),
59-62.
Chappell, K. K., & Taylor, C. S. (1997). Evidence for the reliability and factorial validity of the computer game attitude scale. Journal of Educational Computing Research, 17(1), 67-77.
Cheung, S. (2002). Evaluating the psychometric properties of the Chinese version of the
Interactional Problem-Solving Inventory. Research on Social Work Practice, 12(4),
490-501.
Christopher, E. M. (1999). Simulations and games as subversive activities. Simulation & Gaming,
30(4), 441-455.
Chuang, S. (in preparation). The role of search strategies and feedback on a computer-based collaborative problem-solving task. Unpublished doctoral dissertation, University of Southern California.
Chung, G. K. W. K., O’Neil, H. F., Jr., & Herl, H. E. (1999). The use of computer-based
collaborative knowledge mapping to measure team processes and team outcomes.
Computers in Human Behavior, 15, 463-493.
Clark, R. E. (1998). Motivating performance: Part 1—diagnosing and solving motivation
problems. Performance Improvement, 37(8), 39-47.
Clark, K., & Dwyer, F. M. (1998). Effects of different types of computer-assisted feedback
strategies on achievement and response confidence. International Journal of Instructional
Media, 25(1), 55-63.
Crisafulli, L., & Antonietti, A. (1993). Videogames and transfer: An experiment on analogical
problem-solving. Ricerche di Psicologia, 17, 51-63.
Dawes, L., & Dumbleton, T. (2001). Computer games in education. BECTA. Retrieved from http://www.becta.org.uk/technology/software/curriculum/computergames/docs/report.pdf
Day, E. A., Arthur, W., Jr., & Gettman, D. (2001). Knowledge structures and the acquisition of a complex skill. Journal of Applied Psychology, 86(5), 1022-1033.
Donchin, E. (1989). The learning strategies project. Acta Psychologica, 71, 1-15.
Driskell, J. E., & Dwyer, D. J. (1984). Microcomputer videogame based training. Educational
Technology, 11-16.
Dugard, P. & Todman, J. (1995). Analysis of pre-test-post-test control group designs in
educational research. Educational Psychology, 15(2), 181-198.
Dugdale, S. (1998). Mathematical problem solving and computers: a study of learner-initiated
application of technology in a general problem-solving context. Journal of Research on
Computing in Education, 30(3), 239-253.
Enman, M., & Lupart, J. (2000). Talented female students’ resistance to science: An exploratory study of post-secondary achievement motivation, persistence, and epistemological characteristics. High Ability Studies, 11(2), 161-178.
Faria, A. J. (1998). Business simulation games: current usage levels-an update. Simulation &
Gaming, 29, 295-308.
Fery, Y. A., & Ponserre S. (2001). Enhancing the control of force in putting by video game
training. Ergonomics, 44, 1025-1037.
Forsetlund, L., Talseth, K. O., Bradley, P., Nordheim, L, & Bjorndal, A. (2003). Many a slip
between cut and lip: Process evaluation of a program to promote and support
evidence-based public health practice. Evaluation Review, 27(2), 179-209.
Galimberti, C., Ignazi, S., Vercesi, P., & Riva, G. (2001). Communication and cooperation in
networked environments: An experimental analysis. Cyber Psychology & Behavior, 4(1),
131-146.
Gall, M. D., Gall, J. P., & Borg, W. R. (2003). Educational research: An introduction (7th ed.). New York: Allyn & Bacon.
Gopher, D., Weil, M., & Bareket, T. (1994). Transfer of skill from a computer game trainer to
flight. Human Factors, 36, 387-405.
Gredler, M. E. (1996). Educational games and simulations: A technology in search of a (research) paradigm. In D. Jonassen (Ed.), Handbook of research for educational communications and technology (pp. 521-540). New York: Macmillan.
Greenfield, P. M., DeWinstanley, P., Kilpatrick, H., & Kaye, D. (1994). Action video games and informal education: Effects on strategies for dividing visual attention. Journal of Applied Developmental Psychology, 15, 105-123.
Harrell, K. D. (2001). Level III training evaluation: Considerations for today’s organizations.
Performance Improvement, 40 (5), 24-27.
Henderson, L., Klemes, J., & Eshet, Y. (2000). Just playing a game? Educational simulation software and cognitive outcomes. Journal of Educational Computing Research, 22(1), 105-129.
Herl, H. E., Baker, E. L., & Niemi, D. (1996). Construct validation of an approach to modeling cognitive structure of U.S. history knowledge. Journal of Educational Research, 89(4), 206-218.
Herl, H. E., O’Neil, H. F., Jr., Chung, G., & Schacter, J. (1999). Reliability and validity of a computer-based knowledge mapping system to measure content understanding. Computers in Human Behavior, 15, 315-333.
Hong, E., & O’Neil, H. F. Jr. (2001). Construct validation of a trait self-regulation model.
International Journal of Psychology, 36(3), 186-194.
Hsieh, I. (2001). Types of feedback in a computer-based collaborative problem-solving group task. Unpublished doctoral dissertation, University of Southern California.
Isaac, S., & Michael, W. B. (1997). Handbook in research and evaluation for education and the behavioral sciences (3rd ed.). San Diego, CA: EdITS.
Kalyuga, S., Chandler, P., Tuovinen, J., & Sweller, J. (2001). When problem solving is superior to studying worked examples. Journal of Educational Psychology, 93(3), 579-588.
King, K. W., & Morrison, M. (1998). A media buying simulation game using the Internet. Journalism & Mass Communication Educator, 53(3), 28-36.
Kirkpatrick, D. L. (1994). Evaluating training programs: The four levels. San Francisco, CA:
Berrett-Koehler Publishers.
Kirkpatrick, D. L. (1996, January). Great ideas revisited. Training and Development Journal,
54-59.
Kulhavy, R. W., & Wager, W. (1993). Feedback in programmed instruction: Historical context and implications for practice. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 3-20). Englewood Cliffs, NJ: Educational Technology Publications.
Lane, D. C. (1995). On a resurgence of management simulations and games. Journal of the
Operational Research Society, 46, 604-625.
Malone, T. W. (1981). Toward a theory of intrinsically motivating instruction. Cognitive Science,
4, 333-369.
Manning, B. H., Glasner, S. E., & Smith, E. R. (1996). The self-regulated learning aspect of metacognition: A component of gifted education. Roeper Review, 18(3), 217-223.
Marsh, H. W., & Roche, L. A. (1997). Making students’ evaluations of teaching effectiveness
effective: The critical issues of validity, bias, and utility. American Psychologist, 52(11),
1187-1197.
Martin, A. (2000). The design and evaluation of a simulation/game for teaching information
systems development. Simulation & Gaming, 31(4), 445-463.
Mayer, R. E. (1998). Cognitive, metacognitive, and motivational aspects of problem solving.
Instructional Science, 26, 49-63.
Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
Mayer, R. E. (2002). A taxonomy for computer-based assessment of problem solving. Computers in Human Behavior, 18, 623-632.
Mayer, R. E., & Moreno, R. (1998). A split-attention effect in multimedia learning: evidence for
dual processing systems in working memory. Journal of Educational Psychology, 90(2),
312-320.
Mayer, R. E., Mautone, P., & Prothero, W. (2002). Pictorial aids for learning by doing in a multimedia geology simulation game. Journal of Educational Psychology, 94(1), 171-185.
Mayer, R. E., & Sims, V. K. (1994). For whom is a picture worth a thousand words? Extensions
of a dual-coding theory of multimedia learning. Journal of Educational Psychology, 86,
389-401.
Mayer, R. E., & Wittrock, M. C. (1996). Problem-solving transfer. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 47-62). New York: Macmillan Library Reference USA, Simon & Schuster Macmillan.
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52.
Mehrotra, C. M. (2001). Evaluation of a training program to increase faculty productivity in
aging research. Gerontology & Geriatrics Education, 22(3), 79-91.
Moreno, R., & Mayer, R. E. (2000). Engaging students in active learning: The case for
personalized multimedia messages. Journal of Educational Psychology, 92(4), 724-733.
Morris, E. (2001). The design and evaluation of Link: A computer-based teaching system for
correlation. British Journal of Educational Technology, 32(1), 39-52.
Mulqueen, W. E. (2001). Technology in the classroom: lessons learned through professional
development. Education, 122(2), 248-256.
Mwangi, W., & Sweller, J. (1998). Learning to solve compare word problems: The effect of example format and generating self-explanations. Cognition and Instruction, 16(2), 173-199.
Nabors, M. L. (1999). New functions for “old Macs”: Providing immediate feedback for student teachers through technology. International Journal of Instructional Media, 26(1), 105-107.
Naugle, K. A., Naugle, L. B., & Naugle, R. J. (2000). Kirkpatrick’s evaluation model as a means
of evaluating teacher performance. Education, 121(1), 135-144.
Novak, J. D. (1990). Concept maps and Vee diagrams: Two metacognitive tools to facilitate meaningful learning. Instructional Science, 19(1), 29-52.
Okagaki, L. & Frensch, P.A. (1994). Effects of video game playing on measures of spatial
performance: Gender effects in late adolescence. Journal of Applied Developmental
Psychology, 15, 33-58.
O’Neil, H. F., Jr. (Ed.). (1978). Learning strategies. New York: Academic Press.
O’Neil, H. F., Jr. (1999). Perspectives on computer-based performance assessment of
problem-solving. Computers in Human Behavior, 15, 225-268.
O’Neil, H. F., Jr. (2003). What works in distance learning. Los Angeles: University of Southern
California; UCLA/National Center for Research on Evaluation, Standards, and Student
Testing (CRESST).
O’Neil, H. F., Jr., & Abedi, J. (1996). Reliability and validity of a state metacognitive inventory: Potential for alternative assessment. Journal of Educational Research, 89, 234-245.
O’Neil, H. F., Jr., & Andrews, D. (Eds). (2000). Aircrew training and assessment. Mahwah, NJ:
Lawrence Erlbaum Associates.
O’Neil, H. F., Jr., Baker, E. L., & Fisher, J. Y.-C. (2002). A formative evaluation of ICT games.
Los Angeles: University of Southern California; UCLA/National Center for Research on
Evaluation, Standards, and Student Testing (CRESST).
O’Neil, H. F., Jr., & Fisher, J. Y.-C. (2002). A technology to support leader development: Computer games. In D. V. Day & S. J. Zaccaro (Eds.), Leadership development for transforming organizations. Mahwah, NJ: Lawrence Erlbaum Associates.
O’Neil, H. F., Jr., & Herl, H. E. (1998). Reliability and validity of a trait measure of self-regulation. Los Angeles: University of California, Center for Research on Evaluation, Standards, and Student Testing (CRESST).
O’Neil, H. F., Jr., Mayer, R. E., Herl, H. E., Niemi, C., Olin, K., & Thurman, R. A. (2000).
Instructional strategies for virtual aviation training environments. In H. F. O’Neil, Jr., &
D. H. Andrews (Eds.), Aircrew training and assessment, (pp. 105-130). Mahwah, NJ:
Lawrence Erlbaum Associates.
Paas, F. G. W. C., & Van Merriënboer, J. J. G. (1994). Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach. Journal of Educational Psychology, 86(1), 122-133.
Paas, F., Renkl, A., & Sweller, J. (Eds.). (2003). Cognitive load theory [Special issue]. Educational Psychologist, 38(1).
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38(1), 1-4.
Parchman, S. W., Ellis, J. A., Christinaz, D., & Vogel, M. (2000). An evaluation of three
computer-based instructional strategies in basic electricity and electronics training.
Military Psychology, 12(1), 73-87.
Peat, M., & Franklin, S. (2002). Supporting student learning: The use of computer-based formative assessment modules. British Journal of Educational Technology, 33(5), 515-523.
Perkins, D. N., & Salomon, G. (1989). Are cognitive skills context bound? Educational
Researcher, 18, 16-25.
Petty, R. E., Priester, J. R., & Wegener, D. T. (1994). Handbook of social cognition. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Pillay, H. K., Brownlee, J., & Wilss, L. (1999). Cognition and recreational computer games: Implications for educational technology. Journal of Research on Computing in Education, 32, 203-216.
Pintrich, P. R., & DeGroot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82, 33-40.
Pirolli, P., & Recker, M. (1994). Learning strategies and transfer in the domain of programming. Cognition & Instruction, 12(3), 235-275.
Poedubicky, V. (2001). Using technology to promote healthy decision making. Learning and
Leading with Technology, 28(4), 18-21.
Ponsford, K. R., & Lapadat, J. C. (2001). Academically capable students who are failing in high
school: Perceptions about achievement. Canadian Journal of Counselling, 35(2),
137-156.
Quinn, C. N. (1991). Computers for cognitive research: A HyperCard adventure game. Behavior
Research Methods, Instruments, & Computers, 23(2) 237-246.
Quinn, C. N. (1996). Designing an instructional game: Reflections on “Quest for independence.”
Education and Information Technologies, 1, 251-269.
Quinn, C. N., Alem, L., & Eklund, J. (1997). Retrieved August 30, 2003, from http://www.testingcentre.com/jeklund/interact.htm
Rabbitt, P., Banerji, N., & Szymanski, A. (1989). Space fortress as an IQ test? Predictions of learning and of practiced performance in a complex interactive video-game. Acta Psychologica Special Issue: The Learning Strategies Program: An Examination of the Strategies in Skill Acquisition, 71(1-3), 243-257.
Renkl, A. (2002). Worked-out examples: Instructional explanations support learning by self-explanations. Learning and Instruction, 12(5), 529-556.
Renkl, A., & Atkinson, R. K. (2003). Structuring the Transition From Example Study to Problem
Solving in Cognitive Skill Acquisition: A Cognitive Load Perspective. Educational
Psychologist, 38(1), 15-22.
Renkl, A., Atkinson, R. K., & Maier, U. H. (2002). From example study to problem solving:
smooth transitions help learning. The Journal of Experimental Education, 70(4), 293-315.
Rhodenizer, L., Bowers, C., & Bergondy, M. (1998). Team practice schedules: What do we know?
Perceptual and Motor Skills, 87, 31-34.
Ricci, K. E., Salas, E., & Cannon-Bowers, J. A. (1996). Do computer-based games facilitate
knowledge acquisition and retention? Military Psychology, 8, 295-307.
Rieber, L. P. (1996). Animation as feedback in computer simulation: Representation matters.
Educational Technology Research and Development, 44(1), 5-22.
Rieber, L.P. (1996). Seriously considering play: Designing interactive learning environments
based on the blending of microworlds, simulations, and games. Educational Technology,
Research and Development, 44, 43-58.
Ritchie, D., & Dodge, B. (1992, March). Integrating technology usage across the curriculum
through educational adventure games. (ED 349 955).
Rosenorn, T., & Kofoed, L. B. (1998). Reflection in learning processes through simulation/gaming. Simulation & Gaming, 29(4), 432-440.
Ross, S. M., & Morrison, G. R. (1993). Using feedback to adapt instruction for individuals. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 177-195). Englewood Cliffs, NJ: Educational Technology Publications.
Ruben, B. D. (1999, December). Simulations, Games, and experience-based learning: The quest
for a new paradigm for teaching and learning. Simulation & Gaming, 30(4), 498-505.
Ruiz-Primo, M. A., Schultz, S. E., & Shavelson, R. J. (1997). Concept map-based assessment in science: Two exploratory studies (CSE Tech. Rep. No. 436). Los Angeles: University of California, Center for Research on Evaluation, Standards, and Student Testing (CRESST).
Salas, E. (2001). Team training in the skies: does crew resource management (CRM) training
work? Human Factors, 43(4), 641-674.
Sales, G. C. (1993). Adapted and adaptive feedback in technology-based instruction. In J. V. Dempsey & G. C. Sales (Eds.), Interactive instruction and feedback (pp. 159-176). Englewood Cliffs, NJ: Educational Technology Publications.
Santos, J. (2002). Developing and implementing an Internet-based financial system simulation
game. Journal of Economic Education, 33(1) 31-40.
Schacter, J., Herl, H. E., Chung, G., Dennis, R. A., & O’Neil, H. F., Jr. (1999). Computer-based performance assessments: A solution to the narrow measurement and reporting of problem-solving. Computers in Human Behavior, 15, 403-418.
Schank, R. C. (1997). Virtual learning: A revolutionary approach to build a highly skilled
workforce. New York: McGraw-Hill Trade.
Schau, C., & Mattern, N. (1997). Use of map techniques in teaching applied statistics courses. The American Statistician, 51, 171-175.
Schau, C., Mattern, N., Zeilik, M., Teague, K., & Weber, R. (2001). Select-and-fill-in concept map scores as a measure of students’ connected understanding of science. Educational & Psychological Measurement, 61(1), 136-158.
Schunk, D. H., & Ertmer, P. A. (1999). Self-regulatory processes during computer skill acquisition: Goal and self-evaluative influences. Journal of Educational Psychology, 91(2).
Scriven, M. (1967). The methodology of evaluation. In R. W. Tyler, R. M. Gagné, & M. Scriven
(Eds.), Perspectives of curriculum evaluation (American Educational Research
Association Monograph Series on Curriculum Evaluation, No. 1, pp. 39-83). Chicago:
Rand McNally.
Simon, H. A. (1973). The structure of ill structured problems. Artificial Intelligence, 4, 181-201.
Sternberg, R. J., & Lubart, T. E. (2003). The role of intelligence in creativity. In M. A. Runco
(Ed.), Critical Creative Processes. Perspectives on Creativity Research (pp. 153-187).
Cresskill, NJ: Hampton Press.
Stolk, D., Alesandrian, D., Gros, B., & Paggio, R. (2001). Gaming and multimedia applications
for environmental crisis management training. Computers in Human Behavior, 17,
627-642.
Sweller, J. (1989). Cognitive technology: Some procedures for facilitating learning and problem solving in mathematics and science. Journal of Educational Psychology, 81, 457-466.
Sweller, J. (1990). Cognitive processes and instruction procedures. Australian Journal of Education, 34(2), 125-130.
Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning & Instruction, 4(4), 295-312.
Sweller, J., & Cooper, G. A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition & Instruction, 2(1), 59-89.
Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251-296.
Thomas, P., & Macredie, R. (1994). Games and the design of human-computer interfaces.
Educational Technology, 31, 134-142.
Thornburg, D. G., & Pea, R. D. (1991). Synthesizing instructional technologies and educational culture: Exploring cognition and metacognition in the social studies. Journal of Educational Computing Research, 7(2), 121-164.
Tkacz, S. (1998). Learning map interpretation: Skill acquisition and underlying skills. Journal of
Environmental Psychology, 18 (3), 237-249.
Urdan, T., & Midgley, C. (2001). Academic self-handicapping: What we know, what more there is to learn. Educational Psychology Review, 13, 115-138.
Van Gerven, P. W. M., Paas, F. G. W. C., & Van Merriënboer, J. J. G. (2002). Cognitive load theory and aging: Effects of worked examples on training efficiency. Learning and Instruction, 12(1), 87-105.
Van Merrienboer, J. J. G., Clark, R. E., & de Croock, M. B. M. (2002). Blueprints for complex
learning: The 4C/ID-Model. Educational Technology Research & Development, 50(2),
39-64.
Van Merriënboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking the load off a learner’s mind: Instructional design for complex learning. Educational Psychologist, 38(1), 5-13.
Ward, M., & Sweller, J. (1990). Structuring effective worked examples. Cognition & Instruction, 7(1), 1-39.
Washbush, J., & Gosen, J. (2001). An exploration of game-derived learning in total enterprise simulations. Simulation & Gaming, 32(3), 281-296.
Weller, M. (2000). Implementing a CMC tutor group for an existing distance education course. Journal of Computer Assisted Learning, 16(3), 178-183.
Wellington, W. J., & Faria, A. J. (1996). Team cohesion, player attitude, and performance expectations in simulation. Simulation & Gaming, 27(1).
West, D. C., Pomeroy, J. R., Park, J. K., Gerstenberger, E. A., & Sandoval, J. (2000). Critical thinking in graduate medical education. Journal of the American Medical Association, 284(9), 1105-1110.
Westbrook, J. I., & Braithwaite, J. (2001). The health care game: An evaluation of a heuristic, web-based simulation. Journal of Interactive Learning Research, 12(1), 89-104.
White, B. Y., & Frederiksen, J. R. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction, 16(1), 3-118.
Winne, P. H., & Perry, N. E. (2000). Measuring self-regulated learning. In M. Boekaerts & P. R. Pintrich (Eds.), Handbook of self-regulation (pp. 531-566). San Diego, CA: Academic Press.
Woolfolk, A. E. (2001). Educational Psychology (8th ed.). Needham Heights, MA: Allyn and
Bacon.
Zhu, X., & Simon, H. (1987). Learning mathematics from examples and by doing. Cognition and Instruction, 4, 137-166.
Ziegler, A., & Heller, K. A. (2000). Approach and avoidance motivation as predictors of achievement behavior in physics instruction among mildly and highly gifted eighth-grade students. Journal for the Education of the Gifted, 23(4), 343-359.
Zimmerman, B. J. (1994). Dimensions of academic self-regulation: A conceptual framework for education. In D. H. Schunk & B. J. Zimmerman (Eds.), Self-regulation of learning and performance (pp. 3-21). Hillsdale, NJ: Erlbaum.
Zimmerman, B. J. (2000). Self-efficacy: An essential motive to learn. Contemporary Educational Psychology, 25(1), 82-91.
Appendix A
Expert Map
[Figure: a sample expert knowledge map for SafeCracker; see the scoring example in Figure 4.]
Appendix B
Problem Solving Strategy Questionnaire
User ID: _______________________
Please answer the following questions. You do not have to write complete or perfect sentences; key words or incomplete sentences are fine.
Retention Question:
Write an explanation of how you solved the puzzles in the rooms:
Transfer Question:
List some ways to improve the fun or challenge of the game:
Appendix C
Trait Self-Regulation Questionnaire
User ID: __________________________________________________________________________
Directions: A number of statements that people have used to describe themselves are given below. Read each statement and indicate how you generally think or feel on learning tasks by filling in the corresponding number on your answer sheet. There are no right or wrong answers. Do not spend too much time on any one statement. Remember, fill in the number that seems to describe how you generally think or feel: 1 = almost never, 2 = sometimes, 3 = often, 4 = almost always.
1. I determine how to solve a task before I begin.
2. I check how well I am doing when I solve a task.
3. I work hard to do well even if I don't like a task.
4. I believe I will receive an excellent grade in this course.
5. I carefully plan my course of action.
6. I ask myself questions to stay on track as I do a task.
7. I put forth my best effort on tasks.
8. I'm certain I can understand the most difficult material presented in the readings for this course.
9. I try to understand tasks before I attempt to solve them.
10. I check my work while I am doing it.
11. I work as hard as possible on tasks.
12. I'm confident I can understand the basic concepts taught in this course.
13. I try to understand the goal of a task before I attempt to answer.
14. I almost always know how much of a task I have to complete.
15. I am willing to do extra work on tasks to improve my knowledge.
16. I'm confident I can understand the most complex material presented by the teacher in this course.
17. I figure out my goals and what I need to do to accomplish them.
18. I judge the correctness of my work.
19. I concentrate as hard as I can when doing a task.
20. I'm confident I can do an excellent job on the assignments and tests in this course.
21. I imagine the parts of a task I have to complete.
22. I correct my errors.
23. I work hard on a task even if it does not count.
24. I expect to do well in this course.
25. I make sure I understand just what has to be done and how to do it.
26. I check my accuracy as I progress through a task.
27. A task is useful to check my knowledge.
28. I'm certain I can master the skills being taught in this course.
29. I try to determine what the task requires.
30. I ask myself, how well am I doing, as I proceed through tasks.
31. Practice makes perfect.
32. Considering the difficulty of this course, the teacher, and my skills, I think I will do well in this course.
Copyright © 1995, 1997, 1998, 2000 by Harold F. O’Neil, Jr.