User Model User-Adap Inter (2017) 27:55–88
DOI 10.1007/s11257-016-9186-6
Enhancing learning outcomes through self-regulated
learning support with an Open Learner Model
Yanjin Long¹ · Vincent Aleven²
Received: 15 November 2015 / Accepted: 6 September 2016 / Published online: 31 December 2016
© Springer Science+Business Media Dordrecht 2016
Abstract Open Learner Models (OLMs) have great potential to support students’
Self-Regulated Learning (SRL) in Intelligent Tutoring Systems (ITSs). Yet few classroom experiments have been conducted to empirically evaluate whether and how an
OLM can enhance students’ domain level learning outcomes through the scaffolding
of SRL processes in an ITS. In two classroom experiments with a total of 302 7th- and
8th-grade students, we investigated the effect of (a) an OLM that supports students’
self-assessment of their equation-solving skills and (b) shared control over problem
selection, on students’ equation-solving abilities, enjoyment of learning with the tutor,
self-assessment accuracy, and problem selection decisions. In the first, smaller experiment, the hypothesized main effect of the OLM on students’ learning outcomes was
confirmed; we found no main effect of shared control of problem selection, nor an
interaction. In the second, larger experiment, the hypothesized main effects were not
confirmed, but we found an interaction such that the students who had access to the
OLM learned significantly better equation-solving skills than their counterparts when
shared control over problem selection was offered in the system. Thus, the two exper-
The paper is based on work that was conducted while the first author was at the Human-Computer
Interaction Institute of Carnegie Mellon University. A conference paper based on Classroom Experiment
1 was published in 2013 (Long and Aleven 2013b).
B
Yanjin Long
ylong@pitt.edu
Vincent Aleven
aleven@cs.cmu.edu
1
Learning Research and Development Center, University of Pittsburgh, 3939 O’Hara Street,
Pittsburgh, PA 15213, USA
2
Human-Computer Interaction Institute, Carnegie Mellon University, 5000 Forbes Avenue,
Pittsburgh, PA 15213, USA
123
56
Y. Long , V. Aleven
iments support the notion that an OLM can enhance students’ domain-level learning
outcomes through scaffolding of SRL processes, and are among the first in-vivo classroom experiments to do so. They suggest that an OLM is especially effective if it is
designed to support multiple SRL processes.
Keywords Open Learner Model · Self-assessment · Making problem selection
decisions · Intelligent tutoring system · Learner control · Self-regulated learning ·
Classroom experiment
1 Introduction
The work presented in this article investigates how Open Learner Models (OLMs)
used in intelligent tutoring systems (ITSs) can be designed to facilitate and enhance
aspects of students’ self-regulated learning, specifically, their assessment of their own
knowledge and their selection of suitable tasks for practice. Furthermore, we investigate whether the support of these two SRL processes will result in improvements in students' domain level learning outcomes and motivation. ITSs are a type of adaptive learning technology that provides scaffolded problem-solving practice to enhance students' domain level learning (VanLehn 2011). ITSs generally track students' learning status (how much and how well they have learned) using a range of student modeling methods. They often display their assessment in an OLM, which the learner can inspect
(Bull and Kay 2008, 2010). OLMs use a variety of visualizations such as skill meters
(Corbett and Bhatnagar 1997; Martinez-Maldonado et al. 2015; Mitrovic and Martin
2007), and concept maps (Mabbott and Bull 2007; Perez-Marin et al. 2007). Bull and
Kay (2008) pointed out that OLMs have great potential to facilitate metacognitive
processes involved in self-regulated learning, such as self-assessment, planning and
reflection.
Self-regulated learning (SRL) is a critical component of student learning. Zimmerman and Martinez-Pons (1986) define SRL as “the degree to which students
are metacognitively, motivationally, and behaviorally active participants in their own
learning process.” Theories of SRL abound (Pintrich 2004; Winne and Hadwin 1998;
Zimmerman 2000). All tend to view learning as repeated cycles with broad phases
such as forethought, execution, and evaluation, with learning experiences in one cycle
critically influencing those in the next in intricate ways. A number of empirical studies have shown that the use of SRL processes (both cognitive and metacognitive
processes) accounts significantly for the differences in students’ academic performance in different domains of learning (Zimmerman and Martinez-Pons 1986; Pintrich
and Groot 1990; Cleary and Zimmerman 2000), such as reading comprehension,
math problem solving, music learning, athletic practice, etc. In the current paper,
we focus on using an OLM and shared student/system control to facilitate and scaffold two critical SRL processes in ITSs, making problem selection decisions and
self-assessment, in order to foster better domain level learning outcomes and motivation.
Although ITSs are typically strong system-controlled learning environments, recent
research has started to offer some learner control to students (Long and Aleven 2013a;
Mitrovic and Martin 2007). Learner control may lead to higher motivation and engagement in learning with the systems (Clark and Mayer 2011; Schraw et al. 1998), which in turn may lead to better learning outcomes (Cordova and Lepper 1996). The current paper focuses on learner control over problem selection in ITSs. Prior research
has shown that good problem selection decisions can lead to more effective and efficient learning (Metcalfe 2009; Thiede et al. 2003). However, previous research has
also found that students are not good at making effective problem selection decisions on their own (Kostons et al. 2010; Metcalfe and Kornell 2003; Schneider and
Lockl 2002). Therefore, the process of making problem selection decisions needs to
be scaffolded.
Self-assessment is another critical SRL process. Accurate self-assessment of one’s
learning status lays a foundation for appropriate problem selection decisions (Metcalfe
2009). In addition, the process of self-assessing may lead to deep reflection and careful
processing of the learning content (Mitrovic and Martin 2007), which may in turn
lead to better learning outcomes. However, prior research indicates that students' self-assessments are often inaccurate (Dunlosky and Lipko 2007) and that students tend
not to actively self-assess or reflect while learning with an ITS (Long and Aleven
2011). Thus, self-assessment also needs to be supported in an ITS, perhaps especially
if the ITS grants students control over problem selection.
As argued by Bull and Kay (2010), OLMs have the potential to scaffold and facilitate both self-assessment and making problem selection decisions in ITSs. However,
few in-vivo classroom experiments have been conducted to investigate whether having an OLM enhances students’ domain-level learning outcomes, when the OLM is
designed to facilitate SRL processes in ITSs. The current work focuses on designing
an OLM that facilitates and scaffolds self-assessment and making problem selection
decisions in an ITS for equation solving, with the goal to foster better domain level
learning outcomes and motivation through supporting these two SRL processes. We
conducted two classroom experiments with 302 middle school students to investigate
the effect of having an OLM on students’ domain level learning outcomes, motivation
and self-assessment accuracy. The experiments also investigated the effect of offering students partial control over problem selection on their learning and motivation
in the tutor. The results of our studies contribute to the literature on the effectiveness of OLMs in enhancing students’ domain-level learning, motivation and SRL
processes. The work also provides design recommendations for OLMs and offering
shared student/system control over problem selection in online learning environments.
1.1 Theoretical background
1.1.1 Using Open Learner Models to support self-assessment, making problem
selection decisions, and domain level learning
Prior work has designed and evaluated OLMs that support self-assessment in ITSs.
There are four primary types of OLMs: inspectable, co-operative, editable, and negotiated models (Bull 2004), which vary in the types of interactions involving the model
between the system, students and other potential users such as teachers, parents and
peers (Bull and Kay 2010, 2016). Kerly and Bull (2008) compared the effect of
an inspectable OLM against a negotiated OLM on primary school students' self-assessment accuracy. The students used the OLMs and self-assessed their learning
status regarding the learning topics in the system. Log data analyses revealed that the
students’ self-assessment accuracy (evaluated against the system-measured learning
status) improved significantly, especially with the negotiated models (Kerly and Bull
2008). Other forms of OLMs have been designed to facilitate students’ identification and self-assessment of their misconceptions and difficulties (Bull et al. 2010),
sometimes with involvement of parents (Lee and Bull 2008). These studies evaluated the effect of the OLMs on facilitating self-assessment mainly through self-report
questionnaires and analyses of log traces, and found that students were highly interested in viewing their misconceptions displayed in the OLM. On the other hand, the
effect of the self-assessment support through OLM on students’ domain level learning
outcomes was generally not measured in these studies.
Researchers have also explored how to use OLMs to support students in making
problem selection decisions in ITSs (Kay 1997; Mitrovic and Martin 2007). This line
of work highlights that it is challenging for students to make good problem selection decisions, and suggests that they might benefit from scaffolding. Mitrovic and
Martin (2003) tried to teach problem selection strategies through a scaffolding-fading
paradigm with the aid of an OLM in their SQL Tutor. Students with low prior knowledge first selected problems and then received feedback on their selection stating what
the system would have selected for them and why (the OLM was shown to the students
to help explain the system’s decisions). After they had attained a certain level of the
targeted domain knowledge, the scaffolding was faded, and the students selected their
own problems without receiving any feedback. The results indicated that the students
in the fading condition were more likely to select the same problems as the system
would have selected for them when the scaffolding was in effect. However, whether
or not these students kept making better problem selection decisions during the fading
stage (i.e., without feedback) was not measured.
Adaptive navigation support in hypermedia learning environments often has functions similar to those of an OLM, as it often highlights to the learners what they need to
study next based on their learning status and the difficulty of the topics (Brusilovsky
2004; Brusilovsky et al. 1996) as a way of scaffolding students’ problem selection
decisions. Effective designs that assist students in making good decisions on what to
attend to next include using headers and site maps, eliminating the links to irrelevant
materials, highlighting important topics, and so forth (Clark and Mayer 2011). These
designs help reduce students’ cognitive load for monitoring their learning status and
making the problem selection decisions. They may also help to make up for students’
lack of domain knowledge and metacognitive knowledge that is required to make
good decisions. In a study with QuizGuide, an adaptive hypermedia learning system,
Brusilovsky et al. (2004) found that with adaptive navigation support (designs in the
system that highlight important topics and topics that need more practice based on
students’ current learning status), students’ use of the system increased, as well as
their final academic performance.
Lastly, controlled experiments that evaluate the effect of OLMs on enhancing
domain level learning outcomes are sparse in the prior literature, and have generated
mixed results. Arroyo et al. (2007) investigated the effect of an OLM with information regarding students’ domain level performance, accompanied by metacognitive
tips. They found that students in the OLM group achieved greater learning gains and
were more engaged in learning, compared to the no OLM condition. Metacognitive
tips alone, without the accompanying OLM, were ineffective, suggesting (without
definitively proving) that the improvement may be due primarily to the OLM. Hartley
and Mitrovic (2002) compared students’ learning gains with or without access to an
inspectable OLM, but failed to find a significant difference between the two conditions. The less able students’ performance in this experiment improved significantly
from pre- to post-test in both conditions (Hartley and Mitrovic 2002; Mitrovic and
Martin 2007). Brusilovsky et al. (2015) compared the effectiveness of an OLM integrated with social elements to a traditional OLM with only progress information in
a classroom experiment with college students, and found superior effects on learning
effectiveness and engagement for the OLM with social features. Basu et al. (2017)
integrated adaptive scaffolding in an open-ended learning environment to help middle
school students build their own learner models. In a classroom experiment, they found
that students with the adaptive scaffolding achieved better understanding of science
concepts as compared to their counterparts who had to build learner models without
any scaffolding from the system (Basu et al. 2017). Tongchai (2016) conducted an
experiment with college students and found that students who learned in a blended
course that used a learning management system with an embedded OLM achieved significantly better learning outcomes than their counterparts who learned in a traditional
face-to-face class in one semester. However, given the holistic comparison between the face-to-face and blended classes, the result cannot simply be attributed to the presence of the OLM (Tongchai 2016). Therefore, more empirical evaluations are needed
to investigate whether and how an OLM can significantly enhance students’ domain
level learning outcomes through the scaffolding of SRL processes in ITSs, which is
one of the future directions in OLM research identified by Bull and Kay (2016).
1.1.2 Shared control over problem selection in intelligent tutoring systems
Research on adaptive learning technologies has generally found that students are
unable to make problem selection decisions that are as good as those made by computer
algorithms based on cognitive and instructional theories. In a classic experiment in
which participants learned vocabulary in a second language, Atkinson (1972) found
that the student-selected practice condition achieved better learning outcomes than
the random-selected condition, but was worse than the computer-selected condition,
which implemented a mathematical algorithm that takes into account students' learning
status and item difficulty. Nevertheless, letting students make problem selection decisions in ITSs grants students a degree of learner control. Learner control has generally
been considered motivating to students (Clark and Mayer 2011), which may lead to
higher enjoyment and better domain level learning outcomes.
Learner control can be applied to different aspects of student learning in learning
technologies, e.g., selecting instructional materials (Brusilovsky 2004), deciding the
characteristics of the interface (Cordova and Lepper 1996), selecting how the system
should be personalized through different ways of integrating the algorithms for item recommendation (Jameson and Schwarzkopf 2002; Parra and Brusilovsky 2015), or even
deciding when to request hints from the system (Aleven and Koedinger 2000). Learner
control spans a spectrum from full system control to full student control, with different ways of shared student/system control in between. Shared control means that the
system and learners each control part of the learning activities and learning resources.
In addition to using OLMs or adaptive navigation support to scaffold learner-controlled problem selection in ITSs, shared student/system control offers a different approach to scaffolding problem selection than the OLM approach described above (Brusilovsky et al. 2004; Mitrovic and Martin 2007). With
shared control, the system can help prevent students from making suboptimal problem
selection decisions due to lack of domain and metacognitive knowledge, as well as
lack of appropriate motivation (e.g., to challenge themselves with new problem types).
Sharing control may also help alleviate students’ cognitive load. For example, Corbalan
et al. (2008) implemented a form of adaptive shared control over problem selection
in a web-based learning application for health sciences. The tutor pre-selected problem types for the students based on the task difficulty and available support that were
tailored to the students’ competence level and self-reported cognitive load. For each
selected problem type, the tutor presented problems to the student that only differed
with respect to superficial features (e.g., the species and traits in a genetics problem).
The student could then choose from these problems. This form of shared control led to
the same learning outcomes as the full system-controlled condition in the experiment,
although—contrary to expectation—it did not foster higher interest in using the system
(Corbalan et al. 2008). In another study, the same authors (Corbalan et al. 2009) manipulated the level of variability of the surface features of the problems, hypothesizing
that higher variability of the surface features would enhance the students’ perceived
control. They found that shared control combined with high variability of surface features led to significantly better learning outcomes and task involvement than shared
control with low variability features. Nevertheless, overall the shared control conditions did not lead to significantly better learning outcomes than the system-controlled
conditions. To sum up, prior work on shared student/system control over problem selection in ITSs has found only that it leads to learning outcomes comparable to (but not better than) those of full system control, and no significant motivational benefits have been observed.
1.2 Research questions
Prior literature has highlighted the great potential of designing OLMs to facilitate
self-assessment and making problem selection decisions in ITSs. However, few in-vivo classroom experiments have empirically evaluated whether and how such designs
enhance students’ domain level learning outcomes and motivation through supporting
these two SRL processes. Moreover, shared control over problem selection offers
an approach that may be integrated with the design of an OLM to further support the
SRL processes, which may in turn lead to better learning outcomes and motivation.
Specifically, it is an open question whether the integration of OLM and shared control
will generate better learning outcomes and enjoyment than learning with full system
control over problem selection.
To address these open issues, we conducted two classroom experiments to investigate the following research questions: (1) Does the presence of an OLM that supports
self-assessment and making problem selection decisions in an ITS lead to better learning outcomes, greater enjoyment and more accurate self-assessment, as compared to
a no OLM condition? (2) Does shared control over problem selection lead to better
learning outcomes and greater enjoyment, as compared to a full system control condition? And (3) Does the presence of shared control over problem selection further
enhance the effect of the OLM on learning outcomes and enjoyment?
The first experiment was conducted in a fall semester with a small sample size (N =
57). Therefore, we conducted a replication study in the following spring semester
with a larger sample size (N = 245) and different groups of students to further
solidify our results. The procedures were kept the same in both experiments, except
that an enjoyment questionnaire was added to measure students’ enjoyment of using
the system in the second experiment.
2 Research platform
2.1 Lynnette and its built-in Open Learner Model
2.1.1 Example-tracing tutors and Lynnette
We used an intelligent tutoring system for equation solving, Lynnette, as the platform
of the research (Long and Aleven 2014; Waalkens et al. 2013). At the time, Lynnette
was an example-tracing tutor (Aleven et al. 2009, 2016) built with the Cognitive
Tutor Authoring Tools (CTAT, http://ctat.pact.cs.cmu.edu/). Example-tracing tutors
are a type of intelligent tutoring system that can be built without programming. They
behave similarly to Cognitive Tutors (Koedinger and Corbett 2006), a widely used
type of tutoring system from which they derive. Example-tracing tutors (including
Lynnette) provide a user interface in which each problem to be solved by the student is
broken into steps and provide step-by-step guidance as the student solves problems in
this interface. This guidance includes correctness feedback for each step, right after the
student attempts the step. Also, at any point in time, the student can request a hint as to
what to do next. This guidance is sensitive to the student’s path through the problem,
as is appropriate in a domain such as equation solving, where problems can have many
solution paths. Students need to get all the (required) steps correct in order to complete
a problem. Example-tracing tutors work by evaluating students’ problem-solving steps
against generalized examples of correct and incorrect problem solving steps (Aleven
et al. 2009), captured in behavior graphs that record the various ways of solving each
problem. The use of behavior graphs is a key difference from Cognitive Tutors, which
instead use rule-based cognitive models (Aleven 2010; Koedinger and Corbett 2006)
of the problem-solving skills targeted in the tutor. Example-tracing tutors can track
individual students’ knowledge growth using the well-known Bayesian Knowledge
Tracing method (Corbett and Anderson 1995). As described in more detail below, this model forms the basis for their OLM.

Table 1 Five types of equations in Lynnette

Equations                      Example             Level/problem type
One step                       x + 5 = 7           Level 1
Two steps                      2x + 1 = 7          Level 2
Multiple steps                 3x + 1 = x + 5      Level 3
Parentheses                    2(x + 1) = 8        Level 4
Parentheses, more difficult    2(x + 1) + 1 = 5    Level 5

Fig. 1 The problem-solving interface of Lynnette
Lynnette provides practice for five types of equations, shown in Table 1. As illustrated in Fig. 1, it offers step-by-step feedback and on-demand hints for equation
solving steps, and asks the students to self-explain each main step through drop-down
menus. Lynnette had been used in two previous classroom studies with 155 6th- to 8th-grade students. One study found an effect size of .69 (Cohen's d) on students' learning gains from pre- to post-tests (Waalkens et al. 2013). The second study observed a ceiling effect on the pre-test (mean = 0.91, SD = 0.14) and thus did not find a significant improvement on the post-test (Long and Aleven 2013a).
2.1.2 Lynnette’s Open Learner Model
Lynnette’s OLM displays probability estimates of students’ mastery of the knowledge
components they are learning, depicted as “skill bars” (see Fig. 2). A knowledge
component (KC) is defined as "an acquired unit of cognitive function or structure that can be inferred from performance on a set of related tasks" (Koedinger et al. 2012). We generally refer to KCs as "skills" in classroom use.

Fig. 2 The built-in OLM of Lynnette; it displays skill mastery in the form of skill bars

The OLM was originally included in the tutors to give students a sense of progress and of how close they are to finishing the tutor unit they are working on. A bar turns gold (as illustrated in Fig. 2) when the student reaches a 95% probability of mastering the corresponding skill. To complete a unit, students need to get all their skills in the unit to the gold level. Skill bars are a common type
of OLM embedded in ITSs, for example in Cognitive Tutors (Corbett and Bhatnagar
1997), the SQL-tutor (Mitrovic and Martin 2007), and an interactive tabletop learning
environment (Martinez-Maldonado et al. 2015). In a survey with college students,
Bull et al. (2016) also found that students favored viewing simple skill meters in their
learning systems over more complex OLMs such as treemaps or radar plots.
As mentioned, the probability estimates displayed in the skill meter are computed
using Bayesian Knowledge Tracing (BKT, Corbett and Anderson 1995). This process depends on having a KC model (e.g., Aleven and Koedinger 2013; Koedinger
et al. 2010), that is, a fine-grained decomposition of the knowledge needed for the
given task domain, in the form of knowledge components. Many ITSs have taken a
KC approach to learner modeling (Aleven and Koedinger 2013), including Cognitive
Tutors (Anderson et al. 1995; Koedinger and Aleven 2007), constraint-based tutors
(Mitrovic et al. 2001), example-tracing tutors (Aleven et al. 2009, 2016) and Andes
(VanLehn et al. 2005). This means they assess students’ knowledge continuously and
on a KC-by-KC basis, based on student performance on the practice opportunities
supported by the system. This process requires having a mapping of problem steps
to KCs (often referred to as a "Q matrix"). This mapping is provided by the example-tracing tutor's behavior graphs; authors need to label each link in the graph (links represent problem-solving steps) with the KCs that are required for that step.
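To illustrate the kind of step-to-KC mapping this labeling produces, here is a minimal sketch in Python; the step descriptions and KC names are invented for illustration and are not Lynnette's actual labels.

```python
# A hypothetical step-to-KC mapping ("Q matrix") for one equation-solving problem.
# Each problem-solving step (a link in the behavior graph) is labeled with the KCs it requires.
q_matrix = {
    "subtract 1 from both sides": ["subtract-constant-from-both-sides"],
    "divide both sides by 2":     ["divide-both-sides-by-coefficient"],
    "distribute 2 over (x + 1)":  ["distribute-over-parentheses"],
}
```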
BKT has a long history in the field of ITS. It is being used in commercial tutoring
systems (e.g., Cognitive Tutors marketed by Carnegie Learning, Inc.) and is also
making its way into MOOCs and online courses (Pardos et al. 2013). Also, it has been
widely investigated in the educational data mining community. Individualized task
selection and mastery learning based on BKT have been shown to markedly improve
student learning (Corbett 2000; Desmarais and Baker 2012). BKT models are based on
relatively simple assumptions about learning and performance (Corbett and Anderson
1995). At any point in time, a given student either knows a KC of interest or does not
know it. At any opportunity to apply the KC, she may learn it. When a student knows a
given KC, she will generally perform correctly on problem steps that involve the KC,
unless she slips. Likewise, when she does not know a given KC, she will not perform
correctly, unless she guesses right. These assumptions are captured in formulas that
compute the probability of a student knowing a KC after an opportunity to apply it.
These formulas use four parameters per KC, which can for example be estimated
from tutor log data: p(L_0), the probability of knowing the KC prior to the instruction; p(T), the probability of learning the KC on any opportunity; p(S), the probability of getting a step wrong by slipping; and p(G), the probability of getting a step right by guessing. After opportunity n to apply a given KC, p(L_n | E_n), the probability that a given student knows this KC, given the evidence on the step, is computed as follows:

p(L_n | E_n) = p(L_{n-1} | E_n) + (1 - p(L_{n-1} | E_n)) \cdot p(T)
Intuitively, the student knows the given KC after opportunity n, given the evidence on opportunity n, if either she knew it before opportunity n or she did not know it but learned it on opportunity n. This requires computing the revised probability that the student knew the KC before opportunity n (i.e., p(L_{n-1} | E_n)), based on the evidence obtained on this opportunity. For correct performance, this revised probability is calculated as follows:

p(L_{n-1} | E_n = 1) = \frac{p(L_{n-1}) (1 - p(S))}{p(L_{n-1}) (1 - p(S)) + (1 - p(L_{n-1})) \, p(G)}

The probability that the student knew the KC before opportunity n, given correct performance on opportunity n (i.e., p(L_{n-1} | E_n = 1)), is equal to the probability of getting the step right by knowing the KC divided by the probability of getting the step right in any fashion (a direct application of Bayes' rule). The student can get the step right either by knowing the KC and not slipping, or by guessing right when not knowing it. The formula for computing p(L_{n-1} | E_n = 0) following incorrect performance on step n is analogous.
This method for learner modeling (i.e., BKT) has been validated extensively by others, for example by studying how well it predicts post-test scores or within-tutor performance (Corbett and Anderson 1995; Corbett and Bhatnagar 1997; Gong et al. 2011; Pardos and Heffernan 2010; Yudelson et al. 2013). For the current experiment, we did not estimate the four BKT parameters
from data, nor did we refine the KC model for equation solving. Instead, we used a KC
model based on initial cognitive task analysis and we used default parameter values
that are commonly used in Cognitive Tutors.
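As a concrete illustration of the update rules above, the following Python sketch implements one BKT step, covering both the correct-performance case shown explicitly and the analogous incorrect-performance case; the parameter values used in the example are placeholders, not the default values used in Lynnette.

```python
def bkt_update(p_know, correct, p_transit, p_slip, p_guess):
    """One Bayesian Knowledge Tracing step: return p(L_n | E_n), the probability that
    the student knows the KC after an opportunity, given whether the step was correct."""
    if correct:
        # p(L_{n-1} | E_n = 1): knew the KC and did not slip, over all ways of being correct
        p_prior_given_evidence = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # p(L_{n-1} | E_n = 0): knew the KC but slipped, over all ways of being incorrect
        p_prior_given_evidence = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Either the student already knew the KC, or did not know it and learned it now
    return p_prior_given_evidence + (1 - p_prior_given_evidence) * p_transit


# Example: trace one KC over a sequence of observed step outcomes (placeholder parameters)
p_know = 0.25  # p(L_0)
for outcome in [True, True, False, True, True]:
    p_know = bkt_update(p_know, outcome, p_transit=0.2, p_slip=0.1, p_guess=0.2)
    print(f"correct={outcome}  p(know)={p_know:.3f}")

mastered = p_know >= 0.95  # the mastery threshold used for Lynnette's gold skill bars
```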
2.2 The new features of the redesigned Open Learner Model in Lynnette
We went through a user-centered design process (e.g., conducted interviews, think
alouds and prototyping) to redesign Lynnette’s built-in OLM (i.e., the skill bars) so
that it supports students in self-assessing their mastery of the targeted knowledge and
in making problem selection decisions with shared control over problem selection between students and the system (Long and Aleven 2013a). Specifically, we implemented two views of the OLM with three new features: self-assessment prompts, delaying the update of the skill bars, and a high-level summary of progress on a problem selection screen. These views are displayed, respectively, at the end of each problem (View 1) and in between problems, to support problem selection (View 2).

Fig. 3 View 1 of the redesigned OLM; the three self-assessment prompts (on the left) and the overall progress bar (on top of the skill bars) are new features added to the original OLM

Fig. 4 The three self-assessment prompts replace the hint window (shown in Fig. 1) and appear one by one after the student enters the correct solution of the equation; the overall progress bar and the skill bars are shown after the student answers the third prompt and clicks the "View My Skills" button. The tutor animates the growing or shrinking of the level progress and skill bars
Self-Assessment Prompts We added self-assessment prompts to View 1 of the OLM,
which is shown on Lynnette’s problem-solving interface when the student finishes a
problem. (It is not visible during problem solving, in contrast to most standard OLMs;
see Fig. 1). The three self-assessment prompts (see Fig. 3, left part) appear first and
are shown one by one. Thus, students respond to the self-assessment prompts without
referencing the skill bars, so that they can independently reflect and self-assess. Once
the student has responded to these prompts, the “View My Skills” button appears.
Fig. 5 The problem selection interface of Lynnette; it also shows View 2 of the OLM, i.e., the level progress
bars and number of problems solved for the five levels
Delaying the Update of the Skill Bars Once students click the “View My Skills”
button, the progress bars shown on the right part of Fig. 3 appear (as illustrated by
Fig. 4). The bars start moving after 1 second. They grow/shrink to new places based
on students’ updated skill mastery after finishing the current problem, as calculated
by BKT (Koedinger and Corbett 2006). The updating of the bars is a form of feedback
on students’ self-assessment as prompted by the three self-assessment questions. The
black vertical lines on the skill bars mark the mastery level of the skills from the last
problem, which allows students to track and reflect on the change of their skill mastery.
High-Level Summary of Progress on a Problem Selection Screen The second view of
the OLM (View 2) displays a high-level summary of the OLM on the problem selection
screen of Lynnette, which students visit in between problems (shown in Fig. 5). This
view shows the overall progress of each level as well as how many problems students
have solved in that level, which may assist students in deciding which level to select
next. The overall progress of a particular level is determined by the least mastered skill
in that level. We also included an overall level progress bar on View 1 of the OLM
(see Fig. 3).
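A minimal sketch of this aggregation, assuming per-skill mastery probabilities from BKT are available (the function and variable names are illustrative):

```python
def level_progress(skill_mastery, level_skills):
    """Overall progress of a level in View 2: determined by its least mastered skill."""
    return min(skill_mastery[skill] for skill in level_skills)

# Example with made-up mastery estimates for a two-skill level
mastery = {"subtract-constant-from-both-sides": 0.97, "divide-both-sides-by-coefficient": 0.62}
print(level_progress(mastery, list(mastery)))  # 0.62
```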
2.3 Shared control over problem selection in Lynnette
Results from our user-centered design process also suggested that students needed
scaffolding to decide which problems to practice (Long and Aleven 2013a), consistent with prior literature on granting students control over problem selection (Clark
and Mayer 2011). Specifically, it is challenging for students to decide how much practice is needed to reach mastery for a certain problem level (Long and Aleven 2013a).
Therefore, we designed and implemented a type of shared control over problem selection between students and the system. After finishing a problem, students select the
level from which a problem should be given, from one of the five levels offered by
the tutor. The system then picks the specific problem from that level. Students are free
to select the levels in any order, but once they reach mastery for a level (meaning all
skills are mastered with 95% probability, according to BKT), the tutor locks the level,
so no more problems can be selected from that level. All problems in the same level
are isomorphic in that they involve the same set of skills for equation solving (e.g.
add/subtract a constant from both sides), and will only be practiced once. In other
words, each time the student selects a level, she will get a new problem to practice
from that level. As shown by Fig. 5, students can select the level they want to work
on next by clicking the “Get One Problem” button. If a level is fully mastered, the
“Get One Problem” button will be hidden (see Level 1 and Level 2 in Fig. 5). To
complete the tutor, a student must master all levels. Thus, the student selects the level, while the system determines the specific problem; the system also determines from which levels the student can select. The student can select only from levels with un-mastered
skills, meaning skills for which the 95% BKT threshold is not met; different students
typically need to complete different numbers of problems to reach these thresholds.
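The following Python sketch illustrates the shared-control logic just described: the student picks a level, the system hands out the next unused problem from that level, and a level is locked once all of its skills are mastered. The class and function names are illustrative, not Lynnette's actual implementation.

```python
from dataclasses import dataclass

MASTERY_THRESHOLD = 0.95  # BKT probability at which a skill counts as mastered

@dataclass
class Level:
    problems: list    # isomorphic problems; each is practiced at most once
    skills: list      # names of the KCs practiced in this level
    next_index: int = 0

def selectable_levels(levels, skill_mastery):
    """The student may select only levels that still contain un-mastered skills."""
    return [name for name, lvl in levels.items()
            if any(skill_mastery[s] < MASTERY_THRESHOLD for s in lvl.skills)]

def get_one_problem(levels, skill_mastery, chosen_level):
    """Shared control: the student chooses the level; the system chooses the problem."""
    if chosen_level not in selectable_levels(levels, skill_mastery):
        raise ValueError("Level is locked: all of its skills are already mastered.")
    lvl = levels[chosen_level]
    problem = lvl.problems[lvl.next_index]  # a new problem from the chosen level
    lvl.next_index += 1
    return problem
```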
Compared to Corbalan et al. (2008), the current shared control allows students to
make arguably a more critical decision, i.e., which problem level to practice next,
whereas the prior work only let students pick a specific problem from a level selected
by the system. However, the amount of control in our system may still be perceived as modest by the students, and its effect on enhancing learning and motivation needs
to be empirically evaluated.
To sum up, we redesigned the original OLM to facilitate active self-assessment
processes and shared control over problem selection. The OLM is shown on the
problem-solving interface at the end of each problem to promote a short session for
self-assessment with feedback (from the update of the skill and level bars) on students’ current learning status before they proceed to select the next level. A high-level
view of the OLM is also displayed on the problem selection screen to help students
decide in what order to practice the five levels of equations. No explicit instructions
are provided to students regarding how to refer to the OLM when they make problem
selection decisions.
3 Classroom experiments
We conducted two classroom experiments with a total of 301 middle school students to
evaluate the effectiveness of an OLM with support for self-assessment by students and
shared control over problem selection on students’ domain level learning outcomes,
problem selection decisions, self-assessment accuracy and motivation. The two experiments shared the same experimental design and procedure, only that Experiment 2
had administered an enjoyment questionnaire along with the paper post-test. In this
section, we first introduce the methods of the two experiments, and describe the results
separately.
Fig. 6 The problem selection screen of the OLM+noPS condition (left) and the noOLM+PS condition
(right)
3.1 Methods of the two experiments
3.1.1 Experimental design
Both experiments had a 2×2 factorial design (Long and Aleven 2013b), with independent factors OLM (whether or not both views of the redesigned OLM were displayed)
and PS (whether the students had shared control over problem selection or problem
selection was fully system-controlled). Therefore, the experiment had four conditions:
(1) OLM+PS; (2) OLM+noPS; (3) noOLM+PS; and (4) noOLM+noPS.
For the two noOLM conditions, both views of the OLM were removed from the
interface. For the two noPS conditions, there was a single “Get One Problem” button
on the problem selection screen, and the tutor assigned problems to the students to
reach mastery from Level 1 to Level 5. This amounts to system control with ordered
blocked practice for the five levels, which is common practice for many ITSs. On the
other hand, the students in the two PS conditions were free to select whether they
would follow blocked or interleaved practice (they could jump around to select the
levels) to reach mastery for the five levels. Figure 6 illustrates the problem selection
screens of the OLM+noPS and noOLM+PS conditions.
3.1.2 Procedure
The four conditions followed the same procedure. All conditions completed a paper
pre-test on the first day of the study for around 25 min. Next the participants worked
with one of the four versions of Lynnette in their computer labs for five 41-min class periods on five consecutive school days. If a student mastered all five levels in fewer than five class periods, she was directed to work on a Geometry unit. Lastly, the participants
completed an immediate paper post-test in one class period on the last day of the
experiments.
Table 2 Measurements of the two experiments

Constructs/abilities measured                    Assessments
Procedural skills of equation solving            Procedural items on pre-test; procedural items on post-test
Conceptual knowledge of equations                Conceptual items on pre-test; conceptual items on post-test
Learning performance in the tutor                Tutor log data: process measures, e.g., number of incorrect attempts and hints per step
Self-assessment accuracy on procedural skills    Self-assessment question on pre-test; self-assessment question on post-test
Enjoyment of using the tutor^a                   Seven-point Likert scale items on post-test

^a The enjoyment questionnaire was only given in Experiment 2
3.1.3 Participants
Experiment 1 had 56 7th-grade students from three advanced classes taught by the same teacher at a local public school. Experiment 2 had 245 7th- and 8th-grade students
from 16 classes (eight advanced classes and eight mainstream classes) of three local
public schools. The students in Experiment 2 were taught by six teachers. In both
experiments, the students were randomly assigned to one of the four conditions within
each class.
3.1.4 Measurements
Table 2 summarizes the measurements implemented in the two experiments. The
paper pre- and post-tests were in similar format and measured students’ abilities to
solve linear equations. We created two equivalent test forms and administered them
in counterbalanced order. There were both procedural and conceptual items on the
tests. Procedural items included equations of the same types that students practiced in Lynnette. The procedural items were graded from 0 to 1, with partial credit given to correct intermediate steps. In other words, if a student did not finish a problem but wrote some correct intermediate steps, each correct step was given 0.2 points.
Conceptual items were True/False questions that measured students’ understanding
of key concepts of equations. Similar items have also been used in prior literature to
measure students’ conceptual knowledge (e.g., Booth and Koedinger 2008). Figure 7
shows three example conceptual items from the pre/post-tests. Each conceptual item
was graded as 0 or 1.
In addition to the pre/post-tests, we also extracted process measures from the tutor
log data to compare students’ learning performance in the tutor, such as the number
of hints requested from the tutor per problem step or the number of incorrect attempts
per problem step.
We measured students' self-assessment accuracy for the procedural items on pre- and post-tests. Students were asked to rate from 1 to 7 how well they thought they could solve each equation before they actually solved it (as shown in Fig. 8).
Fig. 7 Examples of conceptual items from the pre/post-tests
Fig. 8 Example of the self-assessment question for each procedural item
We computed the absolute accuracy of students' self-assessment (Schraw 2009) using the formula shown below, where N represents the number of tasks, c stands for the student's rating of their ability to finish a task, and p represents their actual performance on that task. Students' self-assessment ratings were converted to a 0-to-1 scale to be used in this formula. This index simply computes the discrepancy between the self-assessed and actual scores; therefore, a lower index represents more accurate self-assessment.

Absolute Accuracy Index = \frac{1}{N} \sum_{i=1}^{N} (c_i - p_i)^2
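A minimal Python sketch of this computation follows; the exact rescaling of the 1-7 ratings onto the 0-1 scale is not specified above, so the linear mapping used here is an assumption.

```python
def absolute_accuracy(ratings_1_to_7, performance_0_to_1):
    """Absolute accuracy index (Schraw 2009): mean squared discrepancy between
    self-assessed and actual scores; lower values mean more accurate self-assessment."""
    confidences = [(r - 1) / 6 for r in ratings_1_to_7]  # assumed linear rescaling to 0-1
    return sum((c - p) ** 2
               for c, p in zip(confidences, performance_0_to_1)) / len(confidences)

# Example: four procedural items, with 1-7 ratings and actual (partial-credit) scores
print(absolute_accuracy([6, 5, 7, 3], [1.0, 0.6, 1.0, 0.2]))
```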
Lastly, we added a questionnaire to measure students’ enjoyment of learning with
the different versions of Lynnette to the post-test in Experiment 2. The enjoyment
questionnaire was adapted from the Enjoyment subscale of the Intrinsic Motivation
Inventory (IMI, University of Rochester, 1994). The questionnaire had seven items,
all with a 7-point Likert Scale. One example item was, “Working with the linear
equation program was fun.” We used the average score of the seven items to measure
the students’ self-reported enjoyment of using the system.
3.2 Hypotheses
Table 3 summarizes the hypotheses of the two experiments, the corresponding data
analyses, and whether the hypotheses were confirmed by the data analyses.
3.3 Results of Experiment 1
We report Cohen’s d for effect sizes. An effect size d of .20 is typically deemed a
small effect, .50 a medium effect, and .80 a large effect.
Table 3 Hypotheses of the two experiments

H1 The presence of an OLM will lead to greater learning gains on equation solving.
   Data analysis: test for a significant main effect of the OLM on students' learning gains on equation solving from pre- to post-tests.
   Confirmed by Experiment 1? Confirmed. Confirmed by Experiment 2? Not confirmed.

H2 Shared control over problem selection will lead to greater learning gains on equation solving, compared to full system control.
   Data analysis: test for a significant main effect of the shared control on students' learning gains on equation solving from pre- to post-tests.
   Confirmed by Experiment 1? Not confirmed. Confirmed by Experiment 2? Not confirmed.

H3 The presence of the OLM will lead to greater learning gains on equation solving, when shared control over problem selection is provided.
   Data analysis: test for a significant interaction between the OLM and the shared control on students' learning gains on equation solving from pre- to post-tests; planned contrast: if the interaction is significant, compare the two shared control conditions to test for a significant difference in learning gains due to the OLM.
   Confirmed by Experiment 1? Not confirmed. Confirmed by Experiment 2? Confirmed.

H4 Shared control over problem selection will lead to higher enjoyment of learning with the tutor, compared to full system control.
   Data analysis: test for a significant main effect of the shared control on students' self-reported enjoyment ratings on post-tests.
   Confirmed by Experiment 1? N/A. Confirmed by Experiment 2? Not confirmed.

H5 After learning with a tutor that has an OLM, students will more accurately self-assess their equation solving abilities than their counterparts who learned without an OLM.
   Data analysis: test for a significant main effect of the OLM on students' change of self-assessment accuracy from pre- to post-tests.
   Confirmed by Experiment 1? Not confirmed. Confirmed by Experiment 2? Not confirmed.
Table 4 Means and SDs for the test performance for all four conditions in Experiment 1

Conditions    Pre-test (procedural)  Post-test (procedural)  Pre-test (conceptual)  Post-test (conceptual)
OLM+PS        .44 (.26)              .71 (.23)               .48 (.22)              .52 (.19)
OLM+noPS      .56 (.35)              .68 (.22)               .47 (.17)              .54 (.23)
noOLM+PS      .36 (.20)              .63 (.24)               .39 (.22)              .36 (.20)
noOLM+noPS    .49 (.20)              .63 (.29)               .44 (.16)              .46 (.20)
Table 5 Means and SDs of process measures for all four conditions in Experiment 1

                              OLM+PS            OLM+noPS          noOLM+PS          noOLM+noPS
Total number of problems      32.80 (9.15)      36.93 (11.50)     34.23 (6.51)      39.31 (9.30)
Total number of steps^a       384.25 (139.46)   429.94 (104.95)   397.33 (124.85)   458.07 (125.76)
Incorrect attempts per step   .33 (.24)         .34 (.23)         .43 (.33)         .45 (.22)
Hints per step                .23 (.20)         .28 (.26)         .32 (.29)         .38 (.61)

^a In Long and Aleven (2013b), we reported the averages of process measures combining the equation-solving and self-explanation steps. In the current paper, we only report the data for the equation-solving steps to better capture students' learning performance
Table 4 summarizes the average test performance of the four conditions on the
pre- and post-tests (Long and Aleven 2013b). No significant differences between the
conditions on the pre-test were found. Overall (i.e., across conditions) the students
improved significantly from pre- to post-test on the procedural items, affirming the effectiveness of Lynnette in helping students learn equation-solving skills (F (1, 52) = 35.239, p < .001, d = 1.65). No significant improvement on the conceptual items was found, but the improvement on the overall test was still significant (procedural and conceptual items together: F (1, 52) = 13.927, p < .001, d = 1.04). A two-way ANOVA with factors OLM and PS found a significant main effect of OLM on
students’ overall post-test scores (F (1, 52) = 4.903, p = .031, d = .56), suggesting that
the inclusion of the OLM led to better domain level learning outcomes. However, no
significant main effect was found for PS, nor a significant interaction between OLM
and PS on equation solving.
We also ran ANOVAs on the process measures from the tutor log data. On average,
the two OLM conditions made fewer incorrect attempts and requested fewer hints
(as shown in Table 5), but the difference was not statistically significant. Overall no
significant main effects or interactions were found for the two factors with the process
measures.
Lastly, we analyzed students’ self-assessment ratings and their absolute accuracy
of self-assessment on pre/post-tests. Table 6 shows the averages of the four conditions
(the lower the accuracy index, the more accurate the students’ self-assessment). A
repeated measures ANOVA revealed that students’ self-assessment ratings increased
significantly from pre- to post-test (F (1, 52) = 13.08, p = .001, d = 1.00). No
significant differences were found between the conditions. On the other hand, no significant improvement on students' absolute self-assessment accuracy was found.

Table 6 Means and SDs of self-assessment (SA) ratings and absolute accuracy for the four conditions in Experiment 1

                         OLM+PS       OLM+noPS     noOLM+PS     noOLM+noPS
Pre-test SA ratings      4.63 (1.44)  5.22 (1.36)  4.21 (1.20)  4.66 (1.38)
Post-test SA ratings     5.42 (1.61)  5.42 (1.04)  4.75 (1.18)  5.44 (1.33)
Pre-test SA accuracy     .19 (.16)    .15 (.15)    .15 (.13)    .14 (.15)
Post-test SA accuracy    .13 (.12)    .13 (.09)    .17 (.08)    .11 (.08)
The students showed relatively high absolute self-assessment accuracy on both pre- and
post-tests. An absolute accuracy index of .14 corresponds, for example, to correctly answering a question while rating one's confidence at 62.6%, since (1 − .626)² ≈ .14; a confidence of 50% on a correct answer is regarded as moderately accurate, based on Schraw (2009). Also, no significant differences were found between the
conditions.
3.4 Results of Experiment 2
For the analyses in Experiment 2, we conducted ANCOVA tests using teachers as a
co-variate on the dependent measures to account for the variance that resides within
classes.
3.4.1 Effects of the OLM and shared control on equation solving abilities
To address Hypotheses 1 to 3, that both the presence of the OLM and having shared
control over problem selection will enhance students’ learning outcomes, and the
effect of the shared control will be further strengthened by the presence of the OLM,
we analyzed students’ learning gains from pre- to post-test for both the procedural and
the conceptual items.
Table 7 shows the average test performance of the four conditions for the procedural and conceptual items. Overall the four conditions improved significantly on the
procedural items (F (1, 236) = 81.066, p < .001, d = 1.17) and conceptual items (F (1, 236) = 23.168, p < .001, d = 1.17) from pre-test to post-test. ANCOVA analyses (using OLM and PS as the two independent variables, using teachers as co-variate, and using the learning gains (post minus pre) as dependent variables)¹ found no significant main effects for OLM or PS on students' learning gains from pre- to post-tests on the procedural items or conceptual items.
¹ The average scores of pre/post-tests are not normally distributed (even with logarithmic and square-root transformations), so we used the learning gains as the dependent variables, which are normally distributed.

Table 7 Means and SDs for the test performance for all four conditions in Experiment 2

Conditions    Pre-test (procedural)  Post-test (procedural)  Pre-test (conceptual)  Post-test (conceptual)
OLM+PS        .50 (.28)              .69 (.28)               .47 (.24)              .58 (.23)
OLM+noPS      .57 (.30)              .70 (.28)               .51 (.22)              .57 (.21)
noOLM+PS      .57 (.33)              .63 (.29)               .46 (.20)              .52 (.21)
noOLM+noPS    .59 (.32)              .75 (.24)               .51 (.22)              .58 (.25)

Table 8 Means and SDs for the time spent with the OLM for the two OLM conditions in Experiment 2

Conditions    Total time spent with the OLM    Time spent answering the three self-assessment prompts    Time spent viewing the update of the skill bars
OLM+PS        292.94 (97.25)                   219.14 (71.39)                                             46.69 (18.59)
OLM+noPS      299.28 (113.50)                  224.53 (93.36)                                             50.15 (17.11)

However, a significant interaction between OLM and PS was found on students' learning gains from pre- to post-test for the procedural items (F (1, 236) = 7.535, p = .007). Planned contrasts (we had hypothesized that the presence of the OLM would strengthen the effect of the shared control on learning outcomes) revealed that
the OLM + PS condition learned significantly more than the noOLM + PS condition
(F (1, 236) = 6.401, p = .012). In other words, when students shared control over
problem selection with the system (i.e., the students were allowed to select the level
of the next problem), those who had access to an OLM learned significantly more on
the procedural skills for equation solving than their counterparts who did not. In addition, pairwise contrasts with Bonferroni Corrections revealed that the noOLM+noPS
condition also learned significantly more than the noOLM+PS condition (F (1, 236) =
6.056, p = .015; with corrections: p = .03) on the procedural items. Therefore, when the
Open Learner Model was not in effect, the fully system-controlled condition learned
significantly more than the shared control condition.
3.4.2 Correlations between the time spent on the OLM and students’ equation
solving abilities
To further investigate how the students’ interactions with the OLM might have affected
their learning processes, we extracted the time the students spent on the two main
components of the redesigned OLM, namely, the three self-assessment prompts and
the showing and updating of the skill bars. We then calculated the correlations between
the time data and students' learning gains from pre- to post-tests. There were a total of
119 students in the two OLM conditions.
Table 8 shows the average time the students spent on interacting with the different
components of the OLM. The differences in time were not statistically significant between the OLM+PS and OLM+noPS conditions.
Table 9 shows the correlations between the time students spent with the OLM
and their learning gains on procedural and conceptual items. Only for the OLM+PS
condition, the correlation between the total time students spent on the OLM (adding
up the time spent with the prompts and the skill bars) and students’ learning gains on
procedural items is significant. Also, only for the OLM+PS condition, the correlation between the time spent on the prompts only and students' procedural learning gains is significant. Thus, when students had shared control over problem selection with the system, the more time they spent with the OLM, especially with the self-assessment prompts, the greater the gain from pre- to post-test for procedural skills. On the other hand, the correlations between the time students spent with the OLM and their learning gains on conceptual items are not statistically significant.

Table 9 The correlations (p-values in parentheses) between the time spent with the OLM and learning gains in Experiment 2

                            Learning gains (procedural)         Learning gains (conceptual)
                            OLM+PS        OLM+noPS              OLM+PS        OLM+noPS
Total time with the OLM     .33 (.02)*    .20 (.10)             −.10 (.48)    −.16 (.21)
Time with the prompts       .30 (.03)*    .22 (.08)             −.03 (.83)    −.16 (.20)
Time with the skill bars    .22 (.11)     .12 (.33)             −.09 (.53)    −.11 (.38)

* Significance at the .05 level

Table 10 Means and SDs of process measures for all four conditions in Experiment 2

                              OLM+PS           OLM+noPS          noOLM+PS          noOLM+noPS
Total number of problems      29.02 (8.29)     30.61 (9.58)      32.31 (9.02)      31.88 (9.26)
Total number of steps         228.00 (98.93)   252.97 (113.61)   255.39 (114.49)   257.56 (121.97)
Incorrect attempts per step   .41 (.42)        .37 (.28)         .49 (.38)         .46 (.37)
Hints per step                .39 (.52)        .38 (.53)         .46 (.58)         .36 (.50)
3.4.3 Effects of the OLM and shared control on learning performance in the tutor
We also looked at the process measures from the tutor log data to compare the conditions’ learning performance in the tutor. Table 10 shows the averages for the conditions
of the total number of problems and steps completed to reach mastery in the tutor,
the average number of incorrect attempts per step, and the average number of hints
requested by the students per step. ANCOVA tests (using OLM and PS as independent
factors, teachers as co-variate, and the Logarithmic form of the process measures as
dependent variables) revealed significant main effects of OLM on the total number of
problems (F (1, 236) = 8.116, p = .005, d = .36) and total number of steps (F (1,
236) = 3.900, p = .049, d = .25). The two OLM conditions completed significantly
fewer problems and steps in the tutor, indicating more efficient learning to reach mastery than their counterparts who did not have the OLM. In addition, the main effect
of OLM was significant for the average number of incorrect attempts made by the
students on the equation solving steps (F (1, 236) = 13.239, p < .001, d = .47).
The students who learned with the OLM made significantly fewer incorrect attempts
per step in the tutor, as compared to the two noOLM conditions. No significant main effect of PS was found for any of the process measures.

Lastly, a significant interaction between OLM and PS was found for the average number of hints requested per step (F (1, 187) = 4.097, p = .044). Planned contrasts revealed that the noOLM+PS condition asked for significantly more hints than the OLM+PS condition (F (1, 187) = 5.939, p = .016). The difference between the noOLM+noPS and noOLM+PS conditions was not significant with Bonferroni corrections for pairwise comparisons on the average number of hints requested.

Table 11 Means and SDs of enjoyment ratings for all four conditions

                     OLM+PS       OLM+noPS     noOLM+PS     noOLM+noPS
Enjoyment ratings    4.54 (1.57)  4.47 (1.45)  4.33 (1.46)  4.38 (1.42)
3.4.4 Effect of the OLM and shared control on enjoyment of using the tutor
To address Hypothesis 4, that having shared control over problem selection would enhance students' enjoyment of learning with the tutor, we analyzed students' enjoyment ratings on the post-tests.
Table 11 shows the average enjoyment ratings for the four conditions. ANCOVA tests (using teachers as co-variate) found no significant main effect of OLM or PS on the enjoyment ratings, nor a significant interaction between the two factors.
3.4.5 Self-assessment accuracy
To address Hypothesis 5, that after learning with a tutor that has an OLM, students would more accurately self-assess their equation-solving abilities compared to their counterparts who learned without the OLM, we analyzed students' self-assessment ratings and absolute self-assessment accuracy on the pre- and post-tests.
Table 12 shows the averages of self-assessment ratings (based on a 7-point Likert
scale) on pre- and post-tests for the four conditions. It also shows the averages of the
absolute accuracy of self-assessment calculated based on the formula introduced in
Sect. 3.1.4, which measures the discrepancy between students’ self-assessed abilities
and their actual equation solving performance on the procedural items.
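The exact formula appears in Sect. 3.1.4 and is not reproduced here; purely as an illustration, one common way to compute such an absolute accuracy index is the absolute difference between the self-assessment rating rescaled to [0, 1] and the proportion of procedural items solved correctly, as in the sketch below (our assumption, not necessarily the authors' exact formula).

```python
# Illustrative only: absolute self-assessment accuracy as the gap between a
# rescaled 7-point confidence rating and actual procedural performance.
# Higher values mean less accurate self-assessment.
def absolute_sa_accuracy(rating_1_to_7, proportion_correct):
    rescaled_rating = (rating_1_to_7 - 1) / 6.0  # map the 1..7 scale onto 0..1
    return abs(rescaled_rating - proportion_correct)

# Example: a rating of 5 out of 7 with 55% of procedural items correct
print(absolute_sa_accuracy(5, 0.55))  # ~0.12, in the range reported in Table 12
```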
Overall, the four conditions' self-assessment ratings increased significantly from pre- to post-test (F (1, 235) = 16.410, p < .001, d = .53), suggesting that the students were more confident about their equation-solving abilities at post-test than they were at pre-test. However, the ANCOVA tests found no significant main effect of OLM or PS on the change in self-assessment ratings from pre- to post-test, nor a significant interaction between the two factors.
With respect to the absolute accuracy of self-assessment, the students showed relatively high self-assessment accuracy for their equation solving abilities both on the
pre- and post-tests (recall that the higher the absolute accuracy index, the less accurate
Table 12 Means and SDs of self-assessment (SA) ratings and absolute accuracy for the four conditions in Experiment 2

                          OLM+PS        OLM+noPS      noOLM+PS      noOLM+noPS
  Pre-test SA ratings     4.89 (1.48)   5.30 (1.31)   5.01 (1.61)   5.20 (1.48)
  Post-test SA ratings    5.38 (1.54)   5.62 (1.39)   5.04 (1.78)   5.44 (1.35)
  Pre-test SA accuracy    .17 (.13)     .16 (.13)     .15 (.14)     .18 (.14)
  Post-test SA accuracy   .14 (.14)     .11 (.11)     .14 (.13)     .13 (.11)
the students’ self-assessment is). The absolute accuracy of self-assessment improved
significantly for the four conditions together from pre- to post-tests (F (1, 235) =
16.296, p < .000, d = .53). However, no significant main effects or interaction were
found for the two factors on the change of absolute accuracy of self-assessment from
pre- to post-tests.
3.4.6 Student-selected problem sequences with the shared control
We used tutor log data to investigate students' problem selection behaviors when they had shared control. (Recall that in the shared control condition, the student could select the Level from which the next problem would be given, although they could select only from the Levels with unmastered skills, according to the system's mastery criterion.) Specifically, we analyzed the sequences of levels selected by the two shared control (PS) conditions, as well as whether the students' problem selection decisions were influenced by the OLM.
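For context on the mastery criterion mentioned above: the tutor tracks each skill with Bayesian Knowledge Tracing (Corbett and Anderson 1995) and, as noted below, treats a skill as mastered once the estimated probability that the student knows it reaches 0.95. The following is a minimal sketch of standard BKT with such a threshold; the parameter values are illustrative placeholders, not the tutor's actual estimates.

```python
# Standard Bayesian Knowledge Tracing update with a 0.95 mastery threshold.
# The guess, slip, and transit parameters below are illustrative placeholders.
def bkt_update(p_known, correct, guess=0.2, slip=0.1, transit=0.15):
    if correct:
        posterior = p_known * (1 - slip) / (
            p_known * (1 - slip) + (1 - p_known) * guess)
    else:
        posterior = p_known * slip / (
            p_known * slip + (1 - p_known) * (1 - guess))
    return posterior + (1 - posterior) * transit  # learning may occur on this step

def is_mastered(p_known, threshold=0.95):
    return p_known >= threshold

p = 0.3  # illustrative prior probability that the skill is already known
for correct in [True, True, False, True, True, True]:
    p = bkt_update(p, correct)
print(p, is_mastered(p))
```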
A total of 120 students were in the two PS conditions. Tutor log data revealed that
61 out of the 120 students (50.8%) selected a fully ordered and blocked sequence
from Level 1 to Level 5, which corresponded exactly to the tutor’s problem-selection
method in the two noPS conditions, in which the system had full control over problem
selection. On the other hand, 59 out of the 120 students (49.2%) selected sequences
with varying ways of interleaving the levels. Furthermore, in the OLM+PS condition, 28 out of 53 students (53%) selected the ordered blocked sequence, and in the noOLM+PS condition, 33 out of 67 students (49%) did so, a very similar proportion.
To analyze the degree to which the student-selected sequences differed from a fully ordered blocked sequence (i.e., the system-selected sequence), we counted the number of reverse orders in each student-selected sequence relative to the ordered blocked sequence. This count takes into account whether and how far a student skipped ahead of a lower level. For example, against an ordered blocked sequence 1234, a student-selected sequence of 2314 yields 2 reverse orders, a student-selected sequence of 3412 yields 4 reverse orders, and the maximum number of reverse orders against 1234 is 6.
is 6. We normalized the counts by dividing them by the maximum number of reverse
orders possible against the relevant ordered blocked sequence; each student had a
unique ordered blocked sequence as the number of practiced problems at each level
was determined by his/her performance in the tutor (based on the tutor’s 95% BKT
Table 13 Comparison of sequences of problem levels selected by the two PS conditions in Experiment 2

              Percentage of students who selected      Normalized average count of
              the ordered blocked practice (%)         reverse orders (SD)
  OLM+PS      53                                       .06 (.15)
  noOLM+PS    49                                       .07 (.14)
Table 14 Means and SDs of the test performance of the students by whether they selected an ordered blocked sequence

                     Pre-test        Post-test       Pre-test        Post-test
                     (procedural)    (procedural)    (conceptual)    (conceptual)
  Ordered blocked    .55 (.31)       .69 (.25)       .47 (.21)       .55 (.21)
  Interleaved        .54 (.31)       .63 (.32)       .46 (.24)       .54 (.23)
Table 15 Means and SDs of the enjoyment ratings of the students by whether they have selected an ordered blocked sequence

                     Enjoyment ratings
  Ordered blocked    4.96 (1.38)
  Interleaved        3.87 (1.44)
mastery threshold). As shown in Table 13, the normalized counts were small for both PS conditions. An ANCOVA test (using OLM as the independent factor, teachers as co-variate, and the normalized count as the dependent variable) revealed no significant effect of the OLM on the normalized counts of reverse orders. In other words, the students by and large followed the ordered blocked sequence, and the presence of the OLM did not affect the sequence of levels that students selected for practice.
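To make the reverse-order count concrete, the following is a minimal sketch (not the authors' analysis code) of how a normalized count of reverse orders could be computed; treating every cross-level pair as potentially reversible is our assumption for handling repeated levels.

```python
# Count pairs of selections that appear in reverse order relative to the
# ordered (blocked) arrangement of the same levels, normalized by the
# maximum possible number of such reversed pairs.
from collections import Counter
from itertools import combinations

def normalized_reverse_orders(selected_levels):
    inversions = sum(
        1 for i, j in combinations(range(len(selected_levels)), 2)
        if selected_levels[i] > selected_levels[j])
    counts = Counter(selected_levels)
    max_inversions = sum(counts[a] * counts[b]
                         for a, b in combinations(sorted(counts), 2))
    return inversions / max_inversions if max_inversions else 0.0

# The example from the text: against the blocked sequence 1234,
# 2314 yields 2 reverse orders, 3412 yields 4, and the maximum is 6.
assert normalized_reverse_orders([2, 3, 1, 4]) == 2 / 6
assert normalized_reverse_orders([3, 4, 1, 2]) == 4 / 6
```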
Next, we investigated whether the student-selected interleaved sequences were associated with different effects on student learning and enjoyment than the student-selected ordered blocked sequence. We created a new binary factor for whether a student selected an ordered blocked sequence or an interleaved sequence. Table 14 shows the average test performance of the students by this new factor. ANCOVA analyses (using the blocked/interleaved factor as the independent variable and teachers as co-variate) revealed no significant main effect of this factor on students' learning gains from pre- to post-test for either the procedural or the conceptual items. This could be because the interleaved sequences did not differ much from the blocked sequence.
On the other hand, as shown in Table 15, the students who selected the ordered blocked sequence reported higher enjoyment ratings on the post-test. An ANCOVA test revealed that the difference was statistically significant (F (1, 113) = 14.392, p < .001, d = .69).
We also analyzed the effect of the blocked/interleaved factor on process measures from the log data, in order to investigate whether the different student-selected
sequences affected students’ learning performance in the tutor. Table 16 shows the
averages of the process measures by whether the students selected the ordered blocked
sequence.
Although on average, students who selected the ordered blocked sequence needed
fewer problems and steps in order to reach mastery, made fewer errors and requested
Table 16 Means and SDs of process measures of the students by whether they selected an ordered blocked sequence

                                 Ordered blocked     Interleaved
  Total number of problems       29.41 (8.49)        32.36 (8.98)
  Total number of steps          235.02 (103.93)     251.85 (112.93)
  Incorrect attempts per step    .40 (.34)           .51 (.44)
  Hints per step                 .33 (.50)           .53 (.60)
fewer hints, ANCOVAs (using the logarithmic forms of the process measures as dependent variables) found that the main effect of the blocked/interleaved factor was only marginally significant for the total number of problems (F (1, 113) = 2.846, p = .094, d = .31), and not significant for the other three process measures.
4 Discussion
4.1 Effects of the OLM with support for self-assessment on domain level
learning outcomes
We found positive effects of an OLM with support for self-assessment on students' domain level learning outcomes in both experiments. In Experiment 1, we found a significant main effect of the OLM on students' learning of procedural knowledge, although with a small sample size. In Experiment 2, we found a significant interaction between the two factors (i.e., OLM and PS): the presence of an OLM that integrates scaffolding for self-assessment led to significantly better learning outcomes in the ITS when shared control over problem selection was granted. Both experiments indicate that an OLM with self-assessment support can enhance students' domain level learning outcomes, although they are not fully consistent regarding the circumstances under which it does so; given the larger N of Experiment 2, we believe that the presence of shared control over problem selection may be key. Therefore, we mainly discuss and draw our conclusions from the results of Experiment 2 with respect to the effect of the redesigned OLM and how it might have influenced student learning.
How might the skill bars, combined with self-assessment prompts, enhance students’ domain level learning, as we found in the current experiments? In our prior work
(Long and Aleven 2013c), we found that skill bars with paper-based self-assessment
prompts led to more careful and deliberate learning processes, and resulted in significantly better learning outcomes for lower-performing students. Our interpretation
of these findings was that the OLM combined with prompts caused students to pay
more attention to the learning activities and perhaps spurred fruitful reflection on the
skills to be learned. Similarly, in the current experiments, it is probable that the selfassessment prompts and delayed updating of the skill bars exerted the same positive
effect on students’ learning process, thereby contributing to better learning outcomes.
In Experiment 2, we found a significant main effect of the OLM on some process
measures from the tutor log data: students who learned with the OLM needed significantly fewer steps and problems to reach the tutor’s mastery criterion, and had fewer
incorrect attempts in the tutor. These results indicate that the OLM helped facilitate
students’ learning process. In addition, our correlation analyses indicate that the longer
the students interact with the OLM (especially with the self-assessment prompts) in the
shared control condition, the more they learned from the tutor on procedural items.
These self-assessment prompts might have prompted the students to reflect on the
skills they just practiced or to stay more focused on the practice in the tutor.
Furthermore, the effect of the OLM may have been amplified by shared control over problem selection, as revealed by the significant interaction we found in Experiment 2. Also, when we looked at the correlations between the time spent on the OLM and students' learning gains for the two OLM conditions, the correlation was significant only for the OLM+PS condition (more time spent with the OLM was associated with greater learning). Having control over problem selection might nudge students to pay more attention to the prompts and to the changes in their own skill mastery displayed by the OLM, and thereby further enhance the reflective process with the model. Although we gave no instructions regarding how to use the information from the OLM to help make problem selection decisions, the students might naturally have looked for such information when they were required to make choices. Without an OLM, they had to recall and self-assess their learning status, which may have increased their cognitive load, caused frustration, and consequently diminished their learning. In sum, our interpretation is that the OLM with self-assessment support facilitates the process of self-assessing and reflecting, which leads to a more effective learning process. Its effect on learning is further strengthened when shared control is granted.
One limitation of the current work was that we evaluated the redesigned OLM as
a single factor in our experimental design. Therefore, we cannot separate the effects
of the different components of the OLM, i.e., self-assessment prompts, delaying the
update of the skill bars to make the updating more salient, and a summary view of the
OLM. Although the correlations indicate that the self-assessment prompts were more
critical in facilitating students’ learning processes (when we separated the time spent
on the prompts and the skill bars, only the time spent with the self-assessment prompts
correlated significantly with learning gains for the OLM+PS condition), teasing apart
the effects of the individual components of the redesigned OLM remains a goal for future work.
4.2 Effects of shared control over problem selection on domain level learning
outcomes
4.2.1 Main effect of shared control over problem selection
We found no significant main effect of shared control on pre- to post-test learning gains in either experiment. However, Experiment 2 revealed that without an OLM, full system
control over problem selection led to better learning outcomes than shared control.
This result is consistent with prior literature, which generally found superior learning
outcomes with system control when compared to full student control (Atkinson 1972;
Niemiec et al. 1996). Without appropriate scaffolding for deciding which problem to
practice next (e.g., the OLM), students might be overwhelmed by the cognitive load
engendered by making problem selection decisions (Corbalan et al. 2008), which in
turn results in worse learning outcomes. OLMs can aid students in assessing their
learning status, which may be necessary for students when shared control is enabled.
This finding has implications for the design of learner control over problem selection
in online learning environments. Specifically, it highlights the importance of including
features like OLMs when learner control is granted.
4.2.2 The student-selected problem sequences with the shared control
With the two shared control conditions in Experiment 2, we first investigated the influence of the OLM on what sequences the students would select. Analysis of the log
data revealed that with shared control, students selected mostly the ordered blocked
sequence of practice (i.e., the system-selected problem sequence) in both PS conditions. The presence of the OLM did not significantly influence the problem sequences
selected by the students. This might be attributed to the design of the interface, which
positions Level 1 to Level 5 from left to right sequentially (as shown in Fig. 6),
thereby inviting students to work through the levels in order. Also, blocked sequences
are commonly seen in textbooks.
We also investigated how the sequences selected by the students might affect their
learning and motivation. Although overall, students did not deviate much from a
blocked sequence of practice, students who selected interleaved sequences reported
significantly lower enjoyment of using the system, but achieved the same learning
outcomes as the students who selected the blocked sequence. This is to a degree
consistent with theories of desirable difficulties (Bjork and Bjork 2006; Kapur and
Bielaczyc 2012), which argue that interleaved sequences could cause a tougher and
more frustrating learning process but ultimately lead to better learning outcomes. In
our case, the students who selected an interleaved sequence might have encountered more difficulties when practicing the higher levels early in the learning process, which may have led to frustration and lower enjoyment, although not to better learning. Perhaps the sequences were not interleaved enough for students to reap the benefits of interleaving. Future work could investigate implementing, or teaching students, more systematic ways of interleaving the problem types, such as following the region of proximal learning (Metcalfe and Kornell 2005).
4.3 Effects of the OLM and shared control on motivation
We did not find significant main effects of the OLM with support for self-assessment,
nor of shared control over problem selection, on students’ enjoyment of learning in
Experiment 2. Prior research has found correlations between perceived control and
students’ intrinsic motivation and enjoyment (Vandewaetere and Clarebout 2011). It
may be that the perception of control engendered by the shared control in Experiment 2
was not strong enough to bring about higher enjoyment. It is also possible that the contrast
between the shared control and system control versions of the tutor was overshadowed by the similarities of the interfaces and other tutor features. It may also be that the specific shared control approach tried in our experiment did not grant students enough freedom to be more enjoyable than having the system decide. Students could select which level to work on but did not have control over when a level was finished; this restriction was intended to protect students from the possible adverse consequences of full student control found in prior research. To conclude, although learner control is generally preferred by students (Clark and Mayer 2011), our experiment shows that the control enabled through shared student/system control does not always lead to more enjoyable learning experiences in learning technologies. Future research could also investigate the influence of shared control on other motivational constructs, for example, by including measures of self-efficacy (Bandura 1994) and sense of autonomy (Flowerday and Schraw 2003).
4.4 Self-assessment accuracy on equation solving
A surprising finding was the relatively high accuracy with which students assessed their
own equation-solving abilities in both experiments. Prior work focusing on memory
tasks and reading comprehension has documented poor self-assessment even with
adult learners (Dunlosky and Lipko 2007). Possibly, it is easier to judge the difficulty of equations because easily observable features of the problems may be a reliable indicator of how complex they are to solve (e.g., "bigger" equations tend to be more difficult; put differently, more terms on each side of the equation generally increase the difficulty level, as do features like parentheses and fractions). Although we found an overall significant improvement in students' confidence ratings and self-assessment accuracy from pre- to post-test, there was no significant effect of the OLM with self-assessment support on these measures. Therefore, the effect of OLMs on improving self-assessment accuracy needs further investigation, possibly in a domain where self-assessment judgments are more challenging for students. Nonetheless, our results on domain level learning gains highlight the benefits of facilitating the process of self-assessment with skill bars.
4.5 Design recommendations and implications for OLMs in ITSs
Based on the results from the experiments, we offer design recommendations for
OLMs and ITSs that grant shared control over problem selection to students. Experiment 2 showed that with shared control over problem selection, students learned better with an OLM that included self-assessment support. We also found that shared control led to
worse learning outcomes than system control when no OLM was provided. Therefore,
future learning technologies that support learner control should incorporate designs
that support related SRL processes (e.g., self-assessment, making problem selection
decisions). It will also be useful to provide resources to help students learn to make
good problem selection decisions in the system (Long and Aleven 2016).
Our experiment demonstrates designs that can be integrated with an OLM to facilitate self-assessment, namely, self-assessment prompts, delaying the update of the skill
bars to serve as feedback on students' self-assessment, and the integration of a high-level summary view of the OLM on the problem selection screen. These features are
tied to the progress information offered by the OLM, rather than the specific visualization of the skill meter, and thereby may also be integrated with other visualizations
of OLMs, such as concept maps.
4.6 Limitations and future work
There are some limitations of our current experiments that spur directions for future
work. Firstly, we evaluated the effects of the OLM as a whole, without separating
the effects of specific features. The correlations between the time spent viewing different components of the OLM and students’ learning outcomes shed some light on
understanding how the OLM might have enhanced students’ domain level learning.
However, we cannot rule out other factors that might have also contributed to the
enhanced learning outcomes. Therefore, future work can focus on designing experiments that tease out the effects of different OLM components, as well as conducting
more in-depth analyses of interaction logs from the tutor (Dimitrova and Brna 2016).
It may also be useful to conduct qualitative studies to uncover students’ perceptions of
these OLM features, such as interviews and think alouds with students. Secondly, the
current experiments did not investigate the influence of individual differences (e.g.,
self-efficacy, prior knowledge, working memory capacity) on how the students may
benefit from the OLM and shared control, which may be addressed in future work. The
interactions between the individual differences and the effects of the interventions may
help enable more personalized design of the system. Lastly, we could focus on designing new forms of shared control and OLMs to support other SRL processes (e.g., goal
setting), which may cause greater learning gains as well as higher motivation towards
learning.
5 Conclusions
This work investigated the effect of an OLM with integrated self-assessment support
and shared control over problem selection on students’ domain level learning outcomes, enjoyment, and self-assessment accuracy. We also explored the influence of
the OLM on students’ problem selection decisions, specifically, the order in which they
practiced the problem types in the tutor, as well as the effect of the student-selected
problem orders on learning and enjoyment.
Our first classroom experiment with 56 students showed some benefits of having an OLM for students' learning gains in equation solving: the students who learned with the OLM achieved significantly better performance on the post-tests. The second experiment with 245 participants replicated the procedure of the first, further solidified the results with respect to the effect of the OLM on students' domain level learning, and offered more nuance in explaining how the OLM may enhance learning when shared control over problem selection is available. The results of the two experiments illustrate the benefits of OLMs and make several contributions to research on open learner modeling and
supporting self-regulated learning in ITSs. First, although OLMs are often viewed as a
tool that can help learners reflect on their learning process (Bull and Kay 2010, 2016),
the current work is among the first controlled classroom experiments to establish that
an OLM can significantly enhance students’ domain level learning in an ITS through
scaffolding of critical SRL processes; this result was obtained with an ITS with shared
control over problem selection (between student and ITS). Although we found neither a significant improvement in self-assessment accuracy due to the OLM from pre- to post-test nor improved problem selection decisions attributable to the OLM, the self-assessment process facilitated by the OLM might have contributed to a more effective learning process and enhanced learning outcomes with shared control. The current
work also highlights the interdependence of support of self-assessment and shared
control over problem selection, both integrated with an OLM. The presence of shared
control amplifies the effect an OLM can have in facilitating active self-assessing and
reflecting, and together the two forms of support enhance domain level learning outcomes. This finding contributes to the research on SRL by demonstrating an effective
way of scaffolding both self-assessment and making problem selection decisions to
enhance domain level learning in ITSs.
A second contribution of the current work is that it extends the prior literature
on learner control by showing that shared student/system control accompanied by an
OLM can lead to learning outcomes comparable to “cognitive mastery,” a method for
individualized task selection used in ITSs that has been shown to substantially enhance
student learning over an ITS with a fixed problem sequence (Corbett 2000). Prior
work has generally found better learning outcomes with system control (Atkinson
1972; Niemiec et al. 1996), although in Corbalan et al. (2009), similar results were
obtained with shared control.
The current work contributes to the design of ITSs by offering design recommendations for ITSs that grant shared control to students over problem selection. Specifically,
it is critical to include an OLM in such systems. The OLM can also be integrated with features that facilitate self-assessment, i.e., self-assessment prompts, delayed updating of the skill bars, and a high-level summary view of the OLM on the problem
selection screen. We caution against offering full or shared control over task selection
to students without an OLM, as we found that even shared control without an OLM led
to worse learning outcomes than system control. Future work could investigate how
to help students learn to make good problem selection decisions with the information
offered by the OLM (Long and Aleven 2016).
To sum up, the current paper empirically establishes the beneficial role of using
an OLM to enhance students’ domain level learning outcomes through the support of
Self-Regulated Learning processes. The results of our experiments contribute to the
design of systems to facilitate SRL.
Acknowledgements We thank Jonathan Sewall, Borg Lojasiewicz, Octav Popescu, Brett Leber, Gail Kusbit and Emily Zacchero for their kind help with this work. We would also like to thank the participating
teachers and students. This work was funded by a National Science Foundation grant to the Pittsburgh
Science of Learning Center (NSF Award SBE0354420).
References
Aleven, V.: Rule-based cognitive modeling for intelligent tutoring systems. Advances in intelligent tutoring
systems. In: Nkambou, R., Bourdeau, J., Mizoguchi, R. (eds.) Studies in Computational Intelligence,
vol. 308, pp. 33–62. Springer, Berlin (2010)
Aleven, V., Koedinger, K.R.: Limitations of student control: do students know when they need help? In:
Gauthier, G., Frasson, C., VanLehn, K. (eds.) Proceedings of the 5th International Conference on
Intelligent Tutoring Systems, pp. 292–303. Springer, Berlin (2000)
Aleven, V., Koedinger, K.R.: Knowledge component approaches to learner modeling. In: Sottilare, R.,
Graesser, A., Hu, X., Holden, H. (eds.) Design Recommendations for Adaptive Intelligent Tutoring
Systems, pp. 165–182. Orlando, US Army Research Laboratory (2013)
Aleven, V., McLaren, B.M., Sewall, J., Koedinger, K.R.: Example-tracing tutors: a new paradigm for
intelligent tutoring systems. Int. J. Artif. Intell. Educ. 19(2), 105–154 (2009)
Aleven, V., McLaren, B.M., Sewall, J., van Velsen, M., Popescu, O., Demi, S., Koedinger, K.R.: Example-tracing tutors: intelligent tutor development for non-programmers. Int. J. Artif. Intell. Educ. 26(1),
224–269 (2016)
Anderson, J.R., Corbett, A.T., Koedinger, K.R., Pelletier, R.: Cognitive tutors: lessons learned. J. Learn.
Sci. 4(2), 167–207 (1995)
Atkinson, R.C.: Optimizing the learning of a second-language vocabulary. J. Exp. Psychol. 96(1), 124–129
(1972)
Arroyo, I., Ferguson, K., Johns, J., Dragon, T., Mehranian, H., Fisher, D., Barto, A., Mahadevan, S., Woolf,
B.: Repairing disengagement with non-invasive interventions. In: Proceedings of the International
Conference on Artificial Intelligence in Education, pp. 195–202. Marina del Rey, CA (2007)
Azevedo, R., Aleven, V. (eds.): International Handbook on Metacognition in Computer-Based Learning
Environments. Springer, Berlin, DE (2013)
Bandura, A.: Self-efficacy. In: Ramachaudran, V.S. (Ed.) Encyclopedia of Human Behaviour, vol. 4, pp.
71–81. Academic Press, New York (Reprinted in H. Friedman (Ed.). Encyclopedia of mental health,
San Diego: Academic Press, 1998) (1994)
Basu, S., Biswas, G., Kinnebrew, J.S.: Learner modeling for adaptive scaffolding in a computational
thinking-based science learning environment. User Model. User-Adapt. Interact. J. Personal. Res.
27 (2017) (Special Issue on Impact of Learner Modeling)
Bjork, R.A., Bjork, E.L.: Optimizing treatment and instruction: implications of a new theory of disuse.
In: Nilsson, L.G., Ohta, N. (eds.) Memory and Society: Psychological Perspectives, pp. 109–133.
Psychology Press, New York (2006)
Brusilovsky, P.: Adaptive navigation support: from adaptive hypermedia to the adaptive web and beyond.
Psychol. J. 2(1), 7–23 (2004)
Brusilovsky, P., Schwarz, E., Weber, G.: ELM-ART: an intelligent tutoring system on World Wide Web.
In: Frasson, C., Gauthier, G., Lesgold, A. (eds.) Proceedings of Third International Conference on
Intelligent Tutoring Systems, ITS-96, pp. 261–269. Springer, Berlin (1996)
Brusilovsky, P., Sosnovsky, S., Shcherbinina, O.: QuizGuide: Increasing the educational value of individualized self-assessment quizzes with adaptive navigation support. In: Nall, J., Robson, R. (eds.)
Proceedings of World Conference on E-Learning, pp. 1806–1813 (2004)
Brusilovsky, P., Somyürek, S., Guerra, J., Hosseini, R. Zadorozhny, V.: The value of social: comparing
open student modeling and open social student modeling. In: Proceedings of the 23rd Conference on
User Modeling, Adaptation and Personalization (UMAP 2015), June 29–July 3, 2015, Dublin, Ireland
(2015)
Bull, S.: Supporting learning with open learner models. In: Proceedings of the 4th Hellenic Conference:
Information and Communication Technologies in Education, Athens (2004)
Bull, S., Kay, J.: Metacognition and open learner models. In: Roll, I., Aleven, V. (eds.) Proceedings of
Workshop on Metacognition and Self-Regulated Learning in Educational Technologies, International
Conference on Intelligent Tutoring Systems, pp. 7–20 (2008)
Bull, S., Kay, J.: Open learner models. Advances in intelligent tutoring systems. In: Nkambou, R., Bourdeau,
J., Mizoguchi, R. (eds.) Studies in Computational Intelligence, vol. 308, pp. 301–322. Springer, Berlin
(2010)
Bull, S., Kay, J.: SMILI: a framework for interfaces to learning data in Open Learner Models (OLMs),
learning analytics and related fields. Int. J. Artif. Intell. Educ. 26(1), 293–331 (2016)
Bull, S., Jackson, T., Lancaster, M.: Students’ interest in their misconceptions in first year electrical circuits
and mathematics courses. Int. J. Electr. Eng. Educ. 47(3), 307–318 (2010)
Bull, S., Ginon, B., Boscolo, C., Johnson, M.D.: Introduction of learning visualisations and metacognitive
support in a persuadable Open Learner Model. In: Gasevic, D., Lynch, G. (eds.) Proceeding of Learning
Analytics and Knowledge 2016. ACM (2016)
Clark, C.R., Mayer, E.R.: E-Learning and the Science of Instruction: Proven Guidelines for Consumers and
Designers of Multimedia Learning. Jossey-Bass, San Francisco (2011)
Cleary, T., Zimmerman, B.J.: Self-regulation differences during athletic practice by experts, nonexperts,
and novices. J. Appl. Sport Psychol. 13, 61–82 (2000)
Corbalan, G., Kester, L., Van Merriënboer, J.J.G.: Selecting learning tasks: effects of adaptation and shared
control on efficiency and task involvement. Contemp. Educ. Psychol. 33(4), 733–756 (2008)
Corbalan, G., Kester, L., van Merriënboer, J.J.G.: Combining shared control with variability over surface
features: effects on transfer test performance and task involvement. Comput. Hum. Behav. 25(2),
290–298 (2009)
Corbett, A.: Cognitive mastery learning in the ACT Programming Tutor. AAAI Technical Report SS-00-01
(2000)
Corbett, A.T., Anderson, J.R.: Knowledge tracing: modeling the acquisition of procedural knowledge. User
Model. User-Adapt. Interact. 4(4), 253–278 (1995)
Corbett, A.T., Bhatnagar, A.: Student modeling in the ACT programming tutor: adjusting a procedural
learning model with declarative knowledge. In: Jameson, A., Paris, C., Tasso, C. (eds.) User Modeling.
Springer, New York (1997)
Cordova, D.I., Lepper, M.R.: Intrinsic motivation and the process of learning: beneficial effects of contextualization, personalization, and choice. J. Educ. Psychol. 88, 715–730 (1996)
Desmarais, M.C., Baker, R.S.: A review of recent advances in learner and skill modeling in intelligent
learning environments. User Model. User-Adapt. Interact. 22(1–2), 9–38 (2012)
Dimitrova, V., Brna, P.: From interactive open learner modeling to intelligent mentoring: STyLE-OLM and
beyond. Int. J. Artif. Intell. Educ. 26(1), 332–349 (2016)
Dunlosky, J., Lipko, A.: Metacomprehension: a brief history and how to improve its accuracy. Curr. Dir.
Psychol. Sci. 16, 228–232 (2007)
Flowerday, T., Schraw, G.: Effect of choice on cognitive and affective engagement. J. Educ. Res. 96(4),
207–215 (2003)
Gong, Y., Beck, J.E., Heffernan, N.T.: How to construct more accurate student models: comparing and
optimizing knowledge tracing and performance factor analysis. Int. J. Artif. Intell. Educ. 21(1–2),
27–46 (2011)
Hartley, D., Mitrovic, A.: Supporting learning by opening the student model. In: Cerri, S.A., Gouardères, G.,
Paraguaçu, F. (eds.) Proceedings of the 6th International Conference on Intelligent Tutoring Systems,
pp. 453–462. Springer, Berlin (2002)
Jameson, A., Schwarzkopf, E.: Pros and cons of controllability: an empirical study. In: De Bra, P.,
Brusilovsky, P., Conejo, R. (eds.) Adaptive Hypermedia and Adaptive Web-Based Systems: Proceedings
of AH 2002, pp. 193–202. Springer, Berlin (2002)
Kapur, M., Bielaczyc, K.: Designing for productive failure. J. Learn. Sci. 21(1), 45–83 (2012)
Kay, J.: Learner know thyself: student models to give learner control and responsibility. In: Halim, Z.,
Ottomann, T., Razak, Z. (eds.) Proceeding of the International Conference on Computers in Education.
AACE (1997)
Kerly, A., Bull, S.: Children’s interactions with inspectable and negotiated learner models. In: Woolf, B.P.,
Aïmeur, E., Nkambou, R., Lajoie, S. (eds.) Proceedings of the International Conference on Intelligent
Tutoring Systems, pp. 132–141. Springer, Heidelberg (2008)
Koedinger, K.R., Corbett, A.T.: Cognitive tutors: technology bringing learning science to the classroom. In:
Sawyer, K. (ed.) The Cambridge Handbook of the Learning Sciences. Cambridge University Press,
Cambridge (2006)
Koedinger, K.R., Aleven, V.: Exploring the assistance dilemma in experiments with cognitive tutors. Educ.
Psychol. Rev. 19(3), 239–264 (2007)
Koedinger, K.R., Corbett, A.T., Perfetti, C.: The Knowledge-Learning-Instruction (KLI) framework: toward
bridging the science-practice chasm to enhance robust student learning. Technical report, Carnegie
Mellon University, Human Computer Interaction Institute, Pittsburgh (2010)
Kostons, D., van Gog, T., Paas, F.: Self-assessment and task selection in learner-controlled instruction:
differences between effective and ineffective learners. Comput. Educ. 54(4), 932–940 (2010)
Lee, S.J.H., Bull, S.: An open learner model to help parents help their children. Technol. Instr. Cognit.
Learn. 6(1), 29–51 (2008)
Long, Y., Aleven, V.: Students’ understanding of their student model. In: Biswas, G., Bull, S., Kay, J.,
Mitrovic, A. (eds.) Proceedings of the 15th International Conference on Artificial Intelligence in
Education, pp. 179–186. Springer, Berlin (2011)
Long, Y., Aleven, V.: Active learners?: redesigning an intelligent tutoring system to support self-regulated
learning. In: Proceedings of EC-TEL 2013: Scaling up Learning for Sustained Impact, pp. 490–495
(2013a)
Long, Y., Aleven, V.: Supporting students’ self-regulated learning with an open learner model in a linear equation tutor. In: Proceedings of the 16th International Conference on Artificial Intelligence in
Education, pp. 219–228 (2013b)
Long, Y., Aleven, V.: Skill diaries: improve student learning in an intelligent tutoring system with periodic
self-assessment. In: Proceedings of the 16th International Conference on Artificial Intelligence in
Education, pp. 249–258 (2013c)
Long, Y., Aleven, V.: Gamification of joint student/system control over problem selection in a linear equation
tutor. In: Trausan-Matu, S., Boyer, K.E., Crosby, M., Panour-gia, K. (eds.) Proceedings of the 12th
International Conference on Intelligent Tutoring Systems, pp. 378–387. Springer, New York (2014)
Long, Y., Aleven, V. (2016). Mastery-oriented shared student/system control over problem selection in a
linear equation tutor. In: Proceedings of the 13th International Conference on Intelligent Tutoring
System, pp. 90–100
Mabbott, A., Bull, S.: Comparing student-constructed open learner model presentations to the domain. In:
Koedinger, K., Luckin, R., Greer, J. (eds.) Proceedings of the International Conference on Artificial
Intelligence in Education. IOS Press, Amsterdam (2007)
Martinez-Maldonado, R., Pardo, A., Mirriahi, N., Yacef, K., Kay, J., Clayphan, A.: The LATUX workflow:
designing and deploying awareness tools in technology-enabled learning settings. In: Proceedings of
the Fifth International Conference on Learning Analytics and Knowledge, pp. 1–10. ACM (2015)
Metcalfe, J.: Metacognitive judgments and control of study. Curr. Dir. Psychol. Sci. 18(3), 159–163 (2009)
Metcalfe, J., Kornell, N.: The dynamics of learning and allocation of study time to a region of proximal
learning. J. Exp. Psychol. Gen. 132(4), 530 (2003)
Metcalfe, J., Kornell, N.: A region of proximal learning model of study time allocation. J. Mem. Lang.
52(4), 463–477 (2005)
Mitrovic, A., Martin, B.: Scaffolding and fading problem selection in SQL-Tutor. In: Hoppe, U., Verdejo, F.,
Kay, J. (eds.) Proceedings of the 11th International Conference on Artificial Intelligence in Education,
pp. 479–481. Springer, Berlin (2003)
Mitrovic, A., Martin, B.: Evaluating the effect of open student models on self-assessment. Int. J. Artif.
Intell. Educ. 17(2), 121–144 (2007)
Mitrovic, A., Mayo, M., Suraweera, P., Martin, B.: Constraint-based tutors: A success story. In: Monostori,
L., Váncza, J., Ali, M. (eds.) Proceedings 14th International Conference on Industrial and Engineering
Applications of Artificial Intelligence and Expert Systems, pp. 931–940. Springer, Berlin (2001)
Niemiec, R.P., Sikorski, C., Walberg, H.J.: Learner-control effects: a review of reviews and a meta-analysis.
J. Educ. Comput. Res. 15(2), 157–174 (1996)
Parra, D., Brusilovsky, P.: User-controllable personalization: a case study with SetFusion. Int. J. Hum.-Comput. Stud. 78, 43–67 (2015)
Pardos, Z. A., Heffernan, N. T.: Modeling individualization in a bayesian networks implementation of
knowledge tracing. In: De Bra, P., Kobsa, A., Chin, D. (eds.) Proceedings of the 18th International
Conference on User Modeling, Adaptation, and Personalization, UMAP 2010, pp. 255–266. Springer,
Berlin (2010)
Pardos, Z.A., Bergner, Y., Seaton, D., Pritchard, D.E.: Adapting Bayesian knowledge tracing to a massive
open online college course in edX. In: D’Mello, S.K., Calvo, R.A., Olney, A. (eds.) Proceedings of the
6th International Conference on Educational Data Mining (EDM), pp. 137–144. Memphis, TN (2013)
Perez-Marin, D., Alfonseca, E., Rodriguez, P., Pascual-Neito, I.: A study on the possibility of automatically
estimating the confidence value of students’ knowledge in generated conceptual models. J. Comput.
2(5), 17–26 (2007)
Pintrich, P.R.: A conceptual framework for assessing motivation and self-regulated learning in college
students. Educ. Psychol. Rev. 16, 385–407 (2004)
Pintrich, P.R., De Groot, E.V.: Motivational and self-regulated learning components of classroom academic
performance. J. Educ. Psychol. 82, 33–40 (1990)
Schneider, W., Lockl, K.: The development of metacognitive knowledge in children and adolescents. In:
Perfect, T.J., Schwartz, B.L. (eds.) Proceedings of Applied metacognition, pp. 224–257. Cambridge
University Press, Cambridge (2002)
Schraw, G.A.: Conceptual analysis of five measures of metacognitive monitoring. Metacognit. Learn. 4(1),
33–45 (2009)
Schraw, G., Flowerday, T., Reisetter, M.: The role of choice in reader engagement. J. Educ. Psychol. 90,
705–714 (1998)
Thiede, K.W., Anderson, M.C.M., Therriault, D.: Accuracy of metacognitive monitoring affects learning
of texts. J. Educ. Psychol. 95, 66–73 (2003)
Tongchai, N.: Impact of self-regulation and open learner model on learning achievement in blended learning
environment. Int. J. Inf. Educ. Technol 6(5), 343–347 (2016)
University of Rochester: intrinsic motivation inventory (IMI) (1994). http://www.psych.rochester.edu/SDT/
measures/IMI_description.php. Accessed January 2013
Vandewaetere, M., Clarebout, G.: Can instruction as such affect learning? The case of learner control.
Comput. Educ. 57(4), 2322–2332 (2011)
VanLehn, K.: The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring
systems. Educ. Psychol. 46(4), 197–221 (2011)
VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Wintersgill, M.: The Andes
physics tutoring system: lessons learned. Int. J. Artif. Intell. Educ. 15(1), 147–204 (2005)
Waalkens, M., Aleven, V., Taatgen, N.: Does supporting multiple student strategies lead to greater learning
and motivation? Investigating a source of complexity in the architecture of intelligent tutoring systems.
Comput. Educ. 60, 159–171 (2013)
Winne, P.H., Hadwin, A.E.: Studying as self-regulated learning. In: Hacker, D.J., Dunlosky, J., Graesser, A.C.
(eds.) Metacognition in Educational Theory and Practice, pp. 277–304. Lawrence Erlbaum Associates,
Mahwah (1998)
Yudelson, M.V., Koedinger, K.R., Gordon, G.J.: Individualized bayesian knowledge tracing models. In:
Lane, H.C., Yacef, K., Mostow, J., Pavlik, P. (eds.) Proceedings of the 16th international conference
on artificial intelligence in education, AIED 2013, pp. 171–180. Springer, Berlin (2013)
Zimmerman, B.J., Martinez-Pons, M.: Development of a structured interview for assessing students’ use
of self-regulated learning strategies. Am. Educ. Res. J. 23, 614–628 (1986)
Zimmerman, B.J.: Attaining self-regulation: a social cognitive perspective. In: Boekaerts, M., Pintrich, P.,
Zeidner, M. (eds.) Handbook of Self-Regulation, pp. 1–39. Academic Press, San Diego (2000)
Yanjin Long University of Pittsburgh, Learning Research and Development Center, 3939 O’Hara Street,
Pittsburgh, PA, USA 15213. Dr. Yanjin Long was a Postdoctoral Research Associate at the Learning
Research and Development Center of the University of Pittsburgh. Dr. Long received her B.S. degree
in Psychology from Beijing Normal University and her M.A. degree in Cognitive Studies in Education
from Teachers College, Columbia University. She received her Ph.D. degree in Human-Computer Interaction from Carnegie Mellon University. Dr. Long has worked in several areas, including intelligent tutoring systems, user-centered design, self-regulated learning, open learner models, educational data mining, and educational games. Her work has won the Conference Best Student Paper Award at the 16th International
Conference on Artificial Intelligence in Education.
Vincent Aleven Carnegie Mellon University, Human Computer Interaction Institute, 5000 Forbes Avenue,
Pittsburgh, PA, USA 15213. Dr. Aleven is an Associate Professor in Human-Computer Interaction at
Carnegie Mellon University. He has over 20 years of research experience in advanced learning technologies based on cognitive and SRL theory. Much of his work investigates how the effectiveness of intelligent tutoring systems can be enhanced. Together with his colleagues and students, he has devised ways
to support SRL skills such as self explanation, help seeking, and self-assessment, with positive effects on
student learning. Aleven is co-editor of the International Handbook on Metacognition in Computer-Based
Learning Environments (Azevedo and Aleven 2013) and is co-editor-in-chief of the International Journal
of Artificial Intelligence in Education. He has over 200 publications to his name. He and his colleagues
and students have won seven best paper awards at international conferences. He is or has been PI on nine
major research grants and co-PI on ten others.