A PEDAGOGICAL FRAMEWORK
FOR INTEGRATING INDIVIDUAL LEARNING STYLE
INTO AN INTELLIGENT TUTORING SYSTEM
BY
SHAHIDA M. PARVEZ
PRESENTED TO THE GRADUATE AND RESEARCH COMMITTEE
OF LEHIGH UNIVERSITY
IN CANDIDACY FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
IN
COMPUTER SCIENCE
LEHIGH UNIVERSITY
DECEMBER 2007
Approved and recommended for acceptance as a dissertation in partial
fulfillment of the requirements for the degree of Doctor of Philosophy.
_______________________
Date
_______________________
Accepted Date
___________________________
Professor Glenn D. Blank
Dissertation Advisor
Committee Chair
Computer Science and Engineering,
Lehigh University
Committee Members:
___________________________
Professor Hector Munoz-Avila
Computer Science and Engineering,
Lehigh University
___________________________
Professor Jeff Heflin
Computer Science and Engineering,
Lehigh University
___________________________
Professor Alec Bodzin
College of Education, Lehigh
University
TABLE OF CONTENTS
ACKNOWLEDGEMENT .................................................................................................. v
LIST OF TABLES ............................................................................................................. vi
LIST OF FIGURES .......................................................................................................... vii
ABSTRACT ........................................................................................................................ 1
1 INTRODUCTION ........................................................................................................... 2
  1.1 Learning Styles .......................................................................................................... 3
  1.2 Adapting feedback to Learning Style in ITS ............................................................. 9
  1.3 Hypothesis ................................................................................................................ 13
  1.4 Research Questions .................................................................................................. 14
  1.5 Contributions ............................................................................................................ 16
2 RELATED WORK ........................................................................................................ 17
  2.1 Learning Style theories ............................................................................................ 17
  2.2 Felder-Silverman learning style model .................................................................... 25
  2.3 Application of learning styles in adaptive educational systems .............................. 34
  2.4 Intelligent Tutoring systems and feedback mechanisms ......................................... 39
  2.5 Pedagogical Modules in ITS .................................................................................... 50
3 PEDAGOGICAL ADVISOR IN DESIGNFIRST-ITS .................................................. 55
  3.1 Feedback .................................................................................................................. 59
    3.1.1 Advice/Hint mode .............................................................................................. 59
    3.1.2 Tutorial/Lesson mode ........................................................................................ 62
    3.1.3 Sync-mode ......................................................................................................... 63
    3.1.4 Evaluation mode ................................................................................................ 63
4 LEARNING STYLE BASED PEDAGOGICAL FRAMEWORK ............................... 64
  4.1 Feedback architecture .............................................................................................. 64
    4.1.1 Felder-Silverman learning dimensions/feedback types ..................................... 64
    4.1.2 Feedback components ........................................................................................ 65
    4.1.3 Feedback component attributes ......................................................................... 68
    4.1.4 Feedback components/learning styles dimension mapping ............................... 72
  4.2 Feedback Generation Process .................................................................................. 73
    4.2.1 Feedback Generation Process Inputs ................................................................. 75
    4.2.2 Selection Process ................................................................................................ 76
    4.2.3 Assembly process ............................................................................................... 79
    4.2.4 Learning Style Feedback .................................................................................... 83
5 PEDAGOGICAL FRAMEWORK PORTABILITY ..................................................... 91
6 FEEDBACK MAINTENANCE TOOL ....................................................................... 101
7 EVALUATION ............................................................................................................ 110
  7.1 Feedback evaluation ............................................................................................... 110
  7.2 Learning style feedback effectiveness evaluation .................................................. 114
  7.3 Object Oriented Design Tutorial Evaluation ......................................................... 123
  7.4 Feedback maintenance tool evaluation .................................................................. 124
8 CONCLUSION ............................................................................................................ 127
9 FUTURE WORK ......................................................................................................... 131
10 BIBLIOGRAPHY ...................................................................................................... 132
ACKNOWLEDGEMENT
I wish to express my sincere gratitude to my advisor, Dr. Glenn D. Blank, for
giving me the opportunity to work on this research project and for the guidance and
encouragement he has given me throughout my research. I would also like to thank my
Ph.D. committee members: Dr. Hector Munoz-Avila, Dr. Alec Bodzin and especially Dr.
Jeff Heflin for his guidance and help during my time at Lehigh.
I am grateful to many individuals at Lehigh University who helped me during my
studies at Lehigh. I am grateful to Fang Wei, Sharon Kalafut and especially Sally H.
Moritz for helping me in my research and participating in my evaluation studies. I am
also grateful to many of my fellow graduate students for their encouragement and support
when things did not go well.
Finally, I would like to thank my parents, my daughters and especially my
husband for their unconditional love and support. I dedicate this dissertation to my late
father for being my inspiration, to my mother for her tireless prayers and to my husband
for his consistent encouragement, support and love.
This research was supported by the National Science Foundation (NSF) and the
Pennsylvania Infrastructure Technology Alliance (PITA).
LIST OF TABLES
Table 1 – Characteristics of typical learners in Felder-Silverman learning style model . 27
Table 2 – Felder-Silverman model dimensions / learning preferences............................. 65
Table 3 – Learning style dimension and feedback component mapping .......................... 72
Table 4 – Concept/related concept .................................................................................... 95
Table 5 – Concept/action/explanation phrases ................................................................. 95
Table 6 – Error codes/concept/explanation phrases......................................................... 95
Table 7 – Student action record ........................................................................................ 96
Table 8 – Feedback evaluation ....................................................................................... 112
Table 11 – No-feedback group data ............................................................................... 116
Table 12 – Textual-feedback group data ......................................................... 117
Table 13 – Learning-style-feedback group data ............................................................. 118
Table 14 – Summary statistics ........................................................................................ 119
Table 15 – Pedagogical advisor evaluation .................................................................... 122
Table 9 – Tutorial evaluation .......................................................................................... 123
Table 10 – Feedback maintenance tool evaluation ......................................................... 125
LIST OF FIGURES
Figure 3-1 DesignFirst-ITS Architecture .......................................................................... 56
Figure 4-1 Feedback component attributes ....................................................................... 71
Figure 4-2 Definition Component ..................................................................................... 72
Figure 4-3 Picture component ........................................................................................... 72
Figure 4-4 Datatypes ......................................................................................................... 73
Figure 4-5 Attributes ......................................................................................................... 74
Figure 4-6 Feedback generation process........................................................................... 74
Figure 4-7 Feedback message ........................................................................................... 80
Figure 4-8 Substitution process. ....................................................................................... 81
Figure 4-9 Visual feedback examples ............................................................................... 84
Figure 4-10 Visual/sequential ........................................................................................... 85
Figure 4-11 Visual/global ................................................................................................. 85
Figure 4-12 Visual/global ................................................................................................. 86
Figure 4-13 Visual/sequential ........................................................................................... 86
Figure 4-14 Verbal/sequential........................................................................................... 87
Figure 4-15 Verbal/global ................................................................................................. 87
Figure 4-16 Visual/sensor ................................................................................................. 88
Figure 4-17 Visual/active .................................................................................................. 89
Figure 5-1 Feedback components ..................................................................................... 98
Figure 6-1 Feedback Maintenance Interface................................................................... 102
Figure 6-2 Input Advice feedback-1 ............................................................................... 103
Figure 6-3 Input Advice feedback-2 ............................................................................... 103
Figure 6-4 Input Advice feedback-3 ............................................................................... 104
Figure 6-5 Input Advice feedback-4 ............................................................................... 105
Figure 6-6 Input Advice Feedback-5 .............................................................................. 106
Figure 6-7 Input New Concept ....................................................................................... 107
Figure 6-8 View/modify/delete tutorial feedback-1 ....................................................... 107
Figure 6-9 View/modify/delete tutorial feedback-2 ........................................... 108
Figure 6-10 View/delete concept/related concept-1 ....................................................... 109
Figure 7-1 Feedback evaluation ...................................................................................... 113
Figure 7-4 Learning Gain – No-feedback group............................................................. 116
Figure 7-5 Learning gains – Textual-Feedback group .................................................... 117
Figure 7-6 Learning gains – learning-style-feedback group ........................................... 118
Figure 7-7 Learning gains for all three groups ............................................................... 119
Figure 7-8 Pedagogical advisor survey ........................................................................... 122
Figure 7-2 Tutorial evaluation - Questions 1-5............................................................... 124
Figure 7-3 Feedback maintenance tool evaluation ......................................................... 126
ABSTRACT
An intelligent tutoring system (ITS) provides individualized help based on a
student profile maintained by the system: a student model that includes each
student’s problem solving history and is used to individualize the tutoring content
and process. ITSs adapt to individual
students by identifying gaps in their knowledge and presenting them with content to
fill in these gaps. Although these systems are very good at identifying gaps and
selecting content to fill them, most do not address one important aspect of the
learning process: the learning style of the student.
Learning style theory states that different people acquire knowledge and learn
differently. Some students are visual learners; some are auditory learners; others learn
best through hands-on activity (tactile or kinesthetic learning).
The focus of this research is to integrate the results of learning style research into
the pedagogical module of an ITS by creating a learning style based pedagogical
framework that generates feedback specific to the learner. This integration of
individual learning styles will help an ITS become better adapted to the learner by
presenting information in the form best suited to his or her needs. This framework
has been implemented in the pedagogical module of DesignFirst-ITS, which helps
students learn object-oriented design. The pedagogical module assists students in
two modes: the advice/hint mode, which provides real-time feedback in the form of
scaffolds as the student works on his/her design solution, and the lesson/tutorial
mode, which tutors students about specific concepts.
1 INTRODUCTION
Intelligent tutoring systems (ITS) are valuable tools in helping students learn
instructional material both inside and outside of the classroom setting. These systems
augment classroom learning by providing an individualized learning environment that
identifies gaps and misconceptions in the student’s knowledge to provide him/her with
appropriate information to correct these misconceptions and fill in the gaps. A typical ITS
contains three main components: the expert module, the student model, and the
pedagogical module. The expert module contains the domain knowledge and methods to
solve problems; the student model keeps track of the student knowledge; and the
pedagogical module contains instructional strategies that it uses to help the student learn.
The purpose of the ITS is to replicate human tutoring behavior and provide
individualized help to each learner. A human tutor is able to observe a student’s
problem solving behavior, identify deficiencies in the student’s knowledge, and help the
student overcome these deficiencies. Likewise, ITSs adapt to individual students by
identifying gaps in their knowledge from their problem solving behavior and
then presenting them with appropriate content to bridge the gaps. Different ITSs use
different methodologies, such as comparing student solutions to one or more predefined
expert solutions, or analyzing the errors in student solutions, to determine how well the student
knows the domain concepts. Once the system knows what the student needs help with, it
can provide guidance by way of specific feedback.
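To make this diagnosis step concrete, here is a minimal Java sketch (hypothetical code, not drawn from any of the systems discussed): the concepts present in an expert solution but missing from the student's solution become the targets for feedback.

    import java.util.HashSet;
    import java.util.Set;

    final class GapFinder {
        // Concepts present in the expert solution but absent from the student's.
        static Set<String> missingConcepts(Set<String> expertConcepts,
                                           Set<String> studentConcepts) {
            Set<String> gaps = new HashSet<>(expertConcepts);
            gaps.removeAll(studentConcepts);
            return gaps;
        }

        public static void main(String[] args) {
            Set<String> expert = Set.of("class", "attribute", "method", "encapsulation");
            Set<String> student = Set.of("class", "attribute");
            // Prints the gaps that feedback would target, e.g. [method, encapsulation]
            System.out.println(missingConcepts(expert, student));
        }
    }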
Even though these systems are very good at identifying gaps and selecting content to
fill them, they only address one dimension of adaptability: the knowledge
level of the student. This being the case, students with similar knowledge gaps are
presented the same information content in the same format. The individual characteristics
and preferences of the student that impact his/her learning are not taken into account
while individualizing the tutoring content and process. These individual characteristics
and preferences of the students are dubbed individual learning styles.
1.1 Learning Styles
The term learning style refers to individual skills and preferences that affect how a
student perceives, gathers, and processes information (Jonassen & Grabowski, 1993).
Each individual has his/her unique way of learning material. For instance, some students
prefer verbal input in the form of written text or spoken words, while others prefer visual
input in the form of items such as maps, pictures, charts, etc. Likewise, some students
think in terms of facts and procedures, while others think in terms of ideas and concepts
(Felder, 1996). Researchers have identified individual learning styles as a very important
factor in effective learning. Jonassen and Grabowski (1993) describe learning as a
complex process that depends on many factors, one of which is the learning style of the
student.
Learning style research became very active in the 1970s and has resulted in more than 70
different models and theories. Some of the most cited are the Myers-Briggs Type
Indicator (Myers, 1976), Kolb’s learning style theory (Kolb, 1984), Gardner’s Multiple
Intelligences Theory, (Gardner, 1983) and Felder-Silverman Learning Style Theory
(Felder & Silverman, 1988; Felder, 1993). Even though there are so many different
learning style theories and models, not all researchers agree that learning style-based
instruction results in learning gains.
Studies involving the effectiveness of learning style-based instruction have yielded
mixed results with some researchers concluding that students learn more when presented
with material that is matched with their learning style (Claxton & Murrell, 1987), while
others have not seen any significant improvements (Ford & Chen, 2000). One of the
problems with determining the effectiveness of learning styles in an educational setting is
that there are many variables to consider, such as learner aptitude/ability, willingness,
motivation, personality traits, the learning task and context, prior student knowledge, and
the environment (Jonassen & Grabowski, 1993). In a classroom full of students, not all
individuals grasp the instructional material at the same pace and level of understanding.
Similarly, some students are more willing and motivated to work harder and learn more
than some others. The personality traits of each individual student also play an important
role in the learning process. Some students are naturally anxious, have low tolerance for
ambiguity and tend to get frustrated easily, while others are patient and are able to work
through ambiguity without getting frustrated.
The disparity in the data supporting the effectiveness of learning style-based
instruction has resulted in controversy in learning style research. Some of the potential
problems that critics see in the application of learning styles involve the potential to
pigeonhole students into a specific learning style and simply label them as such. Another
potentially problematic area is the stability of learning style (whether an individual’s
learning style can change over a period of time). Some researchers believe that learning
style is a permanent attribute of human cognition, while others believe that it can change
over time. All these issues and learning styles will be discussed in detail in the related
research section of this document.
In spite of all this controversy, learning style research has been integrated in
various settings and at different levels. In K-12 education, learning style models are used
to determine the individual learning style of children who are struggling as well as
children who are gifted. The results of the research are used to develop materials that can
be used to teach children with various learning styles (Dunn & Dunn, 1978). At the
college level, learning style models and instruments are used to determine the learning
style of the students and the teaching style of educators (Felder, 1996). The results are
used for multiple purposes such as making the students aware of their own learning
styles, helping students choose the best studying methods based on their individual
learning styles, and helping instructors translate these insights into creative class
materials that appeal to the vast majority of students, thereby improving their
teaching style.
In industry, corporations are using learning style research to create supportive work
environments that foster communication and productivity. The Myers-Briggs Type
Indicator® (MBTI) (Myers, 1976) is the most widely used instrument for understanding
personal preferences in organizations around the globe to assist in developing individuals,
leaders, and teams. The MBTI helps participants understand their motivations, strengths,
weaknesses, and potential areas for growth. It is also especially useful in helping
individuals understand and appreciate those who differ from themselves. Learning style
research is also being used in industry to create training materials that are suitable for
employees with different learning styles.
Learning style is also being integrated in adaptive e-learning environments with many
designers creating systems based on learning style research. Adaptive e-learning systems
are ideal for creating learning style-based instructional material as they do not face the
same limitations as human instructors who are unable to cater to individual students due
to the lack of resources (Jonassen & Grabowski, 1993). Adaptive educational hypermedia
(AEH) systems are an extension of hypermedia systems that contain information in the
form of static pages and present the same pages and the same links to every user. The
goal of adaptive hypermedia is to improve usability of hypermedia by adapting the
presentation of information and the overall link structure, based on a user model
(Brusilovsky, 1999). The user model usually consists of information such as student
knowledge of the subject, navigation experience, student preferences, background, goal,
etc. Many of these factors are determined by observing the student’s behavior and
interaction with the system. Information in the user model is used to provide presentation
adaptation and navigation adaptation (Brusilovsky, 1996).
AEH can provide two types of adaptation: adaptive presentation, which refers to the
form in which content is presented (text, multimedia, etc.), and adaptive navigation
support, which includes link hiding, annotation, direct linking, etc. (Brusilovsky, 2001).
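As an illustration, the following sketch derives one presentation decision and one navigation decision from a tiny user model of the kind Brusilovsky describes; all field names and thresholds are invented for this example.

    import java.util.ArrayList;
    import java.util.List;

    // A minimal user model: knowledge, navigation experience, and a media preference.
    record UserModel(int knowledgeLevel, boolean experiencedNavigator, String preferredMedium) {}

    final class AehAdapter {
        // Adaptive presentation: choose the form in which content is shown.
        static String presentationForm(UserModel u) {
            return u.preferredMedium().equals("graphics") ? "diagram with caption" : "full text";
        }

        // Adaptive navigation support: hide advanced links from inexperienced novices.
        static List<String> visibleLinks(UserModel u, List<String> basic, List<String> advanced) {
            if (u.knowledgeLevel() < 5 && !u.experiencedNavigator()) {
                return basic; // link hiding
            }
            List<String> all = new ArrayList<>(basic);
            all.addAll(advanced);
            return all;
        }
    }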
Traditionally, AEH systems adapt instructional material based on a student
knowledge model which consists of prior knowledge and ability. Recently, a number of
AEH systems have been developed that use various learning style models to personalize
domain knowledge and avoid the “one size fits all” mentality. These systems use two
different methods to obtain the learning style of the user. The first method is to have the
user fill out a learning style questionnaire which usually accompanies the learning style
model on which the system is based. The second method is to infer the student
preferences from his/her interaction with the system, such as the pages the student visits
and the links that he/she follows. After obtaining the student learning style, these systems
use that information to adapt the sequence and/or presentation form of the instructional
material to the student.
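Scoring a learning style questionnaire (the first method) is mechanically simple. The sketch below scores a single dimension of an ILS-style instrument in simplified form: each forced-choice item is answered 'a' or 'b', and the signed tally gives both the pole and the strength of the preference. The published ILS distributes eleven such items per dimension; details of the real instrument are omitted here.

    final class IlsStyleScorer {
        // answers: one 'a'/'b' character per item for a single dimension.
        static int dimensionScore(String answers) {
            int score = 0;
            for (char c : answers.toCharArray()) {
                score += (c == 'a') ? 1 : -1; // 'a' pulls toward one pole, 'b' toward the other
            }
            return score; // sign selects the pole; magnitude gives the strength
        }

        public static void main(String[] args) {
            // 8 'a' answers and 3 'b' answers over 11 items yield +5:
            // a moderate preference for the 'a' pole (e.g. visual over verbal).
            System.out.println(dimensionScore("aaabaabaaba"));
        }
    }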
CSC383 (Carver, Howard, & Lane, 1999), an AEHS for a computer systems
course, modifies content presentation using the Felder-Silverman learning
style model. Learners fill out the Index of Learning Styles (ILS) questionnaire, which
categorizes them as sensing/intuitive, verbal/visual and sequential/global (Felder &
Silverman, 1988). For example, sensing learners like facts, while intuitive learners like
concepts, visual learners like pictures/graphics, while verbal learners like written
explanations, and sequential learners like a step by step approach, while global learners
like to see the big picture right away. CSC383 matches the presentation form of the
content to the student’s learning style. For example, visual students are presented
information in graphical form, while verbal students receive the information in text form,
etc. Informal assessment conducted over a 2-year period, including feedback from
teachers and instructors, indicated that students gained a deeper understanding of the domain
material. Different students rated different media components on a best to worst scale,
indicating that students have different preferences. Instructors also noticed dramatic
changes in the depth of student knowledge with substantial increases in the performance
of the best students.
AES-CS (Triantafillou, Pomportsis, & Demetriadis, 2003) is an AEHS that is based
on Witkin’s field dependence/independence model, which is a bipolar construct. The two
ends of the spectrum are field dependence and field independence, which relate to how
much a learner is influenced by the environment. AES-CS adapts the navigation aids
based on the cognitive style of the user. Before starting the tutorial, the student fills out a
learning style questionnaire to determine their learning style. During the tutorial, the
student also has an ability to change his student model. The system adapts the learner
control (either as directed by the student or by observing the student’s navigation), and
lesson structure (concept map or graphic indicator). An evaluation of the system was
conducted with 64 students, half of whom used the AES-CS and half used traditional
hypermedia. The evaluation results suggest that learners performed better with the
adaptive system than with the traditional system.
ACE (Specht & Oppermann, 1998) adapts content presentation and sequence based on
various teaching strategies such as learning by example, learning by doing, and reading
text. Adaptation takes place at two levels, the sequencing of learning units and the
sequencing of learning material within each unit. The sequence of the material is
dependent on the current strategy. A particular strategy is chosen according to the
students’ interactions with the system and based on the success of the current strategy,
which is measured by how well the student does on tests. Studies have shown that
learning style adaptability improves both efficiency and learning outcomes when
compared to non-adaptive hypermedia that simply displays static pages and links.
Evaluations of these systems and other learning style-based adaptive hypermedia
have shown that adapting the learning environment to the individual learning style of each
student does result in increased learning gains.
Even though there is much controversy about learning styles in the context of
adapting learning environments and instructional content for individual students, they are
being used in various settings to create learning environments that are suitable for
students with different learning styles. They are being used to make teaching and learning
more effective by providing insight into how different students approach learning and
trying to address the variety of approaches through teaching styles. They are also being
used in adaptive educational systems to adjust the instructional material to suit students
with various learning styles. Learning styles have also been used in industrial settings to
improve communication and productivity of the employees. Based on their use in various
settings, learning styles do show promise for use in intelligent tutoring systems.
1.2 Adapting feedback to Learning Style in ITS
There are a number of challenges in creating a learning style-based ITS
pedagogical module, such as selecting the appropriate learning style model, creating a
learning environment and instructional material to match the underlying learning style
model, and addressing the multiple dimensions of the learning style model. Selecting an
appropriate learning style model is very important because not all learning style models
address the characteristics that can be used in customizing instructional materials and
learning environments. Researchers categorize various learning style models using
Curry’s (1983) onion metaphor which has four distinct layers. Personality (basic
personality characteristics) is the innermost layer, information processing (how people
take in and process information) is the second layer, social interaction (student behavior
and interactions in the classroom) is the third layer, and instructional preference is the
fourth and outermost layer. The traits at or closer to the core are the most
stable and less likely to change in response to different teaching environments. The
instructional layer refers to the individual choice of learning environment and is the most
observable yet unstable layer. Information processing refers to an individual’s intellectual
approach to processing information and is considered a rather stable layer (Jonassen &
Grabowski, 1993). The two most relevant layers to learning style adaptability are the
instructional preferences and information processing layers. One addresses the student’s
preferences for the environment and the other addresses the content and presentation of
instructional material. The Felder-Silverman learning style model that is the basis for the
pedagogical framework in this dissertation falls into these two layers. This model will be
described in detail in the related work section.
Another challenge is that most learning style models are multidimensional, which
makes creating adaptive learning content and environments more complex. In order to
address all different dimensions of a given model, one has to create multi-dimensional
feedback. Not all the dimensions of a given model are applicable to all of the different
learning contexts and situations. One way that AEH systems address this problem is that
they only use selective dimensions of a given model to create the adaptive environment
(EDUCE, CSC383).
Yet another challenge in creating a learning style based pedagogical module is
that most learning style theories do not provide any guidance on how to create
instructional materials and environments based on a given model. There is no standard
methodology that one can follow to create instructional material and environments to
match the underlying learning style model. Most AEH developers create systems based
on the description of the dimensions in the model and use experts to determine if their
adaptive content and environment match the underlying model. In an ITS, this is an even
more difficult task because the ITS focuses more on student interpretation and
understanding of the domain knowledge rather than just the presentation mode and delivery
of it, as in AEH systems.
Intelligent tutoring systems help a student learn domain knowledge by
diagnosing the source of mistakes that the student makes and providing feedback that is
targeted to the source of the mistake. Different intelligent tutoring systems use different
approaches in providing feedback to students. For example, ANDES (Gertner &
VanLehn, 2000), a successful tutor for teaching Newtonian physics, employs the model
tracing methodology to trace the solution path of the student and provides feedback to the
student when he/she strays off the solution path. The model tracing methodology helps
tutors provide problem solving support similar to a human tutor who follows the
student’s problem solving behavior step by step, jumps in and offers the appropriate level
of help when the student makes a mistake (Merrill, Reiser, Ranney, & Trafton, 1992).
The model tracing methodology is also employed in other successful tutors such as the
PUMP algebra tutor (Koedinger, 2001), and LISPITS (Corbett & Anderson, 1992), a tutor
for LISP. Another common attribute of these successful ITSs is that like a human teacher,
they offer multiple levels of feedback, starting with a general hint and proceeding to more
specific hints related to the student’s erroneous action. If the student does not respond
well to the feedback, then he/she is given the next step in the solution.
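The escalation policy these tutors share can be sketched as a small lookup keyed on repeated failures; the step identifiers and messages below are invented for illustration and are not the hint logic of ANDES, PUMP, or LISPITS.

    import java.util.HashMap;
    import java.util.Map;

    final class HintLadder {
        // Number of failed attempts recorded per solution step.
        private final Map<String, Integer> failures = new HashMap<>();

        // Each repeated error on the same step escalates the feedback level.
        String hintFor(String stepId) {
            int level = failures.merge(stepId, 1, Integer::sum);
            switch (level) {
                case 1:  return "General hint: reconsider what " + stepId + " asks for.";
                case 2:  return "Specific hint: check the quantity you used in " + stepId + ".";
                default: return "Bottom-out: here is the next step of the solution for " + stepId + ".";
            }
        }
    }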
Constraint-based tutoring is another methodology for an ITS to provide feedback
to students. Constraint-based tutors represent the domain model as a set of constraints.
These systems analyze the student solution by determining the constraints that it violates.
These systems do not try to determine the underlying cause of student mistakes because
they are based on the “learning from performance error” theory (Ohlsson, 1996). This
theory states that humans make mistakes while performing a learned task because they
violate a rule in the procedure that helps them apply a piece of knowledge. This theory
also claims that if the task is practiced enough and the student is aware of the errors that
he/she has made, he/she will eventually fix the rule that he/she has violated when he/she
made the mistake. Therefore, these ITSs do not attempt to find the underlying cause of
the mistake. The feedback they provide is linked to each constraint that the student
solution violates. The system does not provide feedback until the student asks for
it. These systems have multiple levels of feedback, ranging from no feedback, through
feedback for each violated constraint, to feedback about the entire solution.
There are certain benefits to this type of student modeling and feedback strategy, notably
efficiency since it does not use any complicated computational algorithm to model the
student. Also, the feedback is quite direct, to the point and simple to create and maintain.
Many ITSs, with the exception of constraint-based tutors, react to the students’ erroneous
actions immediately because they do not want the students to go on the wrong path.
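A minimal sketch of constraint-based diagnosis as just described might look like the following, where the generic solution type and the predicates are placeholders: the domain is a list of constraints, each pairing a relevance condition with a satisfaction condition, and feedback is collected from every constraint that is relevant but violated.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;

    // A constraint: when 'relevant' holds for a solution, 'satisfied' must also hold.
    record Constraint<S>(Predicate<S> relevant, Predicate<S> satisfied, String feedback) {}

    final class ConstraintChecker {
        // Return the feedback attached to every relevant but violated constraint.
        static <S> List<String> diagnose(S solution, List<Constraint<S>> domain) {
            List<String> messages = new ArrayList<>();
            for (Constraint<S> c : domain) {
                if (c.relevant().test(solution) && !c.satisfied().test(solution)) {
                    messages.add(c.feedback());
                }
            }
            return messages; // shown only when the student asks for feedback
        }
    }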
The pedagogical framework that is the focus of this dissertation uses elements of
successful ITSs. In this framework, the system reacts to student errors immediately,
providing feedback based on how well the student understands domain concepts, as well
as providing multiple levels of feedback in the context of the current problem/solution.
But it also adds another dimension of adaptability: taking into account how a
student takes in and processes information. The advantage of this framework is that it
provides feedback that is best suited to the learning style of the student. In addition to the
feedback, this pedagogical framework is designed to provide a tutorial on domain
knowledge concepts, which also matches the lesson content with the learning style of
the student.
1.3 Hypothesis
Learning styles play an important part in the learning process, and educators and
researchers are using them to design instructional material and educational systems that adapt
to individual learners based on their individual learning styles. Evaluation studies of
learning style-based adaptive educational systems show that these systems do result in
increased learning gains.
Intelligent tutoring systems help students learn domain knowledge by guiding them
in problem solving activities and providing feedback on their work. Typically, this
feedback is adapted to the student knowledge model only and does not take into account
the individual learning style of the student. It is not a trivial task to create learning style
feedback as there are many issues as to what individual characteristics should be used,
how the feedback should be created and organized, when and how this feedback should
be provided, etc.
There are many learning style theories and models that describe how people take in
and process information. I propose that it is possible to use a learning style model to
create a pedagogical framework that would allow an ITS to create and provide learning
style-based feedback. This pedagogical framework would consist of a feedback
architecture that would address different dimensions of learning style and a methodology
that would use this architecture to create feedback that is appropriate for individual
students in the context of their problem solving behavior.
1.4 Research Questions
The focus of this research was to create a pedagogical framework based on the
Felder-Silverman learning style model that can serve as the basis for creating a
pedagogical system that supports individual learning styles. The Felder-Silverman
learning style model was chosen for this research for many reasons: it has been
successfully used by instructors to create traditional and hypermedia courses; it has
limited dimensions that make it feasible to create multidimensional feedback; it is
accompanied by a validated instrument that makes it easy to categorize the learner’s
specific learning style. The Felder-Silverman model is discussed in detail in the related
work section. This pedagogical framework helps an ITS adapt to an individual learner by
presenting domain knowledge in a form that is consistent with his/her learning style.
This pedagogical framework has been implemented in DesignFirst-ITS, an ITS for
novices learning object-oriented design using UML and Java, a complex and open-ended
problem solving task for novice learners. A learning style-based approach is ideal for
DesignFirst-ITS because students have difficulty learning this domain and learning style
feedback could make it easier for the students to learn object-oriented design concepts.
This dissertation attempts to answer the following questions:
1. How can a learning style-based feedback architecture be created using a learning
style model?
2. How can this feedback architecture be used to create learning style-based
feedback?
3. How can this feedback architecture be generalized to make it domain
independent?
4. How can this feedback architecture be made extendible, such that the instructor
can easily add/update the feedback without requiring any help from the ITS
developer?
5. How can this feedback architecture be used to incorporate multiple pedagogical
strategies into an ITS?
6. How effective is this learning style ITS in helping students understand the domain
knowledge?
Research question 1 (feedback architecture) is addressed by creating different feedback
categories, levels, and components based on the Felder-Silverman learning style model.
Each of these feedback components has a set of attributes that contain information about
the component such as relevant concept, category, feedback level, feedback type, etc.
Research question 2 is answered by developing a process that makes use of these
attributes to assemble and create feedback during the tutoring process. This process takes
into account student profile information such as the knowledge level of the student,
feedback history, and learning style preferences. Research question 3 (domain
independence) is addressed by generating a sample of learning style feedback for another
domain. Question 4 (extensibility) is addressed by a graphical user interface that guides
an instructor to add/modify feedback information to the pedagogical framework.
Question 5 (multiple strategies) is addressed by creating feedback that implements
strategies such as learning by example and learning by doing. The last question, question
6, is addressed by designing an evaluation experiment involving human subjects.
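The shape of the answers to questions 1 and 2 can be suggested with a simplified sketch; the attribute set and matching rules below are illustrative stand-ins rather than the actual DesignFirst-ITS implementation. Feedback components carry attributes (concept, category, level, type), and the selection step filters the component pool against the student profile before assembly.

    import java.util.List;

    // A feedback component and its attributes (simplified).
    record FeedbackComponent(String concept, String category, int level, String type) {}

    // The slice of the student profile the selection step consults (simplified).
    record StudentProfile(String weakConcept, int nextFeedbackLevel, String preferredType) {}

    final class FeedbackSelector {
        static List<FeedbackComponent> select(StudentProfile s, List<FeedbackComponent> pool) {
            return pool.stream()
                       .filter(c -> c.concept().equals(s.weakConcept()))
                       .filter(c -> c.level() == s.nextFeedbackLevel())
                       .filter(c -> c.type().equals(s.preferredType())) // e.g. "visual"
                       .toList(); // the assembly step would then order and render these
        }
    }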
1.5 Contributions
This research has contributed to several different domains. First, it advances the field
of intelligent tutoring systems by taking the adaptability of an ITS one step further,
catering to the needs of individual learners. It provides a novel pedagogical framework
based on the Felder-Silverman learning style model, which was developed specifically to
address the needs of engineering/science students. This domain-independent framework
provides developers with a standard methodology to integrate learning styles into an ITS
without starting from scratch.
This pedagogical framework is extendible and allows an instructor to add
additional feedback through a graphical user interface, thereby minimizing the task of
knowledge acquisition. This automated knowledge acquisition eliminates the middleman
and allows experts to add their knowledge into the system so that it is instantly usable.
In summary, the contributions of this research are:
1. It provides a novel domain-independent pedagogical framework to integrate
learning styles into an intelligent tutoring system. This framework provides a
standard methodology to ITS developers to adapt the feedback to the needs of
individual learners.
2. This research contributed towards creating a pedagogical advisor in DesignFirst-ITS,
an ITS for teaching object-oriented design and programming.
3. This research provides a novel graphical user interface for extending the feedback
network.
4. The object-oriented design tutorial can be used by an instructor as a resource to
introduce object-oriented concepts to introductory class students.
2 RELATED WORK
This chapter will discuss the relevant background research, which falls into four
categories: learning style research (models and instruments); application of learning style
research in adaptive educational systems; intelligent tutoring systems and feedback
mechanisms; and pedagogical modules in intelligent tutoring systems.
2.1 Learning Style theories
According to Sim & Sim (1995), effective instruction and training have to go beyond
the delivery of information and take into account the model of minds at work. Effective
instructors do not view the students as sponges ready to absorb information that is
delivered to them; instead they see students as active participants in their own learning
process. The instructor can create an environment that is conducive for all students by
acknowledging the validity and presence of diverse learning styles and using instructional
design principles that take into account the learning differences of students, thereby
increasing the chances of success for all different types of learners (Sim & Sim, 1995).
Learning style is a term that has been used to refer to many different concepts
such as cognitive style, sensory mode, etc. As a result, there seem to be as many
definitions of learning style as there are researchers in the field. Cornett (1983)
defined learning style as “a consistent pattern of behavior but with a certain range of
individual variability.” Messick & Associates (1976) define learning styles as
“information processing habits representing the learner’s typical mode of perceiving,
thinking, problem-solving, and remembering.” The most widely accepted definition of
learning style came from Keefe (1979) who defines learning style as the “composite of
characteristic cognitive, affective and psychological factors that serve as relatively stable
indicators of how a learner perceives, interacts with and responds to the learning
environment.”
Researchers have developed many different learning style models to explain how
different people approach learning as well as how they acquire and process information.
All these models offer a different perspective on what elements of individual
characteristics affect the learning process. Claxton and Murrell (1987) categorize
different learning style models using Curry’s four layer onion metaphor as noted
previously. The innermost layer is the cognitive personality layer, which describes the
most stable attributes. The next layer is the information processing layer that describes
how people process information. The third layer is the social interaction layer, which
contains models that explain how students act and interact in classroom environment. The
fourth layer, which is the outermost layer, describes the student preferences with respect
to instructional environment.
The cognitive personality layer contains Witkin’s bipolar construct of field
dependence/field independence and the Myers-Briggs Type Indicator (Myers, 1976).
Witkin’s model is a bipolar construct that classifies individuals as field dependent or field
independent depending upon how much the individual is influenced by his surroundings.
Field independent individuals tend to be highly motivated, independent, and nonsocial
people who think logically/analytically and are not influenced by their surroundings. On
the other hand, field dependent individuals are very much affected by their environment,
have difficulty extracting important information from their surroundings, have a short
attention span and tend to make decisions based on human factors rather than logic.
Witkin developed the Embedded Figures Test (EFT) and the Group Embedded Figures Test (GEFT)
to categorize people as field independent or field dependent (Jonassen & Grabowski,
1993).
The Myers-Briggs Type Indicator (MBTI) consists of four dichotomous scales:
introvert/extrovert (I-E), thinking/feeling (T-F), sensing/intuiting (S-N), and
judging/perceiving (J-P). There are sixteen possible personality types that one can fall
into based on the indicators set up by Myers and Briggs; for example, an individual
could be ISTJ (introvert, sensor, thinker and judger) while another person could be
ESFP (extrovert, sensor, feeler and perceiver). Extroverts are outgoing, try things out
before thinking and interact with people, whereas introverts are reserved and think before
trying things out. Thinkers use logic to make decisions, while feelers make decisions
based on personal and humanistic elements. Sensors are detail-oriented and focus on
facts, while intuitors are imaginative and concept-oriented. Judgers are organized and
plan, while perceivers are spontaneous and can adapt to a changing environment. The
MBTI instrument is used to determine the personality type of an individual.
The personality learning models, such as the MBTI and Witkin's model, have been
used in education to determine how people with different personality types approach
learning. There have been numerous studies conducted using these two models to
categorize the personalities of instructors and students and the results have been used to
develop various curricula and programs to accommodate students with different types
of personalities (Felder, 1996; Jonassen & Grabowski, 1993; Messick et al., 1976). These
models have been used by various corporations to assess personalities of employees for
different purposes: to match people to their ideal jobs based on their personality type; to
create work environments where people understand individual differences, all leading to
better communication. The MBTI was bought by Consulting Psychologists Press Inc. and
is available as a commercial product that has been translated into 30 different languages
with more than 100 training manuals and books.
The information processing layer contains Kolb’s experiential learning model
(Kolb, 1984) and Theory of Multiple Intelligences (Gardner, 1983). Kolb views learning
as a multi-stage process that begins with stage 1, when the learner goes through concrete
learning experiences. In stage 2, the learner reflects on his/her concrete experience. In
stage 3, the learner derives abstract concepts and generalizations. Ultimately, the learner
tests these generalizations in new situations using active experimentation in stage 4. Kolb
identifies 4 different types of learners in his model: diverger (creative, generates
alternatives), assimilator (defines problems, creates theoretical models), converger (likes
practical applications, makes decisions), and accommodator (takes risks, gets things
done). Kolb developed a learning style inventory (LSI) to assess these student learning
styles.
The theory of Multiple Intelligences (MI) approaches the learning process from an
intelligence point of view (Gardner, 1983, 1993). This theory was developed by Howard
Gardner, a professor of Education at Harvard University. Gardner views the traditional
definition of intelligence (measured by the I.Q. test) as too narrow and proposes the
following eight different types of intelligences: linguistic (being able to use words and
language); logical/mathematical (having the ability to use numbers and being skilled at
reasoning, problem solving and pattern recognition); musical (being skilled at producing
and recognizing rhythms, beats, tonal patterns and sounds); spatial (being able to
accurately perceive and visualize objects, spatial dimensions and images);
bodily/kinesthetic (being skilled at controlling body movements and handling objects);
naturalist (being skilled at dealing with the various functions and mechanisms of life);
interpersonal (having the capacity for person-to-person communication and
relationships); and intrapersonal (having the capacity for spirituality, self-reflection and
self-awareness).
The traditional educational process is geared towards the first two intelligences,
leaving the rest ignored. Gardner believes that students should have an opportunity to use
all the preferred intelligences during learning and that teachers should stimulate the
student to do so. Gardner also believes that differences among students have to be taken
into account to personalize the educational process. The teaching material should be
varied, meaning it should include activities such as multimedia, role playing, active
learning, cooperative learning, field trips, etc.
The social interaction layer of the onion contains the Grasha-Riechmann Student
Learning Style Scales model (Grasha, 1972; Riechmann & Grasha, 1974). This model focuses
on a student's attitudes toward learning, classroom activities, teachers, and peers. It
describes learners as independent (self-motivated, needs little direction, confident);
dependent (reliant on teachers, little intellectual curiosity); collaborative (sharing,
cooperative, likes working with others); competitive (motivated by competition, wants to
do and be the best); participant (enjoys attending class, takes responsibility for learning,
does what is required); and avoidant (uninterested in course content, does not participate).
The instructional environment layer contains the Dunn and Dunn learning style
model (1978), which is a complex model based on environmental and instructional
preferences. Its dimensions are: environmental (sound, light, temperature, and classroom
design), emotional (motivation, responsibility, persistence, and structure), sociological
(learning alone or in groups, presence of authority figure, learning routine patterns) and
physiological (perception, intake, time, and mobility). This model contains two
instruments to measure the factors that Dunn and Dunn found significant for
learning: the Learning Style Inventory (LSI) (Dunn, Dunn, & Price, 1979, 1989a) for
children, and the Productivity Environmental Preference Survey (PEPS) for adults (Dunn et
al., 1982, 1989b).
Research on learning styles evolved from psychological research on individual
differences, which was widespread in the 1960s and 1970s (Curry, 1987). Learning style
research has resulted in the development of more than 70 models and instruments that
have been used to understand how individuals approach learning. In spite of the growing
popularity of and interest in learning styles, it is still a controversial subject, and researchers
have yet to agree on any aspect of learning style, including its definition. One
explanation of this disagreement comes from Hickcox (1995), who believes that two
different learning style research strands, with different approaches towards learning style,
cause the disparity among researchers. According to Hickcox (1995),
The North American researchers developed their concept of
learning style from their background in psychology and
cognitive psychology and emphasized psychometric
considerations from the beginning. European and
Australian researchers developed concepts based on
European approach to learning style research. This
approach began with detailed observations of learning
behaviors of small numbers of learners. As a result, the
conceptualization and interpretation of observable behavior
of the learner is viewed differently by both groups. North
American researchers have focused on behavioral strategies
that learners use, and which, by their nature, are unstable
and relatively easy to change. European and Australian
researchers have regarded observed learning behaviors as
indicative of underlying psychological characteristics that
are stable and relatively difficult to change.
This largely explains the array of definitions for learning styles and the difference of
opinion over learning style stability.
Lack of a standard definition for learning style has resulted in learning style models
and instruments that are based on different concepts of learning style, and that therefore
vary in their standards for reliability and validity as psychometric instruments (Curry,
1987). Researchers including Claxton and Murrell (1987), Hickcox (1995), and Messick
and Associates (1976) bring up another interesting point in relation to validity and
reliability of psychometric instruments, which is related to the ethnicity of learners. Most of
these instruments were developed and validated using samples of college-educated
Caucasian students. So when these instruments are used with diverse populations, the
reliability and validity of the instrument might not hold. The increasingly diverse
population has prompted researchers to conduct research on various ethnic groups. One
study involving Native American students in a biology course at a community college
was performed with a focus on improving the curriculum and teacher-student learning
process (Haukoos & Satterfield, 1986, cited in Claxton & Murrell, 1987). Data was
gathered from a group of 20 native students and 20 nonnative students. The native
students were found to be visual-linguistic in their behavior and preferred not to express
themselves orally, while the nonnative students were mostly auditory-linguistic and
preferred to express themselves orally. Based on the result of the study, the course for
Native American students was modified to accommodate their visual-linguistic
tendencies and include more discussions rather than lectures, more time for student
questions, slides and graphics, as well as small study groups. These changes had a
tremendous impact: group interactions improved, the course completion rate
increased, and more students went on to pursue advanced degrees (Claxton & Murrell,
1987).
One problem that many learning style critics have with categorizing learning styles
is a concern that students will be labeled and pigeon-holed into one learning style
category. Researchers who support use of learning styles insist that the purpose of
obtaining learning style information is to help teachers design classes that appeal to a
majority of students (Felder & Brent, 2005; Hickcox, 1995). Teachers can encourage
collaborative and active learning by augmenting a traditional lecture style with charts,
diagrams, pictures and slides. Felder and Brent (2005) advocate another use of learning
style information, which is to help students understand exactly how they learn so they can
try to maximize their learning opportunities by using strategies that work best with their
particular learning style. However, supporters of learning styles do not advocate that
students should only be taught with instruction matched to their learning styles. In fact,
many researchers acknowledge the importance of challenging students to promote
flexible thinking by presenting information that is mismatched with their learning style
(Messick & Associates, 1976).
Even though learning style research lacks focus in terms of a consistent definition
for the concept and seems to be full of disagreements, it has still made a significant
difference in changing the role of learners in the educational process. Learners are
considered active participants in the learning process as opposed to sponges that absorb
information. Learning style research has a lot to offer as it gives the student and teacher
insight into the most effective medium for maximum knowledge transfer. Knowing
learning styles of the students in a class enables the teacher to develop instructional
material that can be effective for students of various learning styles. It also helps the
student to know their own learning styles as it could help them study effectively and
efficiently at their present grade level and into the future of their learning experiences.
2.2 Felder-Silverman learning style model
The Felder-Silverman learning style model will be used for the pedagogical
framework described in this document. The reasons for choosing this model will be
described in detail later in this document. This model was developed at North Carolina
State University by Richard Felder and Linda Silverman to improve engineering
education (Felder & Silverman, 1988). According to Felder, optimal learning takes place
when the reception of information is aligned with the manner in which it is processed.
Each person has his/her own unique way of processing information. For example, when going to a new location, some people prefer written directions while others choose to use a map. Each individual will eventually reach the destination, but one may arrive on time while the other arrives late because the information was delivered in a mismatched form: one person could not read the map, or the other could not follow the written directions. The analogy applies to a student who receives instruction in a form that is not matched to his/her learning style: the student might eventually understand the content, but not without some level of frustration.
According to Felder and Brent (2005), a student’s learning style can be defined by
answering the following four questions:
1. Information Perception: “What type of information does the student preferentially perceive: sensory (external) — sights, sounds, physical sensations; or intuitive (internal) — memory, thought, insights?”
2. Input Modality: “Through which sensory channel is external information most effectively perceived: visual — pictures, diagrams, graphs, demonstrations; or verbal — written and spoken explanations?”
3. Information Processing: “How does the student prefer to process information:
actively— through engagement in physical activity or discussion; or
reflectively— through introspection?”
4. Understanding: “How does the student progress toward understanding:
sequentially (in logical progressions of incremental steps); or globally (in
large jumps, viewing the big picture, holistically)?”
These questions may imply that the Felder-Silverman learning model categorizes a student’s learning style into discrete categories: sensing-intuitive, visual-verbal, active-reflective, and sequential-global. In fact, the learning style dimensions of this model are
continuous and not discrete categories. This means that the learner’s preference on a
given scale does not necessarily belong to only one of the poles. It may be strong, mild,
or almost non-existent. Table 1 summarizes learning environment preferences of typical
learners from each of the four dimensions of the Felder-Silverman model.
Active       Tries things out, works within a group, discusses and explains to others
Reflective   Thinks before doing something, works alone or as a pair
Sensing      Learns facts, solves problems by well-established methods, is patient with details, memorizes facts, works slower, likes hands-on work
Intuitive    Discovers possibilities and relationships, is innovative, grasps new concepts, abstractions, and mathematical formulations, works quickly
Visual       Visualizes mentally with pictures, diagrams, flow charts, time lines, films, multimedia content, and demonstrations
Verbal       Comprehends written and spoken explanations
Sequential   Learns and thinks in linear/sequential steps
Global       Learns in large leaps, absorbing material almost randomly

Table 1 – Characteristics of typical learners in the Felder-Silverman learning style model
Felder and Silverman (1988) propose a multi-style teaching approach to accommodate students with various learning styles. They believe that most instructors tend to favor their own learning style, or teach the way they were taught, usually through traditional lecture-style courses that favor intuitive, verbal, reflective, and sequential learners. Felder proposes a teaching style model that parallels the learning style model. A teaching style can be defined by answering the following questions:
1. What type of information is emphasized by the instructor: concrete (factual)
or abstract (conceptual, theoretical)?
2. What mode of presentation is stressed: visual (pictures, diagrams, films,
demonstrations) or verbal (lectures, readings, and discussions)?
3. What mode of student participation is facilitated by the presentation: active
(students talk, move, and reflect) or passive (students watch and listen)?
4. What type of perspective is provided on the information presented: sequential
(step-by-step progression), or global (context and relevance)?
According to Felder, optimal learning takes place when the learning style of the
student matches the teaching style of the instructor. For example, a student who prefers
the sensing dimension would respond well to an instructor who teaches facts and data,
and a student who prefers the intuitive dimension will respond well to an instructor who
teaches concepts and principles. A student who prefers the sequential dimension will
respond well to an instructor who presents information step-by-step, while a student who
prefers the global dimension will respond well to an instructor who presents information
in the context of the "big picture." The same can be inferred for the other learning style
dimensions. Felder (1993) makes the following recommendations for instructors to
design course work that best appeals to various learning styles:
1. Teach theoretical material by first presenting phenomena and problems that relate
to the theory.
2. Balance conceptual information with concrete information.
3. Make extensive use of sketches, plots, schematics, vector diagrams, computer
graphics, and physical demonstrations in addition to oral and written explanations
in lectures and readings.
4. To illustrate abstract concepts or problem-solving algorithms, use at least some
numerical examples to supplement the usual algebraic examples.
5. Use physical analogies and demonstrations to illustrate the magnitudes of
calculated quantities.
6. Provide class time for students to think about the material being presented and for
active student participation.
7. Demonstrate the logical flow of individual course topics, but also point out
connections between the current material and other relevant material in the same
course, in other courses in the same discipline, in other disciplines, and in
everyday experience.
8. Encourage collaborative learning.
To ensure that the Felder-Silverman learning style model can be used practically, Felder and Soloman (2001) developed the Index of Learning Style (ILS), a psychometric instrument that categorizes an individual’s learning style preferences along the dimensions of the Felder-Silverman model. The ILS is a questionnaire containing 44 questions, 11 for each of the four dimensions of the learning style model. Each question asks the respondent to choose exactly one of two options, where each option represents one pole of that dimension. Because there are an odd number (11) of questions per dimension, the two pole counts can never be equal, so a respondent is always classifiable along each dimension; the score for each pole ranges from 0 to 11. Since there are four dimensions and each dimension has two poles, there are 16 possible combinations, i.e., types of learner, in this model. An example of ILS results is shown below in figure 1.
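To make the scoring mechanics concrete, the following sketch illustrates the counting logic described above. It is only an illustration, not the published ILS scorer: the class and method names are invented, and the assumption that the questions cycle through the four dimensions in a fixed order is a simplification of the actual questionnaire layout.

    /** Illustrative ILS-style scorer: 44 forced-choice questions, 11 per dimension. */
    public class IlsScorer {
        // The four dimensions of the Felder-Silverman model.
        enum Dimension { ACTIVE_REFLECTIVE, SENSING_INTUITIVE, VISUAL_VERBAL, SEQUENTIAL_GLOBAL }

        /**
         * answers[i] is true if the respondent chose option (a) on question i, false for
         * option (b). For simplicity the questions are assumed to cycle through the four
         * dimensions in order (0,1,2,3,0,1,2,3,...), 11 questions per dimension.
         */
        public static int[] score(boolean[] answers) {
            if (answers.length != 44) throw new IllegalArgumentException("ILS has 44 questions");
            int[] aCounts = new int[4];            // option-(a) count per dimension, 0..11
            for (int i = 0; i < answers.length; i++) {
                if (answers[i]) aCounts[i % 4]++;  // option (a) represents the first pole
            }
            return aCounts;                        // 11 - aCounts[d] is the opposite pole's count
        }

        public static void main(String[] args) {
            boolean[] answers = new boolean[44];
            java.util.Arrays.fill(answers, true); // all (a): maximal first-pole preference
            int[] scores = score(answers);
            for (Dimension d : Dimension.values()) {
                System.out.println(d + ": " + scores[d.ordinal()] + " vs " + (11 - scores[d.ordinal()]));
            }
        }
    }

A 9-to-2 split on a dimension indicates a strong preference for one pole, while a 6-to-5 split indicates an almost non-existent one, matching the continuous character of the model described above.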
The Felder-Silverman learning style model has been used by educators in various
ways to help improve engineering education. Richard Felder has used this model and the
ILS to determine the learning styles of his students and has designed his engineering
courses to address all of the different learning styles (Felder, 1996). Longitudinal studies have confirmed that designing and delivering courses that take different learning styles into account significantly improves learning and the overall educational experience of the students (Felder, 1993). At the University of Michigan, multimedia instructional modules were developed for an engineering course based on the learning styles of the students as categorized by the Felder-Silverman learning style model. The ILS has been used at numerous universities to determine the learning styles of engineering students and faculty members (Felder, 2005).
An empirical study was conducted at Open Polytechnic University of New Zealand to determine the differences between the learning styles of computer science students and the teaching styles of their teachers (each inferred from the teacher’s own learning style), using the Felder-Silverman model (Kovacic, 2003). The results showed a significant difference between the students’ learning styles and the teaching styles, and were used to make recommendations to the teachers on adapting their teaching to the various learning styles of their students.
Thomas et al. (2002) used the Felder-Silverman model to conduct a study at the
University of Wales, UK to determine effects of different learning styles on student
performance in an introductory class. The learning styles of 107 students in an
introductory programming course were assessed using the ILS. Their performance on the
exams and programming assignments was analyzed. Two statistically significant differences emerged: reflective learners scored higher on the exam portion than active learners (p = .015), and verbal learners scored higher than visual learners (p = .027).
A small number of verbal and sequential learners scored higher on programming
assignments than global learners. These results confirm Felder’s observation that
engineering education is biased towards reflective, intuitive, verbal and sequential
learning styles. The authors used the results of this study to create course content to
address the needs of learners with different learning styles.
Many Adaptive Hypermedia Systems, such as CS383 (Carver et al., 1999), the system of Bajraktarevic et al. (2003), TANGOW (Paredes & Rodriguez, 2003), and WHURLE (Brown & Brailsford, 2004), use this model to adapt course presentation and sequencing to individual learners.
One of the reasons that the Felder-Silverman model is useful in adapting instruction is
that it only has four dimensions compared to some of the other models that contain
several more (Dunn and Dunn, Gardner’s Multiple Intelligence), which makes it feasible
to implement in an adaptive system. Another advantage of this model is that the
accompanying Index of Learning Style (ILS) questionnaire (Felder & Soloman, 2001)
provides a convenient and practical approach to establish the preferred learning style of
each student. It is simple, easy to use and the results can be linked easily to adaptive
environments.
Before a psychometric instrument can be used to collect data and conclusions can be
drawn from this data, the instrument has to be proven reliable and valid. Reliability
means that it reproduces the same results when used over a period of time. Validity
means that the instrument measures that which it is designed to measure. The Index of
Learning Style (ILS) instrument that is used to categorize an individual’s learning style
along the Felder-Silverman model has been a focus of a number of validation studies.
Litzinger et al. (2005) collected learning style data of 572 students from the liberal arts,
education, and engineering colleges at Penn State University. A Cronbach alpha coefficient was calculated for each of the four scales of the ILS. The Cronbach alpha coefficient measures the degree of correlation among a set of variables; it assesses how well a set of items measures a single one-dimensional construct (e.g., an attitude or a phenomenon). The Cronbach alpha coefficient for the sensing-intuitive (S-N) scale was 0.77; for the visual-verbal (V-V) scale, 0.74; for the active-reflective (A-R) scale, 0.60; and for the sequential-global (S-G) scale, 0.56. Since these values are at or above the minimum acceptable value of 0.50 for this type of instrument (attitude assessment), the ILS was found to be reliable. Results of factor analysis performed on this data, combined with the reliability results, indicated that the ILS was valid and reliable.
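For reference, the Cronbach alpha coefficient for a scale of k items is the standard internal-consistency statistic

    \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_{Y_i}^{2}}{\sigma_{X}^{2}}\right)

where k is the number of items (11 per ILS scale), \sigma_{Y_i}^{2} is the variance of item i across respondents, and \sigma_{X}^{2} is the variance of the total scale score. The coefficient approaches 1 when the items are highly intercorrelated, i.e., when they appear to measure the same underlying construct.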
Zywno (2003) conducted test-retest and Cronbach alpha analyses on learning style data for 557 students and faculty at Ryerson University. The Cronbach alpha coefficient for the sensing-intuitive (S-N) scale was 0.70; for the visual-verbal (V-V) scale, 0.63; for the active-reflective (A-R) scale, 0.60; and for the sequential-global (S-G) scale, 0.53. Based on the Cronbach alpha coefficients and factor analysis, Zywno concluded that the ILS is a reliable and valid instrument.

Livesay et al. (2002), with a sample of 255, and Felder et al. (2005), with a sample of 584, came to the same conclusion: the ILS is reliable and valid. On the other hand, Van Zwanenberg et al. (2000) conducted a study with a sample of 279 and concluded that the ILS has low reliability; the study also raised concerns about its construct validity. According to Zywno (2003), one reason Van Zwanenberg et al. did not find the ILS to be reliable is that they did not perform a test-retest analysis, but instead used the instrument to predict academic performance and failure rate, even though the ILS is designed only to identify a student’s learning style preferences, not to predict performance.
Felder and his colleagues discuss in detail the implications of varying student learning styles and teaching styles in a classroom environment (Felder & Brent, 2005; Felder, 1993, 1996; Felder & Silverman, 1988). They suggest that teachers can effectively engage students in the learning process by using a multi-style approach that appeals to various learning styles rather than favoring a single dimension. Felder also provides guidance to students on how to approach learning and which learning methods would be appropriate for them based on their learning styles. This literature is very helpful in creating materials suitable for each dimension of this learning style model.
More importantly, the ILS has been accepted and used by engineering educators nationally and internationally (Felder, 1996; Rosati, 1999; Smith, Bridge, & Clarke, 2002). The ILS has been translated into many foreign languages, and in 2002 the online ILS website received 100,000 hits (Zywno, 2003). A number of validation studies have also been
conducted on the ILS reliability and construct validity (Felder & Spurlin, 2005; Livesay
et al., 2002; Van Zwanenberg et al., 2000; Zywno, 2003). These studies generally
conclude that the ILS is an acceptable and suitable psychometric assessment tool for
identifying the learning styles of students in engineering and the sciences (Zywno, 2003).
2.3 Application of learning styles in adaptive educational systems
In addition to the AEHS described previously, there have been a number of other adaptive
systems that use learning style as a basis for adaptability. EDUCE (Kelly & Tangney,
2002) is a web-based system that teaches science to middle school age children. EDUCE
uses Gardner’s Multiple Intelligence theory to construct the domain knowledge and the
student model. The knowledge content in the domain model is stored in different forms
corresponding to the intelligences modeled in the system. The student model consists of
two types of Multiple Intelligence student profiles, a static profile that is generated from
the Multiple Intelligence inventory completed by the student and a dynamic profile which
is created by the system based on student behavior and navigation. EDUCE implements
four of the eight intelligences identified by Howard Gardner: Mathematical/logical,
visual/spatial, verbal/linguistic, and musical/rhythmic. The static profile is generated
from MIDAS, a questionnaire that assesses a student’s Multiple Intelligence strengths
and weaknesses. In EDUCE, the student can choose from material that is based on VL
(verbal/linguistic), ML (mathematical/logical), VS (visual/spatial), or MR (musical/
rhythmic) intelligence by clicking on one of the four icons representing these
intelligences. The material based on ML uses numbers, puzzles, logical ordering,
mathematical representations, etc. The VS material contains visual representations such as maps, drawings, and pictures, while VL material mostly concentrates on verbal explanation and narration of procedures. MR material consists of background music, jingles,
mood setting, etc. Students have an option of navigating through the domain material by
themselves or allowing EDUCE to help navigate by presenting appropriate types of
resources based on their dynamic profiles.
Two evaluation studies of EDUCE were conducted with children between the ages of twelve and seventeen, who were given a pre-test before using EDUCE and a post-test afterwards. In EDUCE, students have the option of learning the same concept using multiple resources; for example, a student can learn the same concept using any or all of the resource types (ML, VL, VS, and MR). The students were categorized as high,
medium or low activity based on how many resources they used to learn a given concept.
The study results indicated that matching instructional resources to individual learning
style did not result in any learning gains for the students.
The Arthur system (Gilbert & Han, 1999, 1999a, 2002) assumes four learning
styles (auditory, visual, tactile or a combination) and contains course material to match
each style. Arthur’s designers believe that the one-to-many instruction model of a
traditional classroom is not as effective as the many-to-one instruction model in which
the student could be presented material prepared by different instructors, each catering to
a different learning style. The course content in Arthur is broken down into small
modules. These modules are distributed among different instructors who prepare online
material suited for a given learning style such as auditory, visual, tactile, etc. This
material is then entered into Arthur.
The first time the student uses Arthur the system randomly assigns a module to
him/her. As the student navigates through the course material, the system monitors the
student’s activity. Based on the student’s evaluation of the covered material, the system
determines the student’s learning environment preferences. Arthur uses the mastery
learning approach which states that given enough time and proper instruction, any student
can master any concept (Bloom, 1968). This approach requires that instructional material
be broken down into smaller units which contain several concepts and objectives that the
student needs to master. A student is considered to have mastered a unit if he/she scores
80% or above on the quiz following the unit. In Arthur, if the student scores 80% or
above on the quiz after he/she has learned a concept, the student is then assumed to have
a learning style that is in sync with the teaching style of the instructor who prepared the
module. The student is then presented with the next concept from the same module, but if
he/she scores less than 80% on the quiz, the module from a different instructor is chosen
to teach the same concept to the student. Arthur maintains a student model that captures
student activity consisting of learning path, quiz details, etc. Arthur uses this student
model to guide the system in adapting future course presentation to the individual
student. As Arthur is not based on any particular learning style theory, it does not use any
type of instrument to categorize the student’s learning style.
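The module-switching logic that Arthur applies can be summarized in a short sketch. This is not Arthur’s actual code; the class names, the method signature, and the handling of the 80% threshold are illustrative, following only the description above.

    import java.util.List;

    /** Illustrative sketch of Arthur-style mastery-based module switching. */
    public class MasterySelector {
        static final double MASTERY_THRESHOLD = 0.80;     // 80% quiz score = mastery

        /**
         * Picks the module (instructor version) for the next step. If the student
         * mastered the last concept with the current module, keep that module and move
         * on; otherwise re-teach the same concept from a different instructor's module.
         */
        public static int nextModule(double lastQuizScore, int currentModule, int moduleCount) {
            if (lastQuizScore >= MASTERY_THRESHOLD) {
                return currentModule;                     // teaching style appears in sync: keep it
            }
            return (currentModule + 1) % moduleCount;     // try another instructor's version
        }

        public static void main(String[] args) {
            int module = 0;                               // Arthur assigns the first module randomly
            for (double score : List.of(0.90, 0.60, 0.85)) {
                module = nextModule(score, module, 4);    // assume 4 instructor versions exist
                System.out.println("quiz score " + score + " -> next module " + module);
            }
        }
    }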
Empirical studies were conducted to determine how many students could complete the course while taking mastery-level quizzes after every lesson. The results indicated that the majority of learners (81% of a group of 21) completed the course while attaining mastery of the domain content. These results suggest that providing instructional material that addresses a variety of learning styles is beneficial for students.
Bajraktarevic, Hall and Fullick (2003) performed an empirical evaluation to determine
if matching instructional material to student learning style results in increased learning.
The study consisted of creating multimedia for a two-year geography course based on the
global/sequential dimension of the Felder-Silverman model. For the students who prefer a
global learning style, the material contained things such as tables of content, summaries,
diagrams, overviews of information, etc. The material for the sequential learning style
students contained chunks of information with “forward” and “back” buttons. In the first
part of the study, the students filled out the Index of Learning Style (ILS) questionnaire
that categorized their preferred learning style along the Felder-Silverman learning style
model. After filling out the ILS, the students took a pre-test, navigated through subject hypermedia matched to their learning style, and then took a post-test. In the second part of the study, the students were given a pre-test for another subject, navigated through material that was not matched to their learning style, and took a post-test. The pre-tests and post-tests from both conditions were analyzed: the gain from pre-test to post-test was greater under the matched condition than under the mismatched condition. After statistically analyzing the data, the researchers concluded that students benefit more when the learning material is adapted to their learning styles.
Ford and Chen (2001) also conducted an empirical study, with 73 post-graduate students, to determine the relationship between matched and mismatched learning/teaching styles between instructors and learners. The students were classified as field dependent, field independent, or intermediate (Witkin, 1976), where intermediate refers to individuals who possess some traits of both the field dependent and field independent poles. The instructional material consisted of an HTML tutorial developed in two different formats: depth-first and breadth-first. Depth-first would appeal to field independent students and breadth-first to field dependent students. The students were given a task sheet that required them to create a web page. They completed a pre-test before reviewing the material, created the web page, and then completed a post-test. 15 field independent, 12 field dependent, and 9 intermediate students were assigned to the breadth-first format (matched for the field dependent students); 16 field independent, 12 field dependent, and 9 intermediate students were assigned to the depth-first format (matched for the field independent students). Learning gain was determined by analyzing the pre-test and post-test scores and how many items from the task sheet had been successfully completed. Students achieved significantly higher scores on the post-test than on the pre-test when the instructional material matched their learning style.
As we can see, adaptive educational systems are going beyond adapting only to the knowledge state of the student and are attempting to use learning styles to broaden the scope of adaptability. Even though learning style adaptability does not always result in measurable learning gains, in many instances it does seem to help students gain a better understanding of the domain material. Because the idea of building such systems is relatively new, there are no established methods or standards that developers and designers can follow, which also makes it difficult to determine whether a given system genuinely provides learning style support for various learners. The pedagogical framework that is the focus of this dissertation is being developed to address the need for a definite methodology for integrating learning style into an adaptive educational system, so that developers can create learning style-based systems without starting from scratch.
2.4 Intelligent Tutoring systems and feedback mechanisms
ITSs emerged from the application of artificial intelligence techniques to
computer-aided instruction (CAI) systems, which attempted to individualize instruction using various branching techniques. These systems could only present pre-programmed information, following a fixed path depending on whether the student answered questions correctly or incorrectly. They were incapable of modeling the student’s cognitive processes and domain knowledge; therefore, they could not provide a responsive, individualized learning environment comparable to what a human tutor can offer.
Most intelligent tutoring systems overcome deficiencies of CAI systems by
maintaining an accurate model of the student and using this model to adapt to individual
student needs. Burns and Capps (1988) identify three components of an Intelligent
Tutoring System as the expert module, the student module and the tutoring module. The
expert module encompasses the expert knowledge and problem solving characteristics of
a given domain. The student module captures the student knowledge and understanding
of the domain as well as the problem solving history of the student. The tutoring module
encompasses teaching strategies and essential instructions which it adapts based on the
information stored in the student model.
The driving force behind intelligent tutoring systems research was the desire to create
a learning environment that encompassed effective tutoring techniques possessed by
human tutors. The tutor in an ITS behaves much like a human instructor in a one-on-one
tutoring session, observing the student’s problem solving actions while offering advice
and guidance (Goodman, Soller, Linton, & Gaimari, 1997). A number of successful ITSs
have been built in various domains such as ELM (Weber & Schult, 1998) and LISPITS
(Corbett & Anderson, 1992) for LISP, ANDES (Gertner & VanLehn, 2000) for Physics,
and PAT (Koedinger, 2001) for Algebra. Evaluation of these ITSs shows that just like
human tutors, these systems can help students achieve a better understanding of domain
knowledge efficiently.
These systems are based on different learning theories and consequently use different
strategies to support problem solving. These systems are similar in that they all analyze
student problem solving behavior, identify gaps in student knowledge and provide
feedback to bridge the gaps. Feedback is an essential part of individual tutoring and is
considered an important aspect of learning and instruction. These systems differ in the
philosophy that they employ to provide the content, degree, and timing of feedback. In e-learning systems, feedback is defined as “any message or display that the computer
presents to the learner after a response” (Wager & Wager, 1985). The goal of effective
feedback is to assist learners in identifying their false beliefs, making them aware of their
misconceptions and areas of weakness, and reconstructing their knowledge to help
learners to achieve the underlying learning goals (Mory, 1996).
One important issue in feedback is timing. When is it a good time to intervene if a
student makes an error? The tutor can either provide immediate feedback following an error or delay the feedback and let the student attempt to find and correct the error. Studies conducted in various educational settings have generally found immediate feedback to be more effective than delayed feedback (Kulik & Kulik, 1988).
Corbett and Anderson (2001) conducted a study with LISPITS to determine the
effectiveness of different types of feedback. LISPITS is a Cognitive tutor that teaches
Lisp and is based on the ACT theory of cognition. It uses model tracing to keep track of
how the student solves the problem. The purpose of this tutor is to help students with
homework exercises. For each exercise, the student is given a description of a short
program to write and as the student types in his/her program, the tutor monitors and
offers help if the student makes a mistake. Like PAT (the algebra tutor) and other
cognitive tutors, feedback is provided immediately after an error is detected. The study
consisted of four different versions of LISPITS; in the immediate feedback version, the
students received full or complete feedback immediately following an error. In the flag
error feedback version, the students received feedback that told them only whether their
solution was correct or incorrect. In the demand feedback version, the tutor provided feedback only at the request of the student. And in the no feedback version, students did not receive any feedback at all. On a post-test, performance under all three feedback conditions was better than under no feedback, with no significant differences in effectiveness among the feedback conditions themselves. However, immediate-feedback learners completed their tasks faster than learners in the no-feedback and demand-feedback conditions.
Lewis and Anderson (1985) conducted a study to determine the effects of immediate
feedback versus delayed feedback using an adventure game that required players to go
through a maze of rooms to find treasure. Under the immediate feedback condition,
players were provided feedback right away after they performed an action that would
lead them to a dead end. In the delayed feedback condition, the players did not receive
feedback until they were well on their way down a path that would lead them to a dead
end. In a post-test the immediate feedback students performed much better in choosing
correct actions that would lead them to the treasure than students who received delayed
feedback.
Researchers have come to believe that problem solving should be a guided process.
Immediate feedback helps guide the student to create a correct solution without
excessively floundering and breeding frustration. Furthermore, some researchers also
argue that it is important to address the error while the information that caused said error
is still fresh in the student’s mind. Immediate feedback prevents students from wasting
time on recovering from compounded errors by helping them correct each error as it
happens (Gertner & VanLehn, 2000). Model tracing tutors such as ANDES, PAT, and
LISPITS, all provide immediate feedback and have been found to be very effective in
helping students learn domain knowledge.
Another very important feedback issue is the amount and type of feedback that should be provided to a student who seems to be struggling during problem solving activities. A considerable amount of research has been conducted
to determine the effectiveness of various types of feedback. Feedback can range from
simply telling the student if his/her response is correct or incorrect, to providing hints and
information that guides the student to a correct problem solving action (Dempsey &
Sales, 1993). Kulhavy and Stock (1989) view feedback as consisting of both verification
and elaboration components. The verification component determines if the learner’s
response is correct while the elaboration component provides hints or clues to guide the
learner to the correct response. This type of feedback can point out errors in learner
response, provide correct response options, and provide information that strengthens correct responses, thereby making them memorable (Mason & Bruning, 2001). Feedback that contains both a verification component and an elaboration component is termed elaborate feedback.
feedback have found elaborate feedback to result in larger knowledge gains for the
student (Gilman, 1969; Roper 1977). Most intelligent tutoring systems do provide some
kind of elaborate feedback that consists of telling the student that they made an error and
also providing them with hints to fix the mistake. The following section will discuss
several successful ITSs and the methodology employed to provide feedback to learners.
One of the most successful ITSs is the ANDES tutor in the domain of Physics.
ANDES is a joint project created by the University of Pittsburgh and the US Naval
Academy in 1995. ANDES was built on the “coached problem solving” methodology
(VanLehn, 1996) which teaches cognitive skills through a collaboration between the tutor
and the student during problem solving activity. According to this theory, the student-tutor interaction is based on the progress that the student makes in reaching a solution. As
long as the student keeps on solving problems correctly, the tutor simply agrees with the
student, but as soon as the student makes a mistake, the tutor is able to interject with a
hint to lead the student to the correct path. In order to determine if the student is solving a
problem correctly, ANDES employs the “model tracing” technique, which means that
every student problem solving action is matched to a step in the expert solution. ANDES
creates a “solution graph” of a given problem based on the description entered by a
human expert. This solution graph is a model of the problem solution space, therefore, it
contains all the various alternate solution paths for a given problem. When a student
performs an action, ANDES attempts to find a matching action in the solution graph. If it
cannot find a match, then ANDES has to determine which solution path the student was
trying to follow by using the solution graph and the student model. Once ANDES
identifies the solution path, it is able to figure out what the correct action should be and
then provides the feedback to the student. When a student inputs his/her solution into
ANDES, immediate feedback is provided by changing the color of the entry to green, for
a correct entry, and red, for an incorrect entry. If the student clicks on the red entry or
clicks on the “What’s wrong with this?” optional menu button, ANDES provides a hint to
the student. The student has an option of asking ANDES to explain this hint by clicking
on “Explain this”. If the student is stuck at any point, he/she can ask for help and ANDES
will provide “Procedural help” by first selecting the node that is most relevant to what the
student has been doing and what he/she will most likely want to do next. The feedback template associated with this node is used to generate the hint for the student.
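A minimal sketch of the model-tracing step described above — matching a student action against a solution graph — might look like the following. The data structures and names are illustrative, not ANDES’s actual implementation, and where ANDES infers the student’s intended solution path from the solution graph and the student model, this toy version simply receives the likely intended step as a parameter.

    import java.util.*;

    /** Illustrative sketch of model tracing against a solution graph. */
    public class ModelTracer {
        /** One node of the solution graph: a correct step plus its feedback template. */
        record SolutionNode(String action, String hintTemplate) {}

        private final Map<String, SolutionNode> solutionGraph = new HashMap<>();

        public ModelTracer(List<SolutionNode> expertSteps) {
            for (SolutionNode node : expertSteps) {
                solutionGraph.put(node.action(), node);    // index every correct step by action
            }
        }

        /** Returns "green" for a recognized step, otherwise "red" plus a hint. */
        public String trace(String studentAction, String likelyIntendedAction) {
            if (solutionGraph.containsKey(studentAction)) {
                return "green";                             // matches a correct solution path
            }
            SolutionNode target = solutionGraph.get(likelyIntendedAction);
            String hint = (target != null) ? target.hintTemplate() : "Re-read the problem statement.";
            return "red: " + hint;
        }

        public static void main(String[] args) {
            ModelTracer tracer = new ModelTracer(List.of(
                new SolutionNode("draw-force-vector", "What forces act on the block?"),
                new SolutionNode("write-newtons-2nd-law", "Relate the net force to the acceleration.")));
            System.out.println(tracer.trace("draw-force-vector", null));               // green
            System.out.println(tracer.trace("guess-answer", "write-newtons-2nd-law")); // red + hint
        }
    }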
Evaluations were performed with ANDES every fall at the US Naval Academy from
1999 to 2003. The experimental group consisted of students who used ANDES for
homework, versus the control group, who did not use ANDES and instead did their
homework with paper and pencil. All students were given the same tests and final exam.
The results of the experimental group were significantly higher than the control group.
The results also indicated that students who were working towards humanities degrees
benefited significantly more than the science and engineering majors.
Cognitive tutors are another category of successful tutors; they are based on the Adaptive Control of Thought (ACT*) theory of cognition (Anderson, 1983). This theory
views learning as a process that involves several phases and requires both declarative and
procedural knowledge. The first phase involves learning declarative knowledge,
including factual knowledge (such as theorems in a mathematical domain), which is
represented as chunks. This declarative knowledge is later turned into procedural
knowledge, which is goal-oriented, and therefore more efficient in terms of use.
According to ACT, procedural knowledge cannot be acquired by watching or seeing but
rather through constructive experience. Procedural knowledge is represented in the form
of if-then production rules. These systems also use model tracing to analyze and
understand student problem solving behavior. The tutor solves the problem along with
the student and matches each student input to a rule. If it matches a correct rule, the
student is allowed to proceed; otherwise, the tutor provides immediate feedback to the
student. Cognitive tutors provide immediate feedback upon the first detection of an error to avoid leaving the student floundering, becoming frustrated, and wasting time recovering from compounded errors.
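The production-rule matching that drives this kind of model tracing can be illustrated with a small sketch; the rule format, the sample algebra goal, and all names here are invented for illustration and are not the ACT* production system itself.

    import java.util.List;
    import java.util.function.Predicate;

    /** Illustrative sketch of if-then production rules used for model tracing. */
    public class ProductionMatcher {
        /** An if-then production: a condition on the student's step, flagged correct or buggy. */
        record Production(String name, Predicate<String> condition, boolean buggy, String feedback) {}

        // Toy rule set for the goal "simplify 3(x + 2)"; real tutors hold hundreds of rules.
        private final List<Production> rules = List.of(
            new Production("distribute-correctly", s -> s.equals("3x + 6"), false, ""),
            new Production("forgot-to-distribute", s -> s.equals("3x + 2"), true,
                           "Multiply the 3 by both terms inside the parentheses."));

        /** Matches one student step against the rule set, as a model-tracing tutor would. */
        public String traceStep(String studentInput) {
            for (Production rule : rules) {
                if (rule.condition().test(studentInput)) {
                    return rule.buggy() ? "error: " + rule.feedback() : "ok";  // immediate feedback
                }
            }
            return "error: unrecognized step";   // no production fired at all
        }

        public static void main(String[] args) {
            ProductionMatcher tutor = new ProductionMatcher();
            System.out.println(tutor.traceStep("3x + 6"));  // correct rule fires -> proceed
            System.out.println(tutor.traceStep("3x + 2"));  // buggy rule fires -> immediate feedback
        }
    }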
The Pump Algebra Tutor (PAT) is a very successful Cognitive tutor that teaches
beginning algebra by applying mathematics to solve real life situations in the form of
word problems. PAT is a result of the collaboration between Carnegie-Mellon University
and Pittsburgh public school teachers and is being used in over 100 classrooms in more
than 75 schools around the country, including middle schools, high schools, and colleges.
Evaluation studies conducted with PAT in Pittsburgh and Milwaukee over a three year
period have shown that students who used PAT performed 15% – 25% better on
standardized tests and 50% - 100% better in problem solving than students who were
taught through traditional algebra classes (Koedinger, Anderson, Hadley, & Mark, 1997).
One of the reasons that PAT is successful is that it is based on the Pump curriculum, which was created by CMU researchers and public school teachers who designed it specifically for use with a cognitive tutor. One important point that is stressed in this curriculum is that it is not
enough for the students to know mathematical concepts but that they should also be able
to represent any problem in a mathematical form. In PAT, students are presented with a
problem description which they use to create multiple models of the problem using
graphs, tables and eventually, equations. Then students use these multiple problem
models to answer questions about the problem, which gives them an opportunity to
reflect on their actions. Creating multiple models for the same problem provides the
student with a deeper and broader understanding of the problem because he/she is
looking at it from different angles.
PAT provides two types of feedback: just-in-time, which is immediate feedback
when a student makes an error, or on-demand hints, which are provided on a student’s
request. If the student makes an error, his/her action is matched to a buggy rule, and immediate feedback is provided that tells the student what is wrong with the action or gives a hint on
how to fix it. The rationale behind giving the student just a hint is to encourage the
student to think about his/her action leading to a corrective action. If the students are
given answers too soon, they develop a tendency to rely too much on the tutor and not
think on their own. The immediate feedback following an incorrect action helps avoid
frustration for a struggling student. The on-demand hints are provided when the student
gets stuck and does not know what to do next. Because of the model tracing methodology
that PAT employs, it knows the reasoning process that the student is following and is able
to help the student reach the correct solution. Cognitive tutors provide immediate feedback because it is most effective in the context of the error; if feedback is delayed, the context is lost and the feedback becomes less effective or even useless. Another reason for giving immediate feedback is so that the student will not embark on an erroneous solution path and compound the problem by making additional errors, resulting in wasted time.
Another type of ITS, the constraint-based tutor, provides on-demand feedback. These tutors represent domain knowledge as a set of constraints and provide feedback based on the constraints that the student’s solution violates. The philosophy behind constraint-based tutoring is that all correct solutions are similar in the sense that they do not violate any domain principles. These systems do not care about the sequence of steps the student uses to create the solution; they only care that each step does not violate any domain principle. As a result, these systems have no expert model that generates expert solutions or analyzes student solutions; instead, the student solution is analyzed by applying a set of constraints to it. The student model in these tutors is very simple, consisting only of information such as which constraints the student has violated. The feedback is limited: lacking a full student model and expert solution, these tutors cannot provide step-by-step guidance while the student solves a problem. As mentioned previously, these tutors are based on Ohlsson’s theory of learning from performance errors, which describes learning from errors as a two-step process: recognizing an error and correcting it. This theory claims that people make mistakes performing a learned task because they have not internalized declarative knowledge into procedural knowledge; the sheer number of decisions involved in performing the task causes mistakes, each equivalent to violating a rule in the procedure. It also states that by practicing the task and recognizing the errors, one can improve the procedure by incorporating the correct rule, the one that was violated.
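Constraints in this paradigm are commonly described as a pair of conditions: a relevance condition stating when the constraint applies, and a satisfaction condition that must then hold. The following sketch illustrates that representation; it is not the actual SQL-Tutor or KERMIT code, and the single constraint shown is invented for illustration.

    import java.util.*;
    import java.util.function.Predicate;

    /** Illustrative sketch of constraint-based solution checking. */
    public class ConstraintChecker {
        /**
         * A constraint pairs a relevance condition with a satisfaction condition:
         * whenever the relevance condition holds for a solution, the satisfaction
         * condition must hold too, or the constraint is violated.
         */
        record Constraint(String id, Predicate<String> relevance,
                          Predicate<String> satisfaction, String feedback) {}

        private final List<Constraint> constraints = List.of(
            new Constraint("SELECT_NEEDS_FROM",
                sql -> sql.toUpperCase().contains("SELECT"),
                sql -> sql.toUpperCase().contains("FROM"),
                "A SELECT statement must name the tables it reads in a FROM clause."));

        /** Returns feedback for every violated constraint; the order of steps is never examined. */
        public List<String> check(String studentSolution) {
            List<String> violations = new ArrayList<>();
            for (Constraint c : constraints) {
                if (c.relevance().test(studentSolution) && !c.satisfaction().test(studentSolution)) {
                    violations.add(c.id() + ": " + c.feedback());
                }
            }
            return violations;
        }

        public static void main(String[] args) {
            ConstraintChecker checker = new ConstraintChecker();
            System.out.println(checker.check("SELECT name"));            // one violation
            System.out.println(checker.check("SELECT name FROM staff")); // no violations
        }
    }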
A number of constraint-based tutors have been developed in various domains: SQL-Tutor (Mitrovic & Ohlsson, 1999), a tutor for teaching SQL; KERMIT (Suraweera & Mitrovic, 2002), a tutor for database design; and Collect-UML (Baghaei & Mitrovic, 2005), a tutor for teaching object-oriented design. Evaluation studies performed with these tutors show that students who used these systems experienced learning gains
compared to students who did not use them. All the systems maintain multiple levels of
feedback. The first level of feedback consists of positive and negative feedback (learner
answer correct or incorrect), followed by an error flag that identifies the erroneous clause,
then a hint, a partial solution (corrected learner response), and finally a complete solution
and a list of any and all errors. The first time a constraint is violated, the student is given
positive and/or negative feedback. On the second attempt, the erroneous clause is
identified. On the third try, the student is given a hint. The other three levels of feedback are
available by the request of the student. Unlike the cognitive tutors, these systems can only
provide limited feedback and cannot guide the student to the next step in the solution if
he/she gets stuck. The student is provided feedback upon request or when he/she is
finished constructing the entire solution.
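The escalating feedback levels described above map naturally onto a counter per constraint. The following sketch is illustrative only; the level names follow the list above, and the cap on automatic escalation reflects the description that the deeper levels are given only on request.

    import java.util.HashMap;
    import java.util.Map;

    /** Illustrative sketch of escalating feedback levels in a constraint-based tutor. */
    public class FeedbackEscalator {
        // Feedback levels in the order they are unlocked by repeated violations.
        private static final String[] LEVELS = {
            "positive/negative feedback", "error flag", "hint",
            "partial solution", "complete solution", "list of all errors" };

        private final Map<String, Integer> violations = new HashMap<>();  // count per constraint

        /** 1st violation -> level 0, 2nd -> level 1, 3rd -> level 2; deeper levels only on request. */
        public String feedbackFor(String constraintId, boolean studentRequestedMore) {
            int count = violations.merge(constraintId, 1, Integer::sum);
            int level = Math.min(count - 1, 2);            // automatic escalation stops at "hint"
            if (studentRequestedMore) level = Math.min(level + 1, LEVELS.length - 1);
            return LEVELS[level];
        }

        public static void main(String[] args) {
            FeedbackEscalator fe = new FeedbackEscalator();
            System.out.println(fe.feedbackFor("SELECT_NEEDS_FROM", false)); // positive/negative feedback
            System.out.println(fe.feedbackFor("SELECT_NEEDS_FROM", false)); // error flag
            System.out.println(fe.feedbackFor("SELECT_NEEDS_FROM", false)); // hint
            System.out.println(fe.feedbackFor("SELECT_NEEDS_FROM", true));  // partial solution, on request
        }
    }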
Feedback is a very important part of an intelligent tutoring system because it is an
essential part of the tutoring process. Feedback timing and content play a major role in
making the instructional environment effective for a learner. Most of the successful ITSs
use immediate feedback when a student makes an error, and they provide multiple levels of feedback that help a student get back on track and stay there until reaching a correct solution. The
pedagogical framework discussed in this dissertation also provides immediate feedback
on multiple levels, although there is a slight difference in that it ranks errors as critical
versus non-critical. Immediate feedback is provided for critical errors each time that
particular error is repeated. For non-critical errors, immediate feedback is provided the
first time only. If the student repeats the same non-critical error several times, then he/she
is provided feedback that is basically just a reminder. An example of a non-critical error in object-oriented design would be “NO_JAVADOC”, which means that the student failed to document his/her action. Technically, this is not an error, but documenting a design is certainly good practice, so its omission is treated as an error by DesignFirst-ITS, the ITS in which the pedagogical framework has been implemented. The first time the student does not document his/her action, feedback such as “It is a good idea to document your design...Use Javadoc tab to enter comments” is given. If the student continues to generate the same error multiple times, he/she will be given feedback as a reminder: “You did not document the attributes and methods in your design...Use Javadoc to enter comments.”
The point of labeling errors as critical or non-critical is to ensure that the student is not bogged down with feedback about trivial errors and does not start ignoring feedback altogether. This leaves students sufficient time and energy to focus on the important problem solving activities.
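A minimal sketch of this severity-based throttling might look as follows. The class and method names are illustrative and this is not the actual DesignFirst-ITS implementation; in particular, the framework described above switches to the reminder after several repetitions, whereas this sketch switches on the second occurrence for brevity.

    import java.util.HashMap;
    import java.util.Map;

    /** Illustrative sketch of severity-based feedback throttling. */
    public class FeedbackThrottle {
        enum Severity { CRITICAL, NON_CRITICAL }

        private final Map<String, Integer> occurrences = new HashMap<>();  // per error code

        /**
         * Critical errors always receive full immediate feedback; non-critical errors
         * receive full feedback on the first occurrence only, then a brief reminder.
         */
        public String feedback(String errorCode, Severity severity,
                               String fullMessage, String reminder) {
            int count = occurrences.merge(errorCode, 1, Integer::sum);
            return (severity == Severity.CRITICAL || count == 1) ? fullMessage : reminder;
        }

        public static void main(String[] args) {
            FeedbackThrottle throttle = new FeedbackThrottle();
            String full = "It is a good idea to document your design...Use Javadoc tab to enter comments";
            String reminder = "You did not document the attributes and methods in your design...";
            System.out.println(throttle.feedback("NO_JAVADOC", Severity.NON_CRITICAL, full, reminder));
            System.out.println(throttle.feedback("NO_JAVADOC", Severity.NON_CRITICAL, full, reminder));
        }
    }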
2.5 Pedagogical Modules in ITS
Intelligent tutoring systems have proven to be effective in providing significant
learning gains for students in various instructional domains. One of the major
components of an ITS is the pedagogical module which utilizes various tutoring
strategies and techniques to help students learn domain knowledge. The behavior and
design of a pedagogical module in an ITS depends upon the underlying learning theory
that is its basis. The tutoring systems discussed so far in this document illustrate this: the cognitive tutors are based on the ACT* theory of cognition, the constraint-based tutors on the theory of learning from performance errors, and the ANDES physics tutor on coached problem solving.
Even though all these ITSs are based on different theories, they all provide varying
degrees of individualized instruction to make learning effective for students. Some of the
techniques that these ITSs employ are cognitive student modeling, model tracing,
constraint representation of domain knowledge, and hint sequencing. Other ITSs attempt
to make tutoring more effective by using different techniques such as incorporating
animated agents into the pedagogical module, using natural language to communicate
with students and using different instructional strategies such as learning by example,
learning by teaching, and cooperative learning.
Design-a-Plant (Lester et al., 2001) is an example of a learning environment that uses
an animated pedagogical agent. Pedagogical agents are animated characters that are
designed to facilitate learning in an educational setting. Most of these agents assume the
role of coach by guiding a student in his problem solving activity through various forms
of scaffolding. These agents typically use text, speech, animation, and gestures to
communicate with the student.
For example, the animated agent Herman-the-Bug
inhabits the Design-a-Plant learning environment which teaches middle school students
botanical anatomy and physiology. The students learn domain concepts by actually
designing a plant that will thrive under a given set of natural environmental factors such
as amount of sunlight, water, etc. The students graphically construct plants by choosing
plant parts such as roots, leaves, and stems from a plant structure library. Herman-the-Bug
is a quirky, talkative, churlish insect that is capable of walking, flying, shrinking,
expanding, swimming, fishing, bungee jumping and performing acrobatics. As a student
creates a plant for a given set of environmental conditions, Herman-the-Bug monitors his
problem solving behavior and guides him by giving hints, asking questions, and
explaining the relevant domain concepts. Herman-the-Bug’s lifelike presence is a result
of a coherence-structured-behavior-space framework and a behavior sequence engine
(Stone & Lester, 1996). This framework contains different types of behaviors such as,
manipulative behaviors, visual attending behaviors, re-orientation behaviors, locomotive
behaviors, gestural behaviors, and verbal behaviors. It contains more than 30 animated
behaviors, 160 utterances, and a large library of soundtrack elements. Lester and his
team (Lester, Converse, Stone, Kahler, & Barlow, 1997) conducted an evaluation study
with 100 middle school students who used Design-a-Plant for approximately 2 hours for
a period of eight days. Students were given pre-tests and post-tests before and after
interacting with the environment, respectively. The study results showed significant
improvements on the post-test scores compared to the pre-test scores.
AutoTutor (Graesser, Wiemer-Hastings, Wiemer-Hastings, & Kreuz, 1999; Person,
Graesser, Kreuz, & Pomeroy, 2001) is an example of an ITS that uses natural language to
communicate with students and teach an introductory computer literacy course.
AutoTutor behaves like a human tutor in its attempt to comprehend student interactions
and simulates a dialogue to teach college students fundamentals of computer hardware,
software, and the internet. The user interface has multiple windows with an animated talking head that delivers dialogue through synthesized speech, intonation, facial expressions, nods, and gestures. AutoTutor uses curriculum scripts to generate
questions that require students to articulate and explain their responses. The curriculum
scripts are well defined, structured lesson plans that include important concepts,
questions and problems for a given topic. These scripts also contain topic-relevant
information such as a set of expectations, hints and prompts for each expectation, a set of
common misconceptions and their corrections, and pictures or animations. AutoTutor
begins the tutoring session with a brief introduction followed by a general question about
computers. The student types his/her response into an interaction window using the
keyboard. AutoTutor uses latent semantic analysis (LSA) to evaluate the quality of the
student’s response. It uses results of LSA and fuzzy production rules to determine the
next appropriate dialogue move. AutoTutor contains 12 dialogue moves, which include five forms of immediate short feedback (positive, positive neutral, neutral, negative neutral, and negative), pump (questions that pump the student for information), positive pump, hint, splice, prompt, elaborate, and summarize. The system provides three types of feedback: backchannel feedback, evaluative feedback, and corrective feedback. The backchannel feedback acknowledges the student’s response. Evaluative
pedagogical feedback is based on the correctness of the student’s response and is delivered through facial expression and intonation: for example, a quick nod if the answer is correct, a shake of the head if it is incorrect, or a skeptical look if it is unclear. The corrective feedback helps the student fix his/her errors and misconceptions.
Student errors that are not part of the curriculum script are ignored. AutoTutor evaluation
studies have shown a half letter grade improvement for students that used the system
versus students who reread the relevant chapters in the textbook only (Graesser et al.,
2001).
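The LSA-based evaluation mentioned above ultimately reduces to comparing a vector for the student’s response with vectors for the expected answers. A toy version of that comparison, using cosine similarity over pre-computed vectors, is sketched below; real LSA derives such vectors from a large corpus via singular value decomposition, and the vectors here are invented purely for illustration.

    /** Toy cosine-similarity comparison of the kind LSA-based evaluation relies on. */
    public class SimilarityDemo {
        /** Cosine similarity between two equal-length vectors in a semantic space. */
        static double cosine(double[] a, double[] b) {
            double dot = 0, normA = 0, normB = 0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                normA += a[i] * a[i];
                normB += b[i] * b[i];
            }
            return dot / (Math.sqrt(normA) * Math.sqrt(normB));
        }

        public static void main(String[] args) {
            // Invented 4-dimensional "semantic" vectors; real LSA spaces have hundreds
            // of dimensions.
            double[] expectation = {0.8, 0.1, 0.3, 0.0};   // vector for an expected answer
            double[] goodAnswer  = {0.7, 0.2, 0.4, 0.1};
            double[] offTopic    = {0.0, 0.9, 0.0, 0.8};
            System.out.printf("good answer: %.2f%n", cosine(expectation, goodAnswer));
            System.out.printf("off topic:   %.2f%n", cosine(expectation, offTopic));
        }
    }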
ELM-PE (Weber & Möllenberg, 1995) is an example of an ITS that uses example-based learning to teach students the programming language LISP. ELM-PE uses case-based reasoning (CBR) to store the student’s problem solving history and then uses that history to provide feedback to the student; CBR draws on a collection of previous solutions to similar problems to solve new problems. In ELM-PE, a student starts working on a problem by typing his/her code into a Lisp editor that checks the code for syntax errors. ELM contains domain knowledge in the form of three different types of rules: good, bad (correct but too complicated), and buggy (representing common misconceptions). It applies these rules to generate its own solution, which it then matches against the student’s solution. As the student works on his/her solution, ELM analyzes the student’s
work and generates a derivation tree of rules and concepts that the student used to create
the solution so far. The derivation tree for the entire solution is merged into a single tree that represents the episodic learner model. This episodic learner model serves multiple purposes: to find and show examples and reminders from the student’s own problem solving history; to select exercises appropriate for the student’s knowledge level; and to adapt feedback hints to that knowledge level. The individual problem derivation trees are not stored whole but are divided into snippets; each snippet is merged into the learner model, which is organized by concept.
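A sketch of an episodic learner model organized by concept, as just described, might look like the following; all names are illustrative, not ELM-PE’s actual Lisp implementation, and the retrieval policy (most recent snippet first) is an assumption made for the example.

    import java.util.*;

    /** Illustrative sketch of an episodic learner model organized by concept. */
    public class EpisodicModel {
        /** A snippet: the fragment of a past solution in which a given concept was applied. */
        record Snippet(String problemId, String codeFragment) {}

        // Concept name -> snippets from the student's own problem solving history.
        private final Map<String, List<Snippet>> byConcept = new HashMap<>();

        /** Merges one snippet of a finished derivation tree into the learner model. */
        public void merge(String concept, Snippet snippet) {
            byConcept.computeIfAbsent(concept, k -> new ArrayList<>()).add(snippet);
        }

        /** When the student struggles with a step, retrieve a prior example for its concept. */
        public Optional<Snippet> exampleFor(String concept) {
            List<Snippet> snippets = byConcept.getOrDefault(concept, List.of());
            return snippets.isEmpty() ? Optional.empty()
                                      : Optional.of(snippets.get(snippets.size() - 1));
        }

        public static void main(String[] args) {
            EpisodicModel model = new EpisodicModel();
            model.merge("recursion",
                new Snippet("factorial", "(if (= n 0) 1 (* n (fact (- n 1))))"));
            System.out.println(model.exampleFor("recursion"));  // reminder from the student's history
            System.out.println(model.exampleFor("iteration"));  // Optional.empty: no prior episode
        }
    }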
When the student seems to be having difficulty with a specific step in the problem
solving process, ELM uses the episodic learner model to retrieve the concept that is
applicable to the step and identifies a previously stored problem solution that may be
applied to the concept. The student is provided feedback to guide him/her on how the
concept was applied in creating a solution for a previous problem. The student also has
the option of asking the system for help and viewing examples of Lisp code that were
discussed in the learning material of the Lisp course or Lisp functions that the student
previously coded. Empirical studies were conducted in which one group of students attended a course that used ELM-PE and another group attended a course that used only the Lisp editor part of ELM-PE. On the final exam, 75% of the students who used ELM-PE completed the programming tasks, versus 45% of those who did not (Weber & Möllenberg, 1995). Later, ELM-PE became the basis for
ELM-ART, which is an ITS integrated with a web-based course that helps students learn
LISP using examples and detailed explanations (Weber & Brusilovsky, 2001).
In summary, there are many intelligent tutoring systems in various domains that use
different techniques to make learning more effective for students. Some of these ITSs are
more effective and more widely used in academic settings than others. The methods and
techniques used by various ITSs depend on the target domain, the audience, and the
available resources. The pedagogical module that is the focus of this dissertation is intended to provide problem solving support to high school students and college freshmen who are new to object-oriented design and Java programming. It uses the individual learning style of each student to make instruction maximally effective.
3 PEDAGOGICAL ADVISOR IN DESIGNFIRST-ITS

The learning style based pedagogical framework that is the focus of this dissertation was implemented in the pedagogical advisor component of DesignFirst-ITS. This chapter briefly describes DesignFirst-ITS and the various modes in which the pedagogical advisor operates in this ITS.
DesignFirst-ITS is an intelligent tutoring system that provides one-on-one tutoring
to help beginners in a CS1 course learn object-oriented analysis and design, using
elements of UML (Blank et al., 2005). DesignFirst-ITS is based on a “design-first”
curriculum that teaches students to design a solution and the objects that comprise it
before coding (Moritz & Blank 2005). This curriculum enables the students to understand
the problem without getting bogged down with programming language syntax.
Figure 3-1 DesignFirst-ITS Architecture
DesignFirst-ITS is composed of five distinct components. The Curriculum Information Network (CIN) holds the domain knowledge, i.e., object-oriented design concepts. These concepts are linked together through relationships such as prerequisite and equivalence, and each is assigned a measure of learning difficulty. For example, prerequisite(class: object) states that the concept “object” is a prerequisite of the concept “class”: the student must understand what an object is before he/she can create a class.
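A sketch of such a concept network — concepts linked by prerequisite relations and carrying a difficulty measure — might look like the following; the representation and names are illustrative, not the actual CIN implementation.

    import java.util.*;

    /** Illustrative sketch of a curriculum information network of linked concepts. */
    public class ConceptNetwork {
        /** A domain concept with a learning-difficulty measure and its prerequisites. */
        record Concept(String name, int difficulty, List<String> prerequisites) {}

        private final Map<String, Concept> concepts = new HashMap<>();

        public void add(Concept c) { concepts.put(c.name(), c); }

        /** True if every prerequisite of the named concept is already mastered. */
        public boolean readyToLearn(String name, Set<String> mastered) {
            Concept c = concepts.get(name);
            return c != null && mastered.containsAll(c.prerequisites());
        }

        public static void main(String[] args) {
            ConceptNetwork cin = new ConceptNetwork();
            cin.add(new Concept("object", 1, List.of()));
            cin.add(new Concept("class", 2, List.of("object")));  // prerequisite(class: object)
            System.out.println(cin.readyToLearn("class", Set.of()));          // false
            System.out.println(cin.readyToLearn("class", Set.of("object")));  // true
        }
    }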
The Expert Evaluator (EE), the second component of DesignFirst-ITS, interfaces with a
student through the third component, which is the LehighUML plug-in created for the
Eclipse Integrated Development Environment (IDE). The Eclipse IDE is a Java
development environment that can be extended by integrating plug-ins (software modules)
to provide additional functionality. The LehighUML plug-in for Eclipse IDE allows the
student to create UML class diagrams. As the student designs a solution for a given
problem in the plug-in environment, the EE evaluates each step of the student solution in
the background by comparing it with its own solution, and generates a data packet for a
correct student action and an error packet for an incorrect action. These packets are sent to
the fourth component, the student model (SM). The student model analyzes these packets
to determine the knowledge level of the student for each concept and attempts to find
reasons for student errors (Wei et al., 2005). There could be a number of reasons that a
student makes an error: he/she did not read the problem description carefully; he/she does
not understand one or more prerequisites of the concept that is related to the action; he/she
does not understand the current concept, etc. If the SM determines that the student does not
understand the prerequisites for the current concept, it generates possible reasons for the
student making the error and updates the student model information. The pedagogical
advisor (PA), which is the fifth component, uses the curriculum information network
(CIN), EE data packet, individual learning style profile, and the student model information
to determine the content and the amount of feedback that should be provided to the student.
DesignFirst-ITS is based on a client-server architecture. It has two major
implementation components: the server component and the client component. The server
component contains the EE, SM, and Pedagogical-Tutorial component. The client
component contains the Eclipse Integrated environment with the Lehigh-UML plug-in and
the Pedagogical-Advice component embedded in it. The server component (EE, SM) is
active all the time whether an individual is using the ITS or not. The client component
(LehighUML plug-in, PA) becomes active when the student starts creating his/her solution.
Specifically, PA becomes active only if the student decides to log into the ITS. The student
can use the LehighUML plug-in/Eclipse IDE environment without using the ITS. If the
student decides not to use the system, his/her problem solving behavior is not tracked, nor
is he/she provided any feedback. If the student creates a solution without logging on to the
ITS or works on his/her solution in multiple sessions without logging onto the ITS for all of
them, then the solution (stored on the client) becomes out of sync with the ITS solution
(maintained on the server). So whenever a student logs into the ITS, a sync-up of his/her
solution file from the client is performed with the copy of the solution on the server.
The PA can function in four different modes: hint/advice, tutorial/lesson,
evaluation, and sync-up. The hint/advice mode is a homework-helper mode where the
student is working on his/her problem design with the PA watching in the background. If
a student seems to be struggling, the PA will attempt to help the student by giving hints
that are intended to prompt him/her to think about his/her problem solving actions, leading to a
resolution. The tutorial mode provides the student with more instruction about a given
domain concept or concepts. The evaluation mode enables the student to have his/her solution
evaluated by the ITS. The sync-up mode updates the solution on the server using the
student solution maintained on the client.
The pedagogical advisor performs the following functions in DesignFirst-ITS:
1. It provides immediate feedback for erroneous student actions depending on the
severity of the error.
2. It provides a detailed tutorial to a student for the concepts that the student cannot
apply correctly.
3. It generates an analysis explanation report for the student solution during sync-up
when the student solution from the client is synced with the copy of the solution
on the server. The analysis explanation report is based on the data packets
generated by the EE during the sync-up. This report is presented to the student
before he/she starts working on his/her solution. It provides the student with a
summary of his/her work so far and can serve as a memory refresher to help the
student resume working where he/she left off.
4. It provides feedback/explanations about the student solution in the form of an
evaluation report when the student requests an evaluation of his/her work. The
student can request an evaluation of his/her solutions at any time whether the
solution has been completed or not.
3.1 Feedback
This section will discuss the feedback that is provided in the various modes in which the
pedagogical advisor operates in DesignFirst-ITS. It also includes a brief discussion of
important feedback issues such as timing and content.
3.1.1 Advice/Hint mode
The advice/hint feedback is the most critical feedback because it is time-sensitive and
has to be both concise and precise. It is time-sensitive because it is in response to an
erroneous student action and research indicates that feedback is most effective when
given in the context of the error (Mory, 1996). Let’s assume that a student is creating a
class diagram and student action #3 was an erroneous action and he/she is not provided
feedback until after action #6. Is the feedback still as effective as it would have been if it
was provided after action #3? The answer to this question is tricky; it depends on many
factors, especially the context of the situation. Let’s assume a student has to model an
ATM machine that has three simple behaviors: deposit, withdraw, and show balance.
Instead of modeling the ATM as a class, the student erroneously models one of the
behaviors as a class. If the student does not receive feedback after the erroneous action,
he/she will assume that everything is correct and continue to build on that erroneous
action, straying further and further from the correct solution. Lack of feedback could lead
to serious problems for the student as he/she is not aware of the errors that he/she is
making and the list of misconceptions about the domain concepts keeps growing. In this
case, the student made one critical error that leads to a cascade of errors.
On the other hand, if the student created a class ATM in action #3 but did not
document it, then it is considered an error from a good design principle standpoint, but is
it really a critical error? In DesignFirst-ITS, whenever a student adds an element to the
class diagram, he/she is expected to use the “Javadoc” field to add a comment. If the
student does not add a comment, the “NO-JAVADOC” error is generated. This indicates
that the student is not following good design principles; however, this error will not cause
a multitude of errors that will have to be fixed.
The pedagogical advisor in DesignFirst-ITS handles the problem of providing
feedback on time by categorizing errors as critical or non-critical. DesignFirst-ITS is
designed to teach good design principles, so it would generate an error even if a student
violates a good design rule, such as “documenting the design”. Critical errors are those
that result in more errors, while non-critical errors could be categorized as standalone
errors. Prior research on feedback has shown that human tutors delay the feedback for
non-critical errors (Kulhavy & Stock, 1989).
In the two examples we just saw, mistaking a method for a class is a critical error and
not documenting is a non-critical error. So, for the first error, the PA provides immediate
feedback. For non-critical errors, the pedagogical advisor uses a tunable variable
(feedback frequency) that specifies how often feedback should be provided for
non-critical errors. If the feedback frequency variable is set to one, then feedback is
provided after every non-critical error. If this variable is set to two, then feedback for
non-critical errors is provided after every two non-critical errors.
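As a rough illustration, the frequency policy could be implemented with a simple counter; the following is a sketch with assumed names, not the actual DesignFirst-ITS code:

    // Sketch: decide whether to deliver feedback for a non-critical error.
    // feedbackFrequency is the tunable variable described above.
    class NonCriticalFeedbackPolicy {
        private final int feedbackFrequency; // 1 = every error, 2 = every second error
        private int errorsSinceLastFeedback = 0;

        NonCriticalFeedbackPolicy(int feedbackFrequency) {
            this.feedbackFrequency = feedbackFrequency;
        }

        boolean shouldGiveFeedback() {
            errorsSinceLastFeedback++;
            if (errorsSinceLastFeedback >= feedbackFrequency) {
                errorsSinceLastFeedback = 0;
                return true;
            }
            return false;
        }
    }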
In the advice/hint mode, three different feedback levels are supported in
this pedagogical framework. Level 1 is a simple hint that serves as a gentle reminder to
the student to rethink his/her action while Level 3 feedback is a more detailed explanation
of the concept. The multiple hint strategy is called “hint sequencing” and it refers to a
sequence of hint templates that are used to generate feedback (Gertner et al., 1998). The
first hint is usually very general and as the student continues to need help on a given
concept, the hints become more and more specific. Many of the tutors described in
related work use this methodology to provide feedback. If the student continues to have
trouble with the same concepts after receiving three levels of feedback, then the
pedagogical advisor automatically goes into lesson mode and displays the lesson screen
highlighting concepts that the student needs help with. Here is an example of three
different feedback levels.
Let us assume that the student is confused between two concepts, attributes and
parameters, and the student is a verbal learner. The first level of relevant feedback for this
student would be, “Remember, attributes are used to model characteristics of an object
while parameters are used to pass information to a method.” The second level of
feedback would be, “You seem to be having difficulty with the concepts, attribute and
parameter. Remember an attribute represents a characteristic of an object that persists
through the life of the object. Parameters represent the information that is passed to a
method.” The third level of feedback would then be, “Remember, attributes model
characteristics of an object as data, and this data is available to all methods of a class. A
parameter is used to pass information to a method that the method needs to perform its
action. If information is available in the form of an attribute, then a method does not need
to receive this information through a parameter.”
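Hint sequencing of this kind can be represented as an ordered list of hints per concept; the sketch below is a hypothetical structure that escalates through the three levels and signals when the tutorial should take over:

    import java.util.*;

    // Sketch of hint sequencing: an ordered list of hints for one concept,
    // from general (level 1) to specific (level 3).
    class HintSequence {
        private final List<String> hints; // index 0 = level 1 ... index 2 = level 3
        private int nextLevel = 0;

        HintSequence(List<String> hints) { this.hints = hints; }

        // Returns the next hint, or null once all levels are exhausted, at which
        // point the pedagogical advisor would switch to tutorial/lesson mode.
        String nextHint() {
            return nextLevel < hints.size() ? hints.get(nextLevel++) : null;
        }
    }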
The three levels of feedback provide a student with enough hints to make him/her
aware that he/she is not applying a concept correctly. After the third hint, if the student
still makes an error applying the same concept, then he/she is provided a detailed
explanation of the concept through the object oriented design tutorial.
3.1.2 Tutorial/Lesson mode
The tutorial/lesson mode helps the student learn object oriented design concepts by
explaining the concepts in detail. The tutorial mode is designed to be used either with the
advice/hint mode or as a standalone object oriented design concept tutorial. When it is
used with the advice/hint mode, the pedagogical advisor initiates tutorial mode if a
student has received all three levels of feedback for a given concept while creating a class
diagram in Eclipse IDE/Lehigh-UML plug-in. Once in tutorial mode, the student will see
detailed feedback/explanations about the concept.
When the tutorial mode is used as a standalone object oriented design concept
tutorial, it requires the student to log into the system. The student login is used to retrieve
the student’s learning style from the database for adapting concept feedback/explanation
to his/her learning style.
3.1.3 Sync-mode
When a student initially logs into DesignFirst-ITS to work on an existing solution, the
EE performs a sync-up of the student solution on the client machine with the student
solution on the server. During the sync-up, the EE analyzes the solution and generates
data packets for each student action. The PA processes these packets and generates an
analysis explanation report that tells the student what is wrong with the solution and gives
hints to help fix it.
3.1.4 Evaluation mode
During the problem solving process, the student can click on the ‘evaluate’ button at
any time to get the solution evaluated. The EE analyzes the solution and generates data
packets which the PA uses to generate an evaluation report of the student’s solution. It
explains the errors that the student made and provides hints on ways to fix them.
4 LEARNING STYLE BASED PEDAGOGICAL FRAMEWORK
This chapter will describe the learning style based pedagogical framework. This
framework consists of a feedback architecture and a feedback generation process. The
feedback architecture consists of various components that are based on the
Felder-Silverman learning style model. The feedback generation process uses these components
to create feedback that matches a student’s learning style. This chapter discusses the
feedback architecture and the feedback generation process in detail.
4.1 Feedback architecture
The learning style based feedback architecture consists of eight different types of
components that correspond to various dimensions of the Felder-Silverman model. The
Felder-Silverman learning style model dimensions and the feedback components are
discussed in detail in the following section.
4.1.1 Felder-Silverman learning dimensions/feedback types
The Felder-Silverman Learning Style Model categorizes a student’s learning style on a
sliding scale of four dimensions: sensing/intuitive, visual/verbal, active/reflective and
sequential/global. Table 2 lists the dimensions and appropriate feedback type for each
dimension. The information listed in this table was used as a guide to create the feedback
architecture.
Learning Style Dimensions    Feedback Type
Active                       Hands on, likes to work in a group
Reflective                   Likes to think before trying, works alone
Verbal                       Words, written/spoken
Visual                       Diagrams, maps, flowcharts
Sensor                       Facts, concrete, procedures, practical
Intuitive                    Innovative, theories, meanings, relationships
Global                       Big picture, large steps
Sequential                   Orderly, linear, incremental

Table 2 – Felder-Silverman model dimensions / learning preferences
4.1.2 Feedback components
The feedback architecture consists of eight different types of components that contain
information suitable for different types of learners. Each of these components has a set of
attributes that identify the component and its content. The following section describes
these components in detail.
1. Definition – This component contains definitions of domain concepts in
textual form and is used while introducing a concept. An example of this
component would be, “Attributes are characteristics of an object that persist
through the life of that object.” This component is mostly used for the verbal
learner who likes to see information in the form of text.
2. Example – This component contains examples that illustrate a given concept.
This component can be used for almost any learning style, especially the
sensor style which prefers a practical approach to concepts. An example of
feedback in this component might be, “Attributes of a car might be its color,
model, make, etc”.
3. Question – This component contains questions that could serve as hints
during the advice mode. There are two different types of questions: closed-ended
questions, which require a learner to simply answer yes or no or provide a factual
answer, and open-ended questions, which require a student to think about his/her
problem solving behavior. This component is important in making the learner think
about his/her problem solving actions. The question component is very important for
a reflective type learner, as it gently nudges him/her to reflect on his/her action. It
can also be useful for intuitive, global, and sequential learners, as the open-ended
questions can lead to thought processes regarding the relationships between different
concepts/steps, the big picture, and the steps involved in creating the solution. An
example of a closed-ended question in this component might be, “Is the correct data
type to represent money a double?” and an example of an open-ended question would
be, “Why did you set the data type for money to string?” Open-ended questions are
beneficial because they make the student think about his/her reasoning process.
4. Scaffold – This particular component is helpful for nudging a learner who
might be lost towards a correct solution and pointing him/her in the right
direction. Many times it is not enough to tell a novice that an action is
incorrect; one should also guide him/her to where the right answer can be found.
An example of this component would be something like, “Read the
problem description carefully to determine the attributes for your class.”
5. Picture – This particular component consists of pictures, images,
multimedia, animation, or video and is intended primarily for the visual learner. It is
used to visually explain/illustrate a concept. This component is also useful for
global learners as it allows them to view the big picture.
6. Exercise – This component is used for the active learner who likes to learn
through hands-on activity or by applying concepts. This component is used in
the tutorial mode.
7. Relationships – The relationship component contains information that helps a
learner understand how a concept fits into the overall problem solving
activity. Often learners understand a concept but have a difficult time
determining how it fits into the context of the problem. For example, a
student might understand what attributes and methods are but might not know
the relationship between the two in the context of the problem. This
component is mainly useful for global and intuitive learners who like to know
the big picture first.
8. Application – This component contains information about a concept that
extends beyond the definition and is useful in telling the student how the
concept is applied. For example, a student might know the definition of a
constructor but might not know that a class could have multiple constructors.
This component is mostly suitable for sensor learners because they like
practical application of concepts.
4.1.3 Feedback component attributes
In order for the feedback generation process (FGP) to choose components that can be
used to create feedback to help the student fix errors, the FGP must have detailed information
about each component such as component type, presentation mode, concept it applies to,
feedback content, etc. This type of information about each component is stored in the
form of its attributes. These attributes provide the FGP with enough information to select
the appropriate components that can be used to generate feedback for a given situation.
These attributes are described in the following section in detail.
1. Concept – Each component applies to a given concept from the curriculum
information network. This attribute specifies the concept the feedback
component applies to. Some examples of concepts for object oriented design
are attribute, method, parameter, and data type.
2. Related concept – Related concept is also a concept from the curriculum
information network (domain concepts) and is used in conjunction with the
concept attribute to create feedback at a granular level. A given concept could
be very broad and a student might not understand different aspects/parts of it.
For example, in object oriented design a method is a behavior of an object and
a parameter is used to provide information to a method. If a student makes
errors that indicate that he/she does not understand the concept of a method,
then a component whose concept and related concept both have the value
‘method’ will be used. If instead the student’s errors indicate that he/she does
seem to know the concept of a method but is struggling with the concept of a
parameter, then a feedback component that has the concept set to ‘method’ and
the related concept set to ‘parameter’ will be used.
3. Level – The advice mode supports three levels of feedback that use
components that vary in detail. Each component is assigned a numeric
feedback level ranging from 1 to 3 at the time it is created. Level 1 components
contain information that introduces a concept while level 2 and 3 components
contain information that further explains the concept. The level attribute of a
component is used during the selection process while creating feedback.
4. Type – This attribute specifies whether the component is of type definition, picture,
example, relationship, application, question, exercise, or scaffold.
5. Presentation mode – This attribute is used to specify the form in which
information is contained in a component. Information can be presented in the
form of text or images. By default, the verbal component has the presentation
mode “verbal” while the picture component has the presentation mode “visual”
but other components could have either verbal or visual presentation mode.
6. Content – This attribute is used to store the name of the file that contains visual
content for a component. If a component is of type definition or if the
presentation mode of a component is “verbal,” the value of this attribute
would be the keyword “text.” If a component is of type “picture” or the
presentation mode of the component is “visual,” this attribute would contain
the name of the image/multimedia file that illustrates/explains feedback.
7. Category – All components are categorized into four different categories that
are based on the four Felder-Silverman learning style model dimensions. These
four categories are: VV (visual/verbal), AR (active/reflective), SI
(sensor/intuitive), and GS (global/sequential). Components are assigned a
category based on the dimension they support. For example, the picture and
definition components are both assigned the VV category.
8. Text – This attribute contains the feedback string for the concept that the component
applies to. For example, if we have a verbal component for the concept
attribute and related concept datatype, the text could be, “The datatype of an
attribute determines the type of data the attribute represents.” If we have a
component that visually explains the datatype of an attribute, the text could be,
“Look at the relationship between attribute and its datatype.” This attribute is
optional for a component of type picture or for a component that has its
presentation mode set to visual.
9. Usability factor – The purpose of this attribute is to ensure that feedback
components are not over- or under-utilized. Each component is assigned a
usability factor of 1 at the time of its creation. The value of this factor
decreases every time it is used to generate feedback for a given concept/related
concept. The updated usability factor for each component is maintained in the
component usage history record for each user.
10. Dimension – This attribute specifies if a component satisfies the
global/sequential dimension in addition to its primary dimension for which it is
designed. Feedback components can be designed in such a way that they can
apply to two different dimensions; the dimension they are designed for and the
global/sequential dimension. Let us assume there is a picture component that
describes the concept of datatype in relation to the concepts attribute and class.
This component would apply not only to the visual dimension, but also to the
global dimension. The value for the dimension attribute would be global. If the
picture component only explains the datatype concept, then value for the
dimension attribute would be sequential and the component would apply to
dimensions visual and sequential.
11. Component status – A component can be turned on or off by setting the value
of this attribute to ‘active’ or ‘inactive’. Only components that are active are
used for generating feedback.
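Collecting the eleven attributes above, a feedback component could be modeled as a simple data class. The field names mirror the attributes just described, but the class itself is an illustrative sketch rather than the framework's actual implementation:

    // Sketch: a feedback component with the eleven attributes described above.
    class FeedbackComponent {
        String concept;           // e.g. "attribute"
        String relatedConcept;    // e.g. "datatype"
        int level;                // 1, 2, or 3
        String type;              // def, pic, ques, appl, scaf, rel, exer, exam
        String presentationMode;  // "verbal" or "visual"
        String content;           // "text", or an image/multimedia file name
        String category;          // VV, AR, SI, or GS
        String text;              // feedback string (optional for visual components)
        double usabilityFactor;   // starts at 1, decreases by 0.1 with each use
        String dimension;         // "sequential", "global", or "*"
        boolean active;           // component status: only active components are used
    }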
Figure 4-1 displays the feedback components graphically.
[Figure omitted: a feedback component and its attributes: concept (domain concept), related concept (domain/prerequisite concept), level (1, 2, 3), type (def, pic, ques, appl, scaf, rel, exer, exam), category (VV / SI / AR / GS), presentation mode (visual/verbal), content (“text” or visual file name), text (feedback string), usability factor (value = 1), dimension (sequential and/or global), and component status (active/inactive).]
Figure 4-1 Feedback component attributes
Example of component attributes:

Concept: attribute
Related concept: attribute
Level: 1
Type: def
Category: VV
Content: text
Status: active
Text: An attribute is a characteristic of an object.
Usability factor: 1
Dimension: sequential
Presentation mode: verbal

Figure 4-2 Definition component

Concept: attribute
Related concept:
Level: 1
Type: pic
Category: VV
Text:
Content: attribute.jpeg
Status: active
Usability factor: 1
Dimension: global
Presentation mode: visual

Figure 4-3 Picture component
4.1.4 Feedback components/learning styles dimension mapping
Table 3 shows the mapping between the Felder-Silverman learning style model
dimensions and feedback components. This mapping is used to select component types
that match a student’s learning style.
Learning Style Dimensions    Feedback Components
Verbal                       Definition
Visual                       Picture
Sensor                       Application, example
Intuitive                    Relationship, scaffold
Active                       Exercise
Reflective                   Question

Table 3 – Learning style dimension and feedback component mapping
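Expressed as code, Table 3 is a straightforward lookup table; the following sketch uses the component type abbreviations from section 4.1.3 and is illustrative only:

    import java.util.*;

    // Sketch: the Table 3 mapping from learning style dimension to the
    // feedback component types that serve it.
    class DimensionComponentMap {
        static final Map<String, List<String>> COMPONENTS_BY_DIMENSION = Map.of(
            "verbal",     List.of("def"),
            "visual",     List.of("pic"),
            "sensor",     List.of("appl", "exam"),
            "intuitive",  List.of("rel", "scaf"),
            "active",     List.of("exer"),
            "reflective", List.of("ques"));
    }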
4.2 Feedback Generation Process
As mentioned previously, the learning style pedagogical framework consists of a
feedback architecture, which is comprised of various types of feedback components, and
the feedback generation process. The previous section discussed the feedback
components and their relationship to Felder-Silverman learning style model dimensions
in detail. This section will concentrate on the feedback generation process including the
inputs, selection, and assembly of feedback components.
The feedback components contain feedback at the atomic level. Each component is
designed such that it is a complete feedback unit and does not need to be combined with
other components to make sense. This feature of the feedback components makes it easier
to create/maintain feedback components as well as to assemble them. For example,
figure 4-4 shows a component of type “picture” for the concept “datatype.”
This component contains enough information that it can be used as a single feedback
unit.
Figure 4-4 Datatypes
Figure 4-5 shows an example of another feedback unit that explains the concept of
attribute in detail.
Figure 4-5 Attributes
The Feedback Generation Process (FGP) creates feedback by combining multiple
components that are selected based on certain inputs. These inputs come from different
sources, such as the data packet generated by the Expert Evaluator, the feedback history,
the student model information, the curriculum information network, and student learning
style. Figure 4-6 shows the FGP process and the inputs. This section describes these
inputs in detail.
Figure 4-6 Feedback generation process
4.2.1 Feedback Generation Process Inputs
The following information serves as input to the Feedback Generation Process.
1. Concept – The concept that the student needs help with. It could be any
domain concept that is in the curriculum information network, for example
attribute, method, etc.
2. Action – The incorrect problem solving action performed by the student. Each
concept is mapped to multiple problem solving actions that the student can
perform. For example, in OO design, a student can add attributes, delete
attributes, modify attributes, etc.
3. Error – The error that the student made while performing the above action. Each
action in the domain knowledge is cross-referenced with a list of possible
errors. For example, while adding an attribute, the student used the wrong data
type.
4. SM – The student model processes the error packet and generates possible
reasons for the student’s erroneous action.
5. LS – Before the student uses the ITS, he/she fills out the Index of Learning Style
survey online, which categorizes the preferred learning style of the student and
logs it into the database. The learning style of the student is the key to
generating learning style based feedback.
6. Feedback history – The student can receive up to three levels of feedback. The
feedback history record is used to set the current level of feedback that the
student should receive.
7. Used components – The component usage history is used to keep track of
components that have been used so far and how often they have been used to
provide feedback for the given concept/related concept to the user.
4.2.2 Selection Process
Creating feedback consists of identifying and selecting feedback components and
assembling the contents of these components to generate feedback. This process can be
divided into two parts: the selection process and the assembly process. The selection
process selects the components based on the inputs described in the previous section; the
assembly process assembles these components into feedback. The following is the
step-by-step process for selecting feedback components.
1. Use the concept, student model reasons, and curriculum information network to
choose the related concept. The concept and related concept are used as an
index into feedback components. The related concept is either set to reasons
generated by the student model or it is extracted from the expert evaluator
data packet if student model reasons are not available.
2. Set the feedback level for the given concept using feedback history. For
example, if the feedback history shows that the learner already received
feedback level 2 for the concept of attribute, then the current feedback level
is set to 3.
3. Use the student learning style to set the content display mode (verbal, visual)
for the feedback. The content display mode for a student who prefers verbal
is set to verbal but the content display mode for a visual student is set to both
verbal and visual because the feedback for a visual student consists of visual
and verbal content. If the content display mode is set to verbal, then only
components that have the presentation mode attribute set to verbal can be
used to generate feedback. If the content display mode is set to visual/verbal,
then components that have their presentation mode attribute set to either
verbal or visual can be used in the feedback generation process.
4. Use the learner’s learning style to set the type of components that apply. A
cross-reference of components and learning style dimensions is used to set
the components types that can be used for a learner with a particular learning
style. Let us assume that a learner is a reflective/visual/sensor/global type of
learner; the types of components that will be used for this type of learner
would be picture, question, application, and example.
5. Use the concept and related concept to identify components that can be used
to generate feedback. This step takes the concept and the related concept that
the student is having problems with and matches that against the concept and
related concept attributes of the feedback components. This step eliminates
all components that do not apply to this concept/related concept.
6. Eliminate all components that are marked inactive. A component can be
marked active or inactive. If a component is marked active, it can be used in
the feedback generation process. If it is marked inactive, it is turned off and
cannot be used in the feedback generation process.
7. Eliminate any components whose presentation mode does not match the
content display mode. For example, if the content display mode is verbal,
then all components that have presentation modes of visual will be
eliminated.
8. From step 7, eliminate components that do not apply to the current feedback
level.
9. For each identified component, set the usability factor using the
component usage history. The component usage history records the number
of times a component is used for generating feedback for any given concept
for a learner. Initially, each component is assigned a usability of 1 but each
time a component is used, its usability decreases by .1. The usability factor is
used to ensure that the same components are not used repetitively, making the
feedback redundant and boring.
10. Eliminate any components that do not apply to global/sequential dimension
of student’s learning style. A component can contain information that
presents a given concept as a standalone concept, which appeals to sequential
learners or it can present a concept by relating it to other terms, which
appeals to global learners. A component containing feedback for sequential
learners has its dimension set to sequential; while component containing
information appealing to global learners has its dimension set to global. A
component that contains information that can be used for either the sequential
or the global learner has its dimension set to ‘*’. If the student’s learning
style is global, any component that has the dimension attribute set to
sequential will be eliminated. Any components that have their dimension
attribute set to global or ‘*’ would remain on the list.
11. Rank the remaining components by usability factors within each component
type. The fewer times a component has been used, the higher its usability
factor.
12. Choose one component from each component type with the highest usability
factor. Create a list of selected components such that the primary component
is at the top of the list. The picture component is designated as the primary
component for visual learners and the definition component is designated as
the primary component for verbal learners. At this point the list contains a
component from each component type (component types that match the
learning style of the student) with the highest usability factor.
13. Rank the selected components based on the usability factor of each
component. This will result in the least used components being on the top of
the list except for the primary component which still ranks the highest. This
step is required because each feedback level has a limitation on how many
components can be used to create the feedback. For example, for the first
level of feedback, only two components can be used to generate feedback.
The two components with the highest usability factors will be used to
generate feedback.
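Read as code, steps 5 through 13 amount to a chain of filters followed by a ranking. The sketch below reuses the FeedbackComponent fields sketched in section 4.1.3; it omits step 9 (loading usability factors from the usage history) and the primary-component rule of step 12 for brevity, and is an illustration, not the actual implementation:

    import java.util.*;
    import java.util.stream.*;

    // Sketch of the selection process (steps 5-13) as a filter-and-rank pipeline.
    class SelectionProcess {
        List<FeedbackComponent> selectComponents(
                List<FeedbackComponent> all, String concept, String relatedConcept,
                int level,
                Set<String> allowedModes,        // from the content display mode (step 3)
                Set<String> allowedTypes,        // from the learning style (step 4)
                Set<String> allowedDimensions) { // e.g. {"global", "*"} for a global learner

            Map<String, FeedbackComponent> bestPerType = new HashMap<>();
            all.stream()
               .filter(c -> c.concept.equals(concept)
                         && c.relatedConcept.equals(relatedConcept))   // step 5
               .filter(c -> c.active)                                  // step 6
               .filter(c -> allowedModes.contains(c.presentationMode)) // step 7
               .filter(c -> c.level == level)                          // step 8
               .filter(c -> allowedDimensions.contains(c.dimension))   // step 10
               .filter(c -> allowedTypes.contains(c.type))
               .forEach(c -> bestPerType.merge(c.type, c,              // step 12
                    (a, b) -> a.usabilityFactor >= b.usabilityFactor ? a : b));

            // Steps 11 and 13: rank by usability factor, least-used (highest) first.
            return bestPerType.values().stream()
                .sorted(Comparator.comparingDouble(
                            (FeedbackComponent c) -> c.usabilityFactor).reversed())
                .collect(Collectors.toList());
        }
    }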
4.2.3 Assembly process
The assembly process not only generates feedback from the selected components
but also creates a complete feedback message that is presented to the student. Each
feedback message consists of the following three parts: student action explanation,
action status, and relevant concept feedback. Figure 4-7 graphically displays the
feedback message.
[Figure omitted: the feedback message is composed of three parts: student action explanation, action status, and relevant concept feedback.]
Figure 4-7 Feedback message
The first part, the student action explanation, simply articulates to the student the
action that he/she just performed. It is very helpful for novices because it helps them to
reflect on their actions and reinforces the object-oriented design vocabulary. Let us assume
that a student is modeling an ATM machine and creates a class called ATM. Then the
student enters “pin” as a method to the ATM class and makes an error. This is an error
because in object oriented design a method models the behavior of an object and “pin”
is not a behavior of the ATM machine. The user response explanation would be, “You just added method pin
to the class ATM”. This feedback would help the student remember that a behavior of a
class is referred to as a method. Another example would be when the student chooses the
datatype of “String” for the method returnChange, which is also an error because the method
returnChange should have the returntype ‘double’. The user response explanation would
be, “You set the returntype for the method returnChange as a String.” In this example, the
intent is to reinforce the relationship between a method and returntype.
The second part of the feedback message, the action status, simply tells the student whether
his/her action is correct. The third part, the relevant concept feedback, is
generated from the selected components (see the previous section) and explains the concept that the
student is having difficulty applying. The following is the step-by-step assembly process.
1. Use the concept and action to get a phrase from the phrase library to create the
student action explanation part of the feedback message. The phrase library is simply a
cross-reference of concept/action pairs and English phrases that are used to explain
the action. Make substitutions using student input data. Figure 4-8 shows the
substitution process.
Example: student adds a method to a class.
Phrase: “You just added the method %studentaction% to class %class%.”
“You just added the method withDraw() to class ATM.”
[Figure omitted: the concept, action, and student data from the error packet are used to get a phrase from the phrase library; substitutions are then made with the student data to produce the user response explanation.]
Figure 4-8 Substitution process
2. Use the error code to retrieve a phrase for the action status part of the
feedback. Each error code is cross-referenced with a phrase that can be used to
explain to the learner if his/her action is correct or incorrect. Some examples
of phrases are, “The data type is not correct” or, “I cannot identify this
class.”
3. Generate the relevant concept feedback from the list of selected components
by the following:
i. Choose the components from the selected component list based on
how many are allowed by the current feedback level. Each feedback
level can only have a certain number of components that can be used
to create a feedback message. The level 1 feedback message can have
two components; level 2 can have three components; and level 3 can have
four components.
ii. If the components that were chosen in the previous step have more
than one visual component in them, keep the primary component and
drop the other visual component. Go back to the previous step (i) and
choose another component that is not visual.
4. Create the feedback message by concatenating the student action explanation
phrase, student action status phrase and the content of the selected
components.
5. Generate a feedback history record and update the component usage record.
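A sketch of the assembly process follows, with the phrase substitution of step 1 done by simple string replacement; the method and variable names are hypothetical:

    import java.util.*;

    // Sketch of the assembly process: build the three-part feedback message.
    class FeedbackAssembler {
        // Step 1: fill a phrase-library template with the student's own data,
        // e.g. "You just added the method %studentaction% to class %class%."
        static String substitute(String template, Map<String, String> studentData) {
            String result = template;
            for (Map.Entry<String, String> e : studentData.entrySet()) {
                result = result.replace("%" + e.getKey() + "%", e.getValue());
            }
            return result;
        }

        // Step 4: concatenate the action explanation, the action status, and the
        // text of the selected components (already limited by the feedback level).
        static String assemble(String actionExplanation, String actionStatus,
                               List<FeedbackComponent> selected) {
            StringBuilder message = new StringBuilder(actionExplanation)
                .append(" ").append(actionStatus);
            for (FeedbackComponent c : selected) {
                message.append(" ").append(c.text);
            }
            return message.toString();
        }
    }

    // Usage of step 1:
    //   substitute("You just added the method %studentaction% to class %class%.",
    //              Map.of("studentaction", "withDraw()", "class", "ATM"));
    //   -> "You just added the method withDraw() to class ATM."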
So far, the feedback generation process that has been described is used to create
feedback for the advice/hint mode. The process that creates the tutorial mode
feedback/explanations is very similar to the advice/hint mode feedback generation
process with a few exceptions. The tutorial mode feedback generation process skips step
1 (choosing the related concept), step 2 (setting the current feedback level), and step 8
(eliminating components based on the feedback level) of the advice mode feedback
generation process. Since the concept/related concept information is passed to the tutorial
mode when it is initiated by the pedagogical advisor, there is no need for the first step.
The tutorial mode does not use any feedback levels so it does not need to perform steps 2
and 8, which deal with feedback level. Therefore, it does not have the limitation on the
number of components that can be used for explanations. All components that are on the
selected component list are used to explain/illustrate a given concept.
4.2.4 Learning Style Feedback
The pedagogical advisor uses the feedback components that match an individual’s
learning style while creating the feedback. The visual/verbal dimension of the learning
style model plays the most important role in creating learning style feedback because it
dictates the presentation style of the information to the learner. A visual learner relies
heavily on diagrams, pictures, and visual images; while a verbal learner prefers verbal
explanation of concepts.
Creating feedback for a verbal learner is straightforward
because it only requires written explanations but creating feedback for a visual learner
could be tricky. It is not always easy to transfer information using visual presentation
only. In most cases, visual content is interlaced with written explanations to make it more
effective. One has to analyze the information content carefully to determine which part of
content can be represented visually and which part can be in the form of written
explanations. As a simple example, let us assume that a verbal learner did not document
his/her action. The feedback could be, “You did not document your design. Use Javadoc
to document your design.” It was very easy to come up with this feedback. When the
same feedback has to be designed for a visual learner, one has to be a little more creative
to come up with it. Figure 4-9 shows some examples of feedback designed for a visual
learner. Nevertheless, it is a difficult task for a feedback designer to determine the
balance of visual and verbal content for a visual presentation.
Figure 4-9 Visual feedback examples
The second important dimension of the learning style model is the
global/sequential dimension because it determines if a given concept should be explained
in the context of another relevant concept or if it should be presented as a standalone.
Global learners like to learn concepts in relation to other concepts. They have a need to
understand how a concept relates to other relevant concepts that they are learning. The
sequential learners prefer to learn the concept by itself and do not require any contextual
information. Sequential learners tend to get confused when presented with too much
detail because they do best when they concentrate on grasping one concept at a time. For
example, the feedback for the concept of parameter for a visual/sequential learner might
look something like figure 4-10.
Figure 4-10 Visual/sequential
The feedback is appropriate for a sequential learner because it only deals with the
concept of parameter. Figure 4-11 shows feedback for the same concept for a global/visual
learner.
Figure 4-11 Visual/global
The global learner would like this feedback because it not only explains the
concept of parameter but it also makes a connection with other relevant concepts of
method and datatype. The visual/verbal and global/sequential dimensions are used
together to create a content list for tutorial mode. When tutorial mode is initiated, the left
panel shows the learner the concepts that are covered in the tutorial. This tutorial concept
index is generated based on the visual/verbal and global/sequential dimensions. The
following section illustrates the concept presentations that are used in tutorial mode for
different types of learners.
Figure 4-12 Visual/global
Figure 4-13 Visual/sequential
Figure 4-12 shows the tutorial concept index for a visual/global learner who
prefers visual representation and likes to know how concepts are related to each other.
This particular representation provides an overview of the tutorial concept and how each
concept relates to others. Figure 4-13 represents the tutorial concept index for a
visual/sequential learner. This type of learner prefers to see each concept separately and
does not need to know relationships among different concepts. This representation
highlights the concepts but does not confuse the learner by providing extra information.
Figure 4-14 Verbal/sequential
Figure 4-15 Verbal/global
The tutorial concept index in figure 4-14 is designed for a verbal/sequential
learner who prefers written explanations and does not care about the relationship among
concepts. The tutorial concept index illustrated in figure 4-15 is targeted to a
verbal/global learner who prefers written explanations but likes to view the relationships
among concepts. The bulleted lists help accomplish the goal of showing the verbal/global
learner sub-concepts under a given concept.
The intuitive/sensor dimension of the learning style model is used for providing
feedback that contains concrete facts or abstract concepts in the form of
examples/questions. Feedback for a verbal/sensor learner contains a concrete example in
the form of an explanation, and would be something like this: “If you model an object
person, some of its attributes would be hair color, height, eye color, age, etc.”
Figure 4-16 shows the same example but for a visual/sensor learner who likes
examples in visual form.
Figure 4-16 Visual/sensor
The feedback for the intuitive learner consists of information about concepts. For
example, “All objects have many characteristics. Only the characteristics relevant to the
problem description are modeled as attributes.”
The last dimension that is considered for the feedback is the active/reflective type
learner. The reflective learner likes to think solutions out in his/her mind, while the active
learner likes to learn with hands-on activities. The reflective learner responds well to
questions because he/she likes to think about concepts before applying them. The
questions are in verbal form such as, “Should “numOfTickets” be a behavior of your
class or should it be an attribute?” This type of verbal feedback is used for both the
verbal/reflective and visual/reflective type learners.
The active learner prefers to learn by active involvement in hands-on activity or by
applying concepts. This type of learner does not learn by just reading or visually seeing
information. The tutorial mode provides an active learner with hands-on activities to help
them grasp a given concept. For example, if a visual/active learner is learning the
concept of string, he/she will see figure 4-17, which shows a simple example of a
hands-on activity where the learner clicks on the button and sees examples of the data type
string appearing in the box.
Figure 4-17 Visual/active
In addition to creating the learning style feedback, the PA uses two different tutoring
strategies: learning-by-doing and learning-by-example. It implements the learning-by-doing
strategy in the advice mode by giving the student hints and not answers. It forces
the student to think about problem solving actions by posing questions as part of the
feedback. Tutorial mode offers interactive exercises. It implements the learning-by-example
strategy by using examples to clarify concepts during advice/hint mode and by
showing examples during the lesson mode.
5 PEDAGOGICAL FRAMEWORK PORTABILITY
One of the goals of this dissertation was to develop a standard methodology that
facilitates the incorporation of learning styles into an intelligent tutoring system (ITS).
The pedagogical framework described in this dissertation provides a unique way of
integrating learning style research into an ITS. This pedagogical framework is domain
independent and can be implemented in any intelligent tutoring system. It uses the
Felder-Silverman learning style model and the accompanying Index of Learning Style
survey to categorize a learner’s learning style.
This framework consists of two parts: the feedback components and the methodology
that uses these components to generate learning style based feedback. The feedback
components are designed to contain feedback that supports different dimensions of the
Felder-Silverman learning style model. Feedback is generated in response to student
actions, which provide the inputs that are used to select and assemble feedback
components. Each component has a set of attributes that are used in the selection
process.
This pedagogical framework can be used in another domain at two different levels:
the conceptual level and the application level. At the conceptual level, a designer can use
the idea of creating feedback components to match an underlying learning style model
and use a feedback generation methodology to generate feedback from these components.
Feedback components allow a designer to modularize feedback content, which makes it
easier to update and maintain the feedback content. If an instructor feels that feedback for
a concept can be made clearer through the use of examples, he/she can add one or more
example components, and the feedback for the concept will then provide examples to the
learner.
Components make it easy to create feedback that is appealing to many different types
of learners. Even if an intelligent tutoring system does not support learning styles, it can
still use different feedback components to generate feedback that would be rich in
presentation and content. It can mix and match verbal explanations with graphical
images, examples, and hands-on activities. Felder (1993) has recommended this type of
approach for classroom instruction to accommodate learners with various learning styles.
At the application level, this pedagogical framework can be used if a designer
chooses to use the Felder-Silverman learning style model, Index of Learning Style, and
the same infrastructure, such as the data interfaces and the relational database system.
The information required to integrate this framework into an intelligent tutoring system
can be categorized as domain information and student information. The following domain
information has to be added to the database.
1. Concept and related concept – Concept and related concept are the basic
building blocks of this pedagogical framework. The domain knowledge is
represented in the form of individual concepts. A concept is the link that ties
all different types of information, such as student actions, errors, and
feedback components in the framework together. Related concept is used
along with the concept to determine the part of the concept that the student is
having difficulty with.
2. List of student actions – A learner can perform a list of actions that are
mapped to each concept. These actions must be maintained in the database.
For example, if a student is creating a class diagram and he/she is working
with the attribute concept, he/she can add/delete/modify an attribute.
3. Concept/error cross reference – All possible errors that a learner can make
while applying a concept must be maintained in the database. For example, if
a student adds an attribute, he/she might generate INC_DATATYPE, which
means that he/she chose the wrong data type for the attribute.
4. Action/explanation phrases cross-reference – A cross-reference of
possible actions and explanation phrases has to be maintained in the
database. The actions are used as an index to articulate student response. For
example, if a student performs the add method action, this action can be used
to get the phrase, “You just added %studentAction%” (studentAction is
replaced with the name of the actual method that the student added).
5. Error/explanation phrase cross-reference – A cross-reference of all
possible errors and explanations of the errors has to be maintained in the
database so that the learner can be informed of what he/she did wrong. For
example, the error code INC_RETURNTYPE would result in the phrase,
“The returntype for your method is not correct.”
6. Feedback components – All feedback components that contain feedback for
all concept/related concepts also have to be added to the database.
7. Feedback components/learning model dimensions cross-reference – This
cross-reference specifies the types of components that can be used for each
dimension of the learning style model.
The student information consists of student learning style, student feedback history,
and used component record. Database tables have to be created to store all student
information.
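As an illustration of the infrastructure setup, two of the domain tables might be created through JDBC along these lines. The connection URL and all table and column names are assumptions for this sketch; the framework's actual schema comes from the provided script:

    import java.sql.*;

    // Sketch: creating two of the domain tables via JDBC. All names here are
    // illustrative; the framework ships with a script that contains the actual
    // table definitions and create table statements.
    class SchemaSetup {
        public static void main(String[] args) throws SQLException {
            try (Connection conn =
                     DriverManager.getConnection("jdbc:mysql://localhost/its");
                 Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(
                    "CREATE TABLE concept_error (" +
                    "  concept VARCHAR(64), error_code VARCHAR(64)," +
                    "  explanation VARCHAR(255))");
                stmt.executeUpdate(
                    "CREATE TABLE feedback_component (" +
                    "  concept VARCHAR(64), related_concept VARCHAR(64)," +
                    "  level INT, type VARCHAR(8), category VARCHAR(2)," +
                    "  presentation_mode VARCHAR(8), content VARCHAR(128)," +
                    "  feedback_text VARCHAR(1024), dimension VARCHAR(16)," +
                    "  status VARCHAR(8))");
            }
        }
    }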
Example:
Let us assume that the pedagogical framework is being integrated into an intelligent
tutoring system that helps students learn basic chemistry concepts, such as an atom.
Given an element name, the students are expected to construct a diagram of an atom. The
students can construct an atom by drawing a circle and then drawing another inner circle
to represent the nucleus. They can draw protons, electrons, and neutrons, and assign them
electrical charges.
The relevant domain knowledge is described in tables 4, 5, and 6. Table 4
describes the concept and related concept. The concept/related concept pair is used to
divide a concept into smaller units that can be used to pinpoint exactly what a student
does not understand about a concept. A concept can be very broad. For example, for the
concept of atom, a student might understand the concept itself but the errors that he/she
makes indicate that he/she does not comprehend the proton part of the atom.
Concept      Related Concept
Atom         Atom
Atom         Nucleus
Atom         Proton
Atom         Neutron
Atom         Electron
Atom         Atomic mass number
Proton       Proton
Neutron      Neutron
Electron     Electron
Proton       Electrical charge
Electron     Electrical charge

Table 4 – Concept/related concept
Table 5 lists all the actions that a student can perform to apply a concept. For
example, when a student is applying the concept of atom, he/she can add an atom; he/she
can add a proton, etc. The purpose of the explanation phrase is to articulate student action
and is used in the feedback message.
Concept    Action                  Explanation Phrase
Atom       Add atom                You just added an atom.
Atom       Add proton              You just added a proton.
Atom       Add electron            You just added an electron.
Atom       Add neutron             You just added a neutron.
Atom       Add atomic mass number  You entered the atomic mass number.
Proton     Add electric charge     You added electrical charge to the proton.

Table 5 – Concept/action/explanation phrases
Table 6 shows the mapping of the concept and the error that the student could make
while applying a concept. The purpose of the explanation phrase is to tell the student
why his/her action is not correct and is used in the feedback message.
Concept    Error Code           Explanation Phrase
Proton     INC_PROTONCHRG       The proton charge is not correct.
Proton     INC_PROTONCNT        The number of protons in your atom is not correct.
Atom       MISS_NEUCLEUS        This atom does not have a nucleus.
Atom       MIS_ELECTRON         The electron configuration is not correct.
Atom       MIS_PROTON           The proton configuration is not correct.
Atom       MIS_NEUTRON          The neutron configuration is not correct.
Electron   INC_ELECTRONCHRG     The electric charges in the atom are not correct.
Atom       INC_ATOMICMASSNUM    The atomic mass number of the atom is not correct.

Table 6 – Error codes/concept/explanation phrases
Let us assume that the student assigned an incorrect atomic mass number to the atom
that he/she constructed. As a result of the student’s erroneous action, the following inputs
were generated, which will be used to create the feedback.
Input                   Value
Concept                 Atom
Action                  Add atomic mass number
Error                   INC_ATOMICMASSNUM
SM                      Mass number
Feedback history        None
Student learning style  Verbal/sensor/reflective/global
Used components         None

Table 7 – Student action record
1. According to this input, the student is having difficulty with the concept of
atom, and specifically the mass number of an atom, so the components that will
be chosen will have the concept set to atom and related concept set to mass
number.
2. Since there is no feedback history, which means that the student has not
received any feedback so far, the feedback level will be set to 1. Only
components with a feedback level of 1 will be chosen.
3. Since there are no used components, all the components will have the usability
factor of 1.
4. According to the student’s learning style, he/she is verbal, which means that the
content display mode will be set to verbal. So only components with verbal
presentation mode will be chosen.
5. Since the student’s learning style is verbal/sensor/reflective/global, the
components that can be used are of type definition, question, application and
example.
6. In summary, the selection criteria for choosing the components would be the
following:
Concept = atom
Related concept = mass number
Feedback level = 1
Dimension = {global, *}
Presentation mode = verbal
Type = {def, ques, appl, exm}
All components that meet these selection criteria will be on the selected
component list. Since the feedback level is set to 1, only two components from the
selected component list will be chosen. These components will be used to generate the
relevant concept feedback part of the feedback message. The feedback message consists
of three parts: student action explanation (SAE), action status (AS), and relevant concept
feedback (RCF). The student action explanation (SAE) part is created by using the
concept and action inputs to select a phrase from table 5. The action status (AS) part of
the feedback message is created by choosing the phrase from table 6 using the concept
and the error code. The SAE and AS would be:
SAE = “You entered atomic mass number.”
AS = “The atomic mass number of the atom is not correct.”
Let us assume that we have the following feedback components:
Figure 5-1 Feedback components
Assume that the two feedback components that are used in this feedback level are the
definition (def) and question (ques) components.
Def = “Mass number is the total number of particles in a nucleus of an atom.”
Ques = “Do you know how many protons and neutrons are in your atom?”
So the relevant concept feedback text would be:
RCF = “Mass number is the total number of particles in a nucleus of an atom.
Do you know how many protons and neutrons are in your atom?”
The feedback message that the student will see would be:
“You entered atomic mass number. The atomic mass number of the atom is not
correct. Mass number is the total number of particles in a nucleus of an atom. Do
you know how many protons and neutrons are in your atom?”
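Tracing the example through the assembly sketch presented earlier, the final message is simply the concatenation of the four pieces; the code below is illustrative:

    // Sketch: assembling the atom example's feedback message from its parts.
    class AtomFeedbackExample {
        public static void main(String[] args) {
            String sae  = "You entered atomic mass number.";
            String as   = "The atomic mass number of the atom is not correct.";
            String def  = "Mass number is the total number of particles"
                        + " in a nucleus of an atom.";
            String ques = "Do you know how many protons and neutrons are in your atom?";
            // SAE + AS + RCF (def and ques are the two level-1 components).
            System.out.println(String.join(" ", sae, as, def, ques));
        }
    }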
This example shows that the pedagogical framework is domain independent and
can easily be implemented in another domain. The task of implementing this framework
can be divided into two parts:
1. Setting up the infrastructure
Setting up an infrastructure refers to the task of creating the database tables used by
the framework and storing information in them. Setting up tables is one
of the easier tasks because a script is provided that contains all database table
definitions and create table statements. This script also sets up the mapping of
learning style dimensions and feedback component types.
Once the tables are created, information can be added through various
methods, such as batch data loading scripts that are provided by the underlying
relational database management system. The domain information required by the
framework (discussed earlier in this section) will have to be created and added by
the ITS designer. The student learning style information is added to the database
when a student completes the online Index of Learning Style Inventory that is
provided with this framework.
2. Creating learning style feedback
Creating learning style feedback is more challenging because it requires not only
creativity but also tools to generate graphical and multimedia files. Fortunately,
there is a wide variety of graphics and multimedia authoring packages, most of
which are user friendly and easy to learn. The documentation provided with the
feedback maintenance tool also makes creating learning style feedback easier; it
describes the pedagogical framework and its components in detail. Once the feedback
content (graphical and textual) is created, it is easy to add the feedback
components to the framework using the feedback maintenance tool (FMT). As feedback
components are added to the framework, they can be viewed, modified, or deleted
with the click of a button.
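As a rough illustration of the infrastructure step, the following sketch creates a feedback component table and a dimension-to-type mapping table in SQLite. The table and column names are assumptions made for illustration; the framework's actual script and schema are not reproduced here.

    import sqlite3

    # Illustrative setup script; the schema is an assumption.
    conn = sqlite3.connect("its_feedback.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS feedback_component (
            id TEXT PRIMARY KEY,         -- e.g. AT_EXMP2
            concept TEXT,
            related_concept TEXT,
            type TEXT,                   -- def, ques, appl, exm, ...
            feedback_level INTEGER,
            dimension TEXT,              -- global, sequential, or *
            presentation_mode TEXT,      -- verbal or visual
            status TEXT,                 -- active or inactive
            text TEXT,
            media_path TEXT
        )""")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS style_component_map (
            dimension TEXT,              -- e.g. sensor, reflective
            component_type TEXT          -- e.g. example, question
        )""")
    conn.commit()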
Implementing this framework requires very little time because of the automated
database script, the documentation, the online Index of Learning Style survey, and
the feedback maintenance tool. What makes this framework unique is that once the
system is functional, an instructor can easily extend the feedback architecture by
adding or modifying feedback components.
6 FEEDBACK MAINTENANCE TOOL
Traditionally, one of the bottlenecks in developing and maintaining an intelligent
tutoring system is maintaining and updating its domain knowledge. This is usually a
cumbersome process that requires not only domain experts but also system designers,
who must elicit and represent the domain knowledge in a form that is usable by the
ITS.
To facilitate the process of knowledge acquisition for the learning style based
pedagogical framework, a feedback maintenance tool has been developed that allows an
instructor/domain expert to extend the pedagogical framework. This feedback
maintenance tool is an easy-to-use, web-based tool that allows the user
(instructor/instructional designer) to delete or modify existing feedback and to
add new feedback for any concept in the domain. It can be used to add feedback for
both the advice mode and the tutorial mode. The remainder of this chapter describes
the feedback maintenance tool in detail.
In order to use the feedback maintenance tool, a valid login is required. Once the user
provides a valid login, he/she sees the following screen (figure 6-1).
Figure 6-1 Feedback Maintenance Interface
This screen gives the user seven options: creating feedback, input advice feedback,
input tutorial feedback, input new concept, view/modify/delete tutorial feedback,
view/modify/delete advice feedback and view/delete concept/related concept.
Choosing the first option “creating feedback” provides the user an overview of
learning style feedback and how to create learning style feedback. Choosing the second
option “Input Advice feedback” allows the user to add feedback for the advice mode.
Once the user clicks on it, he/she will see the following screen (figure 6-2).
Figure 6-2 Input Advice feedback-1
If the user clicks on “existing concept”, he/she will see the screen in figure 6-3,
which allows him/her to add feedback for any existing concept in the domain.
Figure 6-3 Input Advice feedback-2
Figure 6-3 displays concept/related concept pairs and an option to view the current
feedback for each pair. Each concept is broken down into smaller units through the
use of related concepts. There are many different things about a concept that a
learner might not understand; the related concept is used in conjunction with the
concept to specify what the learner might not understand about the given concept.
For example, if the concept is parameter and the related concept is also parameter,
that indicates that the learner does not understand the concept of parameter itself.
If the concept is parameter and the related concept is datatype, it means that the
student does not understand the concept of datatype in the context of a parameter.
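In other words, feedback is keyed by the (concept, related concept) pair. A minimal illustration with hypothetical entries:

    # Feedback is organized around (concept, related_concept) pairs.
    # The entries below are hypothetical illustrations.
    feedback_index = {
        ("parameter", "parameter"): "components explaining the parameter concept itself",
        ("parameter", "datatype"): "components explaining datatypes in the context of a parameter",
    }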
If the user wants to see the current feedback components for a concept, he/she can
click on the “click here” icon, which opens a new window that displays the current
feedback components for the chosen concept. Let us assume that the user wants to
view the feedback for attribute/attribute (concept/related concept), so he/she
clicks on “view feedback” and sees the screen in figure 6-4.
Figure 6-4 Input Advice feedback-3
This screen shows the existing feedback components for the concept attribute and
related concept attribute. It displays the component type, feedback level, dimension
(global/sequential), presentation mode, status (active/inactive), text, and the visual image.
It also enables the user to view the text or the image/multimedia content by clicking on
the “click here” button. For example, clicking on the “click here” icon in figure 6-4 for
the component AT_EXMP2, which is of type example with visual presentation mode,
would show the following screen.
Figure 6-5 Input Advice feedback-4
Figure 6-5 shows the image that the component contains. This particular
component shows an example of attributes. If the user chooses the option “new concept”
in figure 6-2, he/she will be presented with the screen shown in figure 6-6, which will
allow him/her to add a new concept to the system.
Figure 6-6 Input Advice Feedback-5
Once the user enters a new concept/related concept, he/she once again sees the
update screen (figure 6-2), where he/she can choose the “existing concept” option
and add advice feedback as described in this section.
If the user chooses “Input tutorial feedback” from the main screen (figure 6-1),
then the process is the same as for the option “Input Advice Feedback.” The difference
is that the feedback that the user inputs is added to the tutorial mode feedback. If the user
chooses “Input new concept” from the main screen (figure 6-1), he/she sees the
following screen as shown in figure 6-7.
Figure 6-7 Input New Concept
Once the user inputs the concept/related concept and clicks on “submit”, it is added
to the system and the user sees the main screen (figure 6-1). The newly added concept
can be used to add feedback for both the tutorial mode and the advice mode.
If the user chooses the “view/modify/delete tutorial feedback” from the main screen
(figure 6-1), he/she will see the following screen (figure 6-8).
Figure 6-8 View/modify/delete tutorial feedback-1
Once the user clicks on a concept/related concept, he/she sees the screen in figure 6-9.
Figure 6-9 View/modify/delete tutorial fdbck-2
If the user clicks on the “delete” icon, the feedback component is deleted; if the user
clicks on the “modify” icon, he/she sees a screen that allows him/her to alter the
component. If the user chooses the option “view/modify/delete advice feedback” from
the main screen (figure 6-1), he/she will go through the same process as described for
“view/modify/delete tutorial feedback” except he/she will be working with advice
mode feedback. If the user chooses the option “view/delete concept/related concept”
from the main screen (figure 6-1), he/she sees the screen in figure 6-10. The user can
delete a concept/related concept by clicking on the “delete” icon.
Figure 6-10 View/delete concept/related concept-1
The feedback maintenance tool makes it straightforward to extend the feedback
architecture. The user does not need to know how the information is represented in
the system; he/she just has to understand the different feedback components and
their functions. The tool is easy to use and can be learned in a short period of
time, and any feedback that the user inputs is immediately available for the system
to use. It is particularly valuable for domain experts and instructors who want to
modify existing feedback or add new feedback, and it makes the system flexible and
extensible.
7 EVALUATION
The pedagogical framework that is the focus of this dissertation was implemented
and evaluated using DesignFirst-ITS, an intelligent tutoring system that helps students
learn object oriented design. The evaluation process was multi-step and consisted of the
following:
1. Feedback evaluation
2. Learning style feedback effectiveness evaluation
3. Object oriented design concept tutorial evaluation
4. Feedback maintenance tool evaluation
Each of these evaluation types will be discussed in detail in the following sections.
7.1 Feedback evaluation
The feedback evaluation can be divided into two parts. The first part deals with the
relevance of the feedback components to the underlying learning style model, and the
second part evaluates the quality and accuracy of the feedback in relation to the
goals of the system.
The pedagogical framework is based on the Felder-Silverman learning style model
that has four dimensions: verbal/visual, active/reflective, sensor/intuitive, and
global/sequential. This model is accompanied by literature that describes the attributes of
each dimension clearly and provides recommendations on creating materials to suit these
dimensions. Also, this learning style model has been used in other educational systems
and there is a significant amount of literature that describes the various ways in which
content is adapted to different dimensions of the model. The pedagogical framework
components were designed using the learning style literature relevant to this learning
style model. The pedagogical framework component types were created and verified by
Dr. Alec Bodzin from Lehigh’s College of Education. Prof. Bodzin reviewed the
feedback components and suggested modifications to ensure that the feedback
components did indeed match the learning style model. For example, the most obvious
dimension of the model is the verbal/visual dimension which dictates the presentation of
information to the learner. The two components that match this dimension are the
definition component, which contains information in written form and the picture
component that contains information in visual form. The framework maintains a
cross-reference of feedback components with the learning style dimensions; this
cross-reference is used to select the components that apply to a learning style.
One requirement
is that the components must contain the content in the form that is dictated by their
function. For example, the question component must contain a question and the
example component must contain an example.
The second part of the component evaluation is determining if the content that is
contained in these components is accurate and relevant to the goals of the system. In this
case, system refers to DesignFirst-ITS, whose goal is to help novices learn object
oriented design by creating a class diagram. The system is designed to provide feedback
to learners when they make an error while applying a concept. In this case, the feedback
content that is provided to the learner must help him/her understand and clarify the
concept that the learner is having trouble with.
The feedback component content and the feedback that is generated from these
components was verified by Sally Moritz and Sharon Kalafut, who are both experienced
Java instructors and know the object-oriented design and programming domain very well.
The instructors were given an overview of the learning style based pedagogical
framework and the feedback maintenance tool. They were given a questionnaire to fill
out as they reviewed the feedback content. The instructors were asked to look at the
individual components using the feedback maintenance tool. This tool allows an
instructor to view individual feedback components that are used in the feedback
generation process. The feedback maintenance tool interface allowed them to see
individual feedback components by concept/related concept. They viewed the feedback
component content to see if it made sense and explained the concept clearly.
To determine the quality of the feedback that is generated as a response to a student
action, they were asked to create a class diagram in the LehighUML plug-in for the
Eclipse IDE. They used different logins that were already set up in the system with
different learning styles. The questionnaire consisted of eight questions; some were
yes/no questions while other questions required them to choose from a scale of 1 - 5 (1 =
poor, 5 = best). The results of the questionnaire are as follows:
Quest #   Question                                               Average rating/answer
1         Quality of feedback                                    5
2         Quality of visual content                              5
3         Feedback addresses knowledge gap                       4.5
4         Does the feedback support different learning styles    Yes
5         Feedback coherent/easily understood                    Yes
6         Feedback content boring/dull                           No
7         Enough feedback                                        Yes
8         Enough examples in the feedback                        Yes

Table 8 – Feedback evaluation
Figure 7-1 Feedback evaluation (bar chart of average ratings for quality of feedback, how well feedback addresses the knowledge gap, and quality of visual content)
According to the evaluation data, feedback is coherent and easy to understand and
it supports different learning styles. The feedback contains enough examples and is not
dull or boring. The feedback that is provided is sufficient to help a student fix his/her
design. The quality of visual content in the feedback received a rating of 5 as did the
overall quality of the feedback. The measure of how well the feedback addressed the
knowledge gap received a rating of 4.5, which indicates that the feedback message
could add a little more detail specific to the student's error. Another reason for
this rating is that the feedback is created in response to inputs generated by other
components of the system; if those components are not one hundred percent accurate
in their diagnoses, the feedback might not effectively address the student's
knowledge gap.
The evaluation data does indicate that the system is generating quality feedback
that provides enough information for students to fix their object oriented designs.
7.2 Learning style feedback effectiveness evaluation
Evaluating any learning system with human subjects is a challenging task,
particularly for an ITS that teaches object oriented design concepts. One of the biggest
challenges is to find human subjects to evaluate the system. This is especially true
because the target audience for this system is students who are novices to object oriented
design concepts, and enrollments in introductory programming courses are quite low.
Another challenge is to motivate the participants to learn the domain content and to use
the system to work on their assigned problem. If the students are not motivated to learn
the domain content, the learning gain is minimized regardless of how well designed and
comprehensive a given system is.
Nevertheless, the system was evaluated with 42 area high school students in multiple
studies during the spring and summer of 2007. The data from these studies were
compiled and analyzed to determine whether learning style feedback resulted in larger
learning gains.
All students who participated in these studies were novices to object oriented design and
programming.
The evaluation consisted of setting up three different groups: a no-
feedback group which did not receive any feedback at all; a textual-feedback group
which received feedback in the form of plain text; and a learning-style-feedback group
which received feedback that matched their learning style. The materials for the study
were:
1. object oriented (OO) domain concept lecture materials
2. CIMEL multimedia, which further illustrates OO concepts and introduces the
student to using the LehighUML plug-in for the Eclipse Integrated Development
Environment (IDE)
3. an algorithm to create object oriented design from a problem description
4. Index of Learning Style survey – used for categorizing student learning style,
which the students filled out before using the system
5. pre-test - administered before using the system
6. post-test – administered after using the system
7. pedagogical advisor evaluation questionnaire – used for qualitative data,
which the students filled out after using the system
8. DesignFirst-ITS system
9. a problem description for the movie ticket vending machine (assignment that
the students would be working on during the study)
The evaluation process consisted of introducing object oriented design concepts to
students through lecture and multimedia lessons, teaching students an algorithm to create
object oriented design from a problem description, and administering a pre-test and
Index of Learning Style survey (which categorized the student learning style along the
Felder-Silverman Learning Style model and logged the data into the DesignFirst-ITS
database). The pre-test was designed to measure prior knowledge and to give a baseline
on which to compare the post-test. Once the students were ready to work on their
assignment, they logged into the DesignFirst-ITS and started creating their object
oriented designs.
The students who belonged to the no-feedback group did not receive any feedback as
their work was not evaluated by the ITS. As the students in the textual-feedback
group and the learning-style-feedback group were working on their designs, the
DesignFirst-ITS was analyzing these designs and recording every student’s action in the
database. When a student made an error, the textual-feedback group received textual
feedback and the learning-style-feedback group received pedagogical advice that
matched their learning style. Once the students completed their designs, they were given
the post-test and the pedagogical advisor evaluation questionnaire to fill out. The results
of the study are summarized below.
NO-FEEDBACK GROUP:
Hypothesis: There should be no significant learning gains for no-feedback group.
The following table shows details of the data including the sample size for this group.
Sample size (n)      16
Variance             5.98333
Standard deviation   2.44609
Minimum              -3.0
Maximum              6.0
Range                9.0
Average              0.875

Table 9 – No-feedback group data
Figure 7-2 Learning Gain – No-feedback group
Paired t-test: p-value = 0.08
The Box-and-Whisker Plot shows the reduction in errors between the pre-test and the
post-test for all 16 students. The average reduction for this group was 0.875, and
the difference in test scores between the pre-test and post-test ranges between -3
and 6. The p-value (0.08) for the paired t-test indicates that the learning gain for
the no-feedback group is not statistically significant and thus does not reject the
hypothesis. The data are consistent with the expectation that students who do not
receive any feedback will not realize learning gains.
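A paired t-test of this kind can be reproduced with standard statistical tooling. The sketch below uses SciPy; the score lists are illustrative placeholders, not the actual study data.

    # Paired t-test on pre-test vs. post-test error counts.
    # The lists are illustrative placeholders, not the study data.
    from scipy import stats

    pre_errors = [5, 7, 4, 6, 5, 8, 3, 6]     # hypothetical pre-test errors
    post_errors = [4, 6, 4, 5, 5, 7, 3, 5]    # hypothetical post-test errors

    t_stat, p_value = stats.ttest_rel(pre_errors, post_errors)
    avg_reduction = sum(a - b for a, b in zip(pre_errors, post_errors)) / len(pre_errors)
    print(f"average reduction in errors: {avg_reduction:.3f}, p = {p_value:.4f}")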
TEXTUAL-FEEDBACK GROUP:
Hypothesis: There should be learning gains for the textual-feedback group.
Sample size (n)      10
Average              1.2
Variance             8.62222
Standard deviation   2.93636
Minimum              -4.0
Maximum              5.0
Range                9.0

Table 10 – Textual-feedback group data
Figure 7-3 Learning gains – Textual-Feedback group
Paired t-test: p-value = 0.114219
The Box-and-Whisker Plot shows the reduction in errors between the pre-test and the
post-test for all 10 students. The average reduction for this group was 1.2, and the
difference in test scores between the pre-test and post-test ranges between -4 and 5.
The p-value (0.11) for the paired t-test indicates that the learning gain for the
textual-feedback group is not statistically significant. The data do not support the
original hypothesis that the textual-feedback group should realize learning gains.
LEARNING-STYLE-FEEDBACK GROUP:
Hypothesis: Learning-style-feedback group should realize higher gains.
Sample size (n)      16
Average              3.3125
Variance             8.3625
Standard deviation   2.8918
Minimum              -5.0
Maximum              8.0
Range                13.0

Table 11 – Learning-style-feedback group data
Figure 7-4 Learning gains – learning-style-feedback group
Paired t-test: p-value = 0.000179826
The Box-and-Whisker Plot shows the reduction in errors between the pre-test and the
post-test for all 16 students. The average reduction in errors for this group is 3.3,
and the difference in test scores between the pre-test and post-test ranges between
-5 and 8. The p-value (0.0002) for the paired t-test indicates that the learning gain
for the learning-style-feedback group is statistically significant. The data support
the original hypothesis that the learning-style-feedback group should realize
learning gains.
SUMMARY STATISTICS:
Group                     Sample size   Average reduction in errors
No Feedback               16            0.875
Textual Feedback          10            1.2
Learning-Style Feedback   16            3.3125
Total                     42            1.88095

Table 12 – Summary statistics
The following Box-and-Whisker Plot shows the error reduction for each group
between pre-test and the post-test.
Figure 7-5 Learning gains for all three groups
Discussion:
The main objective of these evaluation studies was to test the hypothesis that the
learning style feedback results in higher learning gains for students. To test this
hypothesis, the effects of no feedback, textual feedback, and learning style feedback were
studied for three separate groups. All the materials and processes were the same for all
three groups except when they interacted with the DesignFirst-ITS. The no-feedback
group did not receive any feedback from the system, the textual-feedback group
received feedback in response to their erroneous actions in the form of text only,
and the learning-style-feedback group received feedback from the system that matched
their specific learning style.
Based on the evaluation data, it can be concluded that there is no statistically
significant improvement between the pre-test and post-test scores for the no-feedback
group. This conclusion makes sense because the students did not receive any information
between the pre-test and the post-test. The same conclusion can be drawn from the
evaluation data for the students in the textual-feedback group. Even though these students
did receive feedback from the system for their erroneous actions, there was no significant
improvement between pre-test and post-test scores. The data do not support the
original hypothesis that the textual-feedback group should see learning gains after
using the system. Many factors could have contributed to this lack of improvement:
the small sample size, students not reading the feedback, or students not
understanding the feedback. One of the most likely reasons is that, in general, high
school students do not like to read; the students in the evaluation study confirmed
this by voicing their dislike of reading when they were asked to read the handouts
and feedback carefully.
For the learning-style-feedback group, the evaluation data suggest that the students
who received the learning style feedback did realize learning gains after using the system.
This conclusion makes sense because the students received feedback in response to their
erroneous actions in the form of their own preferred learning style. It is possible that
students were more likely to pay attention to the feedback when presented in a way that
they preferred. Based on data analysis for all three groups, the following conclusions can
be drawn:
1. The data suggests that the no-feedback and textual-feedback groups did not
show any statistically significant learning gains between the pre-test and the
post-test scores.
2. The learning-style-feedback group did achieve statistically significant learning
gains between the pre-test and the post-test.
In addition to the pre-test and the post-test, the students were also given a
pedagogical advisor evaluation survey to determine how well the students liked and
understood the feedback. The students were not required to identify themselves on the
survey so that they could answer the questions without any hesitation. The evaluation
survey consisted of 8 questions, each requiring a yes/no answer. The results of the
survey are as follows:
Survey Question                               % positive answers
Presentation Mode - Visual                    70%
Read Advice                                   90%
Understood Advice                             75%
Advice Helped Understand Errors               72%
Advice Helped Correcting Errors               71%
Liked Images                                  65%
Understood Information Conveyed in Images     69%
Did Not Find PA Annoying                      70%

Table 13 – Pedagogical advisor evaluation
Figure 7-6 Pedagogical advisor survey (bar chart of the percentage of positive responses for each survey question)
The survey results suggest that the system was well received by the students. 90%
of the students said they paid attention to the advice, and more than 70% of them
understood the advice. The majority of the students found the advice helpful in
understanding and correcting their errors. About 70% of the students who received
advice containing images understood the information conveyed in the images, and 65%
of the students liked the images. 70% of the survey participants stated that they did
not find the pedagogical advisor annoying. These results are encouraging and provide
evidence that the system is effective in helping students understand and fix errors.
They are also useful for improving the system, such as creating better images.
7.3 Object Oriented Design Tutorial Evaluation
The object oriented (OO) design tutorial gives students a detailed description of
object oriented concepts based on student learning style. The students brush up on the
OO concepts through examples, verbal definitions, visual images, and interactive
exercises. The tutorial not only adapts the concept explanation to the student’s preferred
learning style, but it also adapts the presentation of the content index. Sally Moritz and
Sharon Kalafut also evaluated the object oriented design tutorial. They were provided
with a link to the tutorial and given a set of different login IDs that were set up in the
system with various learning styles. They were asked to use different logins to view the
object oriented concept tutorial and fill out the tutorial evaluation questionnaire. The
questionnaire contained 10 questions, 5 of which required rating the tutorial in various
categories on a scale of 1 to 5 (1 = poor, 5 = best), while the other 5 questions required a
yes/no answer. The following are the results of the questionnaire:
Ques #   Question                                         Rating/answer
1        Tutorial layout clear                            4.5
2        Ease of use                                      5
3        Concept clarity                                  4.5
4        Overall tutorial rating (usability & content)    4.5
5        Tutorial appealing                               5
6        Enough examples                                  Yes
7        Learning style support                           Yes
8        Visual content clarity                           Yes
9        Visual content easy to comprehend                Yes
10       Visual content appeal                            Yes

Table 14 – Tutorial evaluation
Figure 7-7 Tutorial evaluation - Questions 1-5 (bar chart of average ratings for tutorial appeal, layout clarity, ease of use, concept clarity, and overall rating)
According to the data, the OO tutorial received the highest average rating of 5 for
its appeal and ease of use. The tutorial layout received an average rating of 4.5
because the content index for the verbal/sequential style did not present concepts in
hierarchical order. Concept clarity received an average rating of 4.5 because the
visual illustration for one of the OO concepts did not provide a complete definition.
The overall rating for the tutorial was 4.5.
The evaluation results demonstrate that the OO tutorial is useful in helping students
learn object oriented concepts. One thing to bear in mind is that object oriented
design concepts are not trivial, and one cannot expect a student to learn and
comprehend them just by going through the tutorial. The tutorial, like the
DesignFirst-ITS system, is intended to be used as a tool alongside classroom
instruction.
7.4 Feedback maintenance tool evaluation
Ms. Moritz and Prof. Kalafut also evaluated the feedback maintenance tool, which is
designed to allow an instructor to add, delete, or modify feedback in the system.
They were provided with the following:
1. Instructions on how to access the feedback maintenance tool.
2. Instructions on what to evaluate. They were asked to evaluate:
a. The online user guide that provides an introduction to the pedagogical
framework and its various components.
b. The different options provided by the tool.
3. Their own login IDs to log into the system.
4. A questionnaire to fill out while they evaluated the tool. The questionnaire
contained 8 questions that pertained to different aspects of the tool. Some of these
questions were yes/no questions while others required using a scale of 1 - 5
(1 = easy, 5 = difficult).
The results of the evaluation are summarized below.
Quest #   Question                                                Average rating/answer
1         FMT documentation helpful                               Yes
2         FMT interface easy to use                               Yes
3         All FMT options work                                    Yes
4         Level of difficulty for adding advice mode feedback     1.5
5         Level of difficulty for adding tutorial mode feedback   1.5
6         Level of difficulty for adding new concepts             1.5
7         Able to add new concepts                                Yes
8         Able to view newly added advice                         Yes

Table 15 – Feedback maintenance tool evaluation
Figure 7-8 Feedback maintenance tool evaluation (bar chart of difficulty ratings for adding advice mode feedback, tutorial mode feedback, and new concepts)
The evaluation results show that the feedback maintenance tool documentation was
helpful to the instructors in understanding the pedagogical framework and the
feedback components. The instructors also found the FMT interface easy to use and
were able to easily view and add new feedback. The average difficulty rating for
adding new concepts and new feedback was 1.5. The evaluation results demonstrate
that the feedback maintenance tool is easy to use for adding and modifying feedback
for the advice and tutorial modes, that the tool works well, and that the
accompanying documentation is helpful in understanding the pedagogical framework.
Overall, the evaluation results suggest that the pedagogical framework and the
accompanying tools are well designed and are effective in integrating learning
styles into an ITS. The results also suggest that students realize learning gains
when they are presented with information in a form that matches their learning
styles. These evaluation studies set the stage for more comprehensive studies
involving larger sample sizes and more diverse populations.
8 CONCLUSIONS
Each individual has preferences in how he/she receives and processes information
(Jonassen & Grabowski, 1993). Learning style is the term used to describe these
preferences. Research has shown that accommodating an individual's learning style
helps the learning process (Felder, 1996). As a result, learning style research has
produced many different theories and models, some of which have been applied in
various settings, such as academia and industry, to provide learning support.
The goal of this dissertation was to use learning style research to enhance the
adaptability of an intelligent tutoring system. Current intelligent tutoring systems
do not take an individual's learning style into account while providing learning
support. There are educational systems, such as adaptive hypermedia systems, that do
provide learning style adaptability, but there is no standard methodology to follow
when integrating learning styles into a system.
This dissertation has contributed toward providing a standard methodology for
incorporating learning styles into an intelligent tutoring system by creating a
domain independent pedagogical framework based on the Felder-Silverman learning
style model. The following paragraphs describe the methodology that was used to
create this pedagogical framework, organized around the research questions posed in
chapter one.
1. How can a learning style based feedback architecture be created using a learning
style model?
This question was answered by using the various dimensions of the Felder-Silverman
learning style model to create different types of feedback components.
The Felder-Silverman learning style model has four dimensions: sensing/intuitive,
visual/verbal, active/reflective, and sequential/global. Each component type
contains information in the form that matches the style encompassed by each
dimension of the model. For example, the picture component contains information
in visual form, such as pictures, diagrams, etc. This component is useful for a
visual learner, while the definition component contains information in the form of
text, which is preferred by a verbal learner. The various types of components are
picture, definition, application, scaffold, relation, question, exercise, and example.
Each of these components has certain attributes that identify each component and
its content.
2. How can this feedback architecture be used to create learning style based
feedback?
A feedback generation process (FGP) was developed, which uses the feedback
components to generate feedback that matches an individual's learning style. The
FGP combines inputs, component attributes, and the individual's learning style to
automatically generate feedback from these components.
3. How can this feedback architecture be generalized to make it domain
independent?
This feedback architecture is generalized because it uses feedback components that
are based on a learning style model and not on a particular domain. The
component attributes make it possible to create these components in any domain.
4. How can this feedback architecture be made extendible, such that the instructor
can easily add/update the feedback without requiring any help from the ITS
developer?
This feedback architecture was made extendible by developing a feedback
maintenance tool (FMT) that allows an instructor to update, modify, and/or add
new feedback to the architecture. FMT is a web-based, easy to use tool that can be
used to view, update, and/or add feedback in the architecture.
5. How can this feedback architecture be used to incorporate multiple pedagogical
strategies into an ITS?
The two strategies that were considered for this dissertation were learning-by-doing
and learning-by-example. The learning-by-doing strategy was incorporated to some
extent by providing the student with hints about an erroneous action rather than the
answer. Since learners are not given answers at any level of feedback, they are
forced to think for themselves. The learning-by-example strategy was implemented by
using examples to explain and illustrate domain concepts.
6. How effective is this learning style ITS in helping students understand the domain
knowledge?
This question was answered by implementing this pedagogical framework in
DesignFirst-ITS, an ITS that provides learning support to novices learning object
oriented design. The system was evaluated with high school students who were divided
into three groups: a no-feedback group, a textual-feedback group, and a
learning-style-feedback group. All three groups used the system and received
different types of feedback: no feedback, feedback in the form of text only, and
learning style feedback that matched the individual's learning style. The students
were given a pre-test before using the system and a post-test after using the system.
The evaluation results showed no significant learning gains for the no-feedback and
textual-feedback groups, but statistically significant learning gains for the
learning-style-feedback group.
The following are the contributions made by this dissertation.
1. It provides a novel domain-independent pedagogical framework to integrate
learning styles into an intelligent tutoring system. This framework provides a
standard methodology for ITS developers to adapt the feedback to the needs of
individual learners.
2. This research contributed towards creating a pedagogical advisor in DesignFirst-ITS, an ITS for teaching object-oriented design and programming. The
DesignFirst-ITS was used to provide learning support to high school students,
who are novices to object oriented design and programming. It was found to be
effective in helping them realize learning gains.
3. This research provides a novel graphical user interface for extending the feedback
architecture. This interface is important because it makes it easy to update and
add feedback to the system, making it more flexible.
4. The object-oriented design tutorial can be used by an instructor as a resource to
reinforce object-oriented concepts for students in introductory classes.
9 FUTURE WORK
To a large extent, one of the goals of this dissertation was to facilitate future work
by developing a standard methodology that would make it easier to integrate learning
styles into an intelligent tutoring system. The domain independent pedagogical
framework that is the focus of this dissertation makes it possible for ITS designers to
integrate learning styles into an intelligent tutoring system without starting from scratch.
As part of this dissertation, the framework was implemented in DesignFirst-ITS, in
the object oriented design and programming domain. It was found to be effective in helping
students learn domain concepts. The next logical step would be to implement this
framework in an intelligent tutoring system (ITS) in another domain and evaluate the
effectiveness of learning style based feedback. The feedback maintenance tool that is part
of this pedagogical framework can be used to create and/or update feedback components
for the new ITS. As with any tool, this feedback tool can be extended to create and
maintain domain knowledge information that is used by other components of the system.
10 BIBLIOGRAPHY
Baghaei, N., Mitrovic, A.(2005). COLLECT-UML: Supporting individual and
collaborative learning of UML class diagrams in a constraint-based tutor . In: Rajiv
Khosla, Robert J. Howlett, Lakhmi C. Jain (eds) Proc. KES 2005, Springer-Verlag,
LCNS 3684, pp. 458-464.
Bajraktarevic, N., Hall, W., Fullick, P. (2003). Incorporating learning styles in
hypermedia environment: Empirical evaluation, Workshop on Adaptive Hypermedia and
Adaptive Web-Based Systems, 41-53.
Blank, G. D., Moritz, S. H., DeMarco, D. W. (2005). Objects or Design First? Nineteenth
European Conference on Object-Oriented Programming (ECOOP 2005), Workshop on
Pedagogies and Tools for the Teaching and Learning of Object Oriented Concepts,
Glasgow, Scotland.
Bloom B. S. (1968). Learning for mastery. In Evaluation Comment, 1(2), Los Angeles:
University of California at Los Angeles, Center for the Study of Evaluation of
Instructional Programs, 1-5.
Brown, E.J, Brailsford, T. (2004). Integration of learning style theory in an adaptive
educational hypermedia (AEH) system. Short paper presented at ALT-C 2004, Exeter,
14-16
Brusilovsky, P. (1999). Adaptive and Intelligent Technologies for Web-based Education.
In C. Rollinger and C. Peylo (eds.), Special Issue on Intelligent Systems and
Teleteaching, Künstliche Intelligenz, 4, 19-25.
Brusilovsky, P. (2001). Adaptive Hypermedia. User Modeling and User-Adapted
Interaction, 11(1-2), 87-110.
Brusilovsky, P. (1996). Methods and techniques of adaptive hypermedia. User Modeling
and User-Adapted Interaction, 6 (2-3), pp. 87-129.
Burns, H. L, C. G. Capps. (1988). Foundations of Intelligent Tutoring Systems, Lawrence
Erlbaum Associates, Hillsdale, NJ.
Carver, C. A., Howard, R. A., Lane, W. D. (1999). Enhancing Student Learning through
Hypermedia Courseware and Incorporation of Learning Styles. IEEE Transactions on
Education, 42(1), 22-38.
Claxton, D. S., Murrell, P. (1987). Learning styles: Implications for improving educational practices (Report No. 4). Washington: Association for the Study of Higher
Education.
Corbett, A., Koedinger, K., Anderson, J. (1992). LISP Intelligent Tutoring System:
Research in Skill Acquisition. In J.H. Larkin and R.W. Chabay, eds. Computer-assisted
Instruction and Intelligent Tutoring Systems: Shared Goals and Complementary
Approaches. Hillsdale, NJ: Erlbaum, pp. 73 – 109.
Corbett, A. T., Anderson, J. R. (2001). Locus of feedback control in computer-based
tutoring: Impact on learning rate, achievement and attitudes. Proceedings of CHI
2001, Human Factors in Computing Systems (March 31 - April 5, 2001, Seattle, WA,
USA), ACM, 245-252.
Cornett, C. E. (1983). What you should know about teaching and learning styles.
Bloomington, IN: Phi Delta Kappa (ERIC Document Reproduction Service No.
ED228235).
Curry, L. (1983). An organization of learning style theory and constructs. ERIC
Document, 235, 185.
Curry, L. (1987). Integrating concepts of cognitive or learning style: A review with
attention to psychometric standards. Ottawa, Ontario, Canada: Canadian College of
Health Service Executives.
Dempsey, J.V. and Sales, G.C. (1994). Interactive Instruction and Feedback. Englewood
Cliffs, NJ: Educational Technology Publications.
Dunn, R., Dunn, K. (1978). Teaching students through their individual learning styles: A
practical approach. Reston, VA: Reston Publishing.
Dunn R., Dunn, K., Price, G. E. (1979). Learning Style Inventory. Lawrence, KS: Price
Systems
Dunn R., Dunn, K., Price, G. E. (1989a). Learning Style Inventory. Lawrence, KS: Price
Systems
Dunn R., Dunn, K., Price, G. E. (1982). Productivity, Environmental Preference Survey.
Lawrence, KS: Price Systems.
Dunn R., Dunn, K., Price, G. E. (1989b). Productivity, Environmental Preference Survey.
Lawrence, KS: Price Systems
Felder, R. M., Silverman L. K., (1988). Learning and Teaching Styles. Engineering
Education, 674-681, April 1988.
Felder, R. M., (1996). Matters of Style, ASEE Prism, 6(4), 18-23
Felder, R. M., Felder G. M., Dietz E. J. (1998). A longitudinal study of engineering
student performance and retention. V. Comparisons with traditionally-taught students.
Journal of Engineering Education, 469-480, Oct 1998.
Felder, R., (1993). Reaching the second tier: Learning and teaching styles in college
science education. Journal of College Science Teaching, 23(5), 286-290.
Felder, R.M., Brent, R. (2005). ‘Understanding Student Differences’. Journal of
Engineering Education, Vol. 94, No. 1, pp. 57–72.
Felder, R. M., Solomon, B. A. (2001). Learning styles and strategies [WWW document].
URL http://www.ncsu.edu/effective_teaching/ILSdir/styles.htm North Carolina State
University.
Felder R.M., Spurlin J.E. (2005). "Applications, Reliability, and Validity of the Index of
Learning Styles," Intl. J. Engr. Education, 21(1), 103-112.
Ford, N., Chen, S. Y. (2000). Individual differences, hypermedia navigation and learning:
An empirical study. Journal of Educational Multimedia and Hypermedia, 9(4), 281-312.
Ford, N., Chen, S. Y. (2001). Matching/mismatching revisited: an empirical study of
learning and teaching styles. British Journal of Educational Technology, 32(1), 5-22.
Gardner, H. (1983). Frames of Mind. New York: Basic Books
Gardner, H. (1993). Multiple Intelligences: The theory in practice. New York: Basic
Books.
Grasha, A. (1972). “Observations on Relating Teaching Goals to Student Response Style
and Classroom Methods.” American Psychologist, 27, 244-247.
Gertner, A., VanLehn, K.(2000). Andes: A Coached Problem Solving Environment for
Physics . In G. Gauthier, C. Frasson and K. VanLehn (Eds), Intelligent Tutoring
Systems: 5th International Conference. Berlin: Springer (Lecture Notes in Computer
Science, Vol. 1839), pp. 133-142
Gilbert, J. E., Han, C. Y. (1999). Adapting Instruction in search of ‘a significant
difference’. Journal of Network and Computer Applications, 22(3), 149-160.
Gilbert, J. E., Han, C. Y. (1999a). Arthur: Adapting Instruction to Accommodate
Learning Style. Paper presented at the World Conference of the WWW and Internet,
WebNet'99, Honolulu, USA, 433-438.
Gilbert, J. E., Han, C. Y. (2002). Arthur: A Personalized Instructional System. Journal of
Computing in Higher Education, 14(1), 113-129.
Gilman, D. A. (1969). Comparison of several feedback methods for correcting errors by
computer-assisted instruction. Journal of Educational Psychology, 60(6), 503-508.
Goodman, B., Soller, A., Linton, F., and Gaimari, R. (1997). Encouraging Student
Reflection and Articulation using a Learning Companion. Proceedings of the AI-ED 97
World Conference on Artificial Intelligence in Education, Kobe, Japan, 151-158.
Graesser, A., VanLehn, K., Rose, C., Jordan, P. and Harter, D. (2001). Intelligent
tutoring systems with conversational dialogue. AI Magazine, 22, 39-51.
Graesser, A. C., Wiemer-Hastings, K., Wiemer-Hastings, P., Kreus, R., and the Tutoring
Research Group (1999). AutoTutor: A Simulation of a Human Tutor. Journal of
Cognitive Systems Research 1(1):35-51.
Haukoos, G., and Satterfield, R. (1986). “Learning Style of Minority Students (Native
Americans) and Their Application in Developing Culturally Sensitive Science
Classroom.” Community/Junior College Quarterly 10: 193-201.
Jonassen, D. H., Grabowski, B. L. (1993). Handbook of Individual Differences, Learning
and Instruction. Lawrence Erlbaum Associates.
Keefe, J. W. (1979). Student learning styles: Diagnosing and prescribing programs.
Reston, VA: National Association of Secondary School Principals.
Kelly, D., and Tangney, B. (2002). Incorporating Learning Characteristics into an
Intelligent Tutor. Paper presented at the Sixth International Conference on Intelligent
Tutoring Systems, ITS'02., Biarritz, France, 729-738.
Koedinger, K. (2001). Cognitive Tutors as Modeling Tools and Instructional Models in
Smart Machines in Education. Forbus, Kenneth and Feltovich, Paul, Eds. AAAI
Press/MIT Press, Cambridge, MA. pp. 145-167.
Koedinger, K., Anderson, J., Hadley, W., Mark, M.(1997). Intelligent Tutoring Goes to
School in the Big City. International Journal of Artificial Intelligence in Education, 8(1),
pp. 30-43.
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and
development. Englewood Cliffs, NJ: Prentice-Hall.
Kulhavy, R. W., Stock, W. A. (1989). Feedback in written instruction: The place of
response certitude. Educational Psychology Review, 1(4), 279-308.
Kulik, J. A., Kulik, C. C. (1988). Timing of feedback and verbal learning. Review of
Educational Research, 58(1), 79-97.
Lester, J. C., Converse, S. A., Stone, B. A., Kahler, S. E., Barlow, S. T. (1997) Animated
pedagogical agents and problem-solving effectiveness: A large-scale empirical
evaluation. In Proceedings of the Eighth World Conference on Artificial Intelligence in
Education, 23-30. IOS Press.
Lester, J., Callaway, C., Grégoire, J., Stelling, G., Towns, S., Zettlemoyer, L. (2001).
Animated Pedagogical Agents in Knowledge-Based Learning Environments. In Smart
Machines in Education: The Coming Revolution in Educational Technology, Forbus, K.,
& Feltovich, P. (Eds.), pp. 269-298, AAAI/MIT Press, Menlo Park
Lewis, M.W., Anderson, J. R. (1985). Discrimination of operator schemata in problem
solving: Procedural learning from examples. Cognitive Psychology, 17, 26-65
Litzinger, T. A., Lee, S. H., Wise, J. C., Felder, R. M. (2005). A Study of the
Reliability and Validity of the Felder-Soloman Index of Learning Styles. Proceedings,
2005 ASEE Annual Conference, American Society for Engineering Education.
Livesay, G., Dee, K., Felder, R. M., Hites, L., Nauman, E., O’Neal, E. (2002). Statistical
evaluation of the index of learning styles. Proceedings of the 2002 American Society for
Engineering Education Annual Conference and Exposition, Montreal, Quebec, Canada.
Merrill, D. C., Reiser, B. J., Ranney, M., Trafton, J. G. (1992). Effective tutoring
techniques: A comparison of human tutors and intelligent tutoring systems. The
Journal of the Learning Sciences, 3(2), 277-305.
Messick, S. and Associates (Ed.) (1976). Individuality in Learning. San Francisco:
Jossey-Bass.
Mitrovic, A., Ohlsson, S. (1999). Evaluation of a constraint-based tutor for a database
language, Int. J. Artificial Intelligence in Education.
Moritz, S., Blank, G. (2005). A Design-First Curriculum for Teaching Java in a CS1
Course, SIGCSE Bulletin (inroads), June
Mory E. H. (1996). Feedback Research. In D. H. Jonassen (Ed.), Handbook of research
for educational communications and technology. New York: Simon and Schuster
Macmillan.
Myers, I. B. (1976). Introduction to Type. Gainesville, FL: Center for the
Application of Psychological Type.
Ohlsson, S. (1996) Learning from Performance Errors. Psychological Review 103(2)
241-262.
Paredes, P., Rodriguez, P. (2002). Considering Learning Styles in Adaptive Web-based
Education. Proceedings of the 6th World Multiconference on Systemics, Cybernetics and
Informatics en Orlando, Florida, 481-485.
Person, N. K., Graesser, A. C., Kreuz, R. J., Pomeroy, V. and the Tutoring Research
Group. (2001). Simulating human tutor dialogue moves in AutoTutor. International
Journal of Artificial Intelligence in Education, 12, 23-39.
Reichmann, S., Grasha, A. (1974). “A Rational Approach to Developing and Assessing
the Construct Validity of a Student Learning Style Scale Instrument.” Journal of
Psychology 87: 213-23.
Roper, W. J. (1977). Feedback in computer assisted instruction. Programmed Learning
and Educational Technology, 14(1), 43-49.
Rosati, P. (1999). Specific differences and similarities in the learning preferences of
engineering students. Proceedings of the 29th ASEE/IEEE Frontiers in Education
Conference, (Session 12c1), San Juan, Puerto Rico.
Sims, R. R., Sims, S. J. (1995). The Importance of Learning Styles: Understanding the
Implications for Learning, Course Design, and Education. Westport, CT: Greenwood
Press.
Smith, N. G., Bridge, J., Clarke, E. (2002). An evaluation of students' performance
based on their preferred learning styles. In Pudlowski, Z. J. (Ed.), Proceedings of
the 3rd UNESCO International Center for Engineering Education (UICEE) Global Congress
on Engineering Education, 284-287. Glasgow, Scotland.
Specht, M., Oppermann, R. (1998). ACE: Adaptive CourseWare Environment. New
Review of HyperMedia and MultiMedia, 4, 141-161.
Stone, B., Lester, J. (1996). Dynamically sequencing an animated pedagogical agent.
In Proceedings of the Thirteenth National Conference on Artificial Intelligence,
pages 424-431.
Suraweera, P., and Mitrovic, A. (2002). Kermit: a constraint-based tutor for database
modeling. In Proc. 6th Int. Conf. on Intelligent Tutoring Systems (ITS).
Thomas, L., Ratcliffe, M., Woodbury, J., Jarman, E. (2002). Learning styles and
performance in the introductory programming sequence, Proceedings of the 33rd
SIGCSE technical symposium on Computer science education (pp. 33-37). Cincinnati,
Kentucky: ACM Press.
Triantafillou, E., Pomportsis, A., Demetriadis, S. (2003). The design and the formative
evaluation of an adaptive educational system based on cognitive styles. Computers and
Education, 41, 87-103.
VanLehn, K. (1996). Conceptual and meta learning during coached problem solving. In
Frasson, C.; Gauthier, G.; and Lesgold, A., eds., Proceedings of the 3rd International
Conference on Intelligent Tutoring Systems ITS ’96. Springer. 29–47.
Van Zwanenberg, N., Wilkinson, L. J., Anderson, A. (2000). Felder and Silverman's
Index of Learning Styles and Honey and Mumford's Learning Styles Questionnaire: How
do they compare and do they predict academic performance? Educational Psychology,
20(3), 365-381.
Wager, W. and Wager, S. (1985). Presenting questions, processing responses, and
providing feedback in CAI. Journal of Instructional Development, 8(4),2-8.
Weber, G., and Brusilovsky, P. (2001). ELM-ART: An Adaptive Versatile System for
Web-based Instruction. In International Journal of Artificial Intelligence in Education.
12, pp. 351-384.
Weber, G., and Möllenberg, A. (1995). ELM programming environment: A tutoring
system for LISP beginners. In K. F. Wender, F. Schmalhofer, & H.-D. Böcker (Eds.),
Cognition and computer programming. Norwood, NJ: Ablex Publishing Corporation.
Weber, G., Schult, T. (1998). CBR for Tutoring and Help Systems. In Case-Based
Reasoning Technology: From Foundations to Applications. Lecture Notes in Artificial
Intelligence 1400, Springer Verlag.
Wei, F., Moritz, S., Parvez, S., and Blank, G. D. (2005). A Student Model for
Object-Oriented Design and Programming. The Tenth Annual Consortium for Computing
Sciences in Colleges Northeastern Conference, Providence, RI.
Witkin, H. A. (1954). Personality through perception: An experimental and clinical study.
New York: Harper.
Zywno, M. S. (2003). “A Contribution of Validation of Score Meaning for
Felder-Soloman's Index of Learning Styles.” Proceedings of the 2003 Annual ASEE
Conference. Washington, DC: ASEE.