Intelligent Tutoring Systems

Jim Warren
Professor of Health Informatics
Outline
• Dimensions of intelligent tutoring systems (ITS)
• Examples
– AutoTutor
– LISP tutor
• Implications for future of learning and MOOCs
Some basics from Brusilovsky
• Student model types
– What does it hold?
• Scalar estimate of expertise (‘B+ for COMPSCI 101’)
• Overlay of the domain topics
– 0/1 (or more granular) for each domain concept (‘loop structure, tick; recursion, cross’)
• Error (or ‘buggy’) model
– Includes common ways that a student might misconceive (‘always borrowing from the leftmost digit in 3- or 4-digit subtraction’)
– Is it executable?
• Can I ‘run’ the model to estimate what a student at a certain level might say/do? (see the code sketch below)
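To make the three model types concrete, here is a minimal Python sketch of a student model holding a scalar grade, an overlay, and a ‘buggy’ component. All names, values and the mastery threshold are illustrative assumptions, not from Brusilovsky.

```python
# Hypothetical sketch only: names, values and the 0.8 mastery threshold
# are invented for illustration, not taken from Brusilovsky.
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    overall_grade: str = "B+"  # scalar estimate of expertise ('B+ for COMPSCI 101')
    overlay: dict[str, float] = field(default_factory=dict)  # per-concept mastery
    misconceptions: set[str] = field(default_factory=set)    # 'buggy' model

    def knows(self, concept: str, threshold: float = 0.8) -> bool:
        return self.overlay.get(concept, 0.0) >= threshold

student = StudentModel(
    overlay={"loop structure": 0.9, "recursion": 0.2},
    misconceptions={"always borrows from the leftmost digit in subtraction"},
)
print(student.knows("loop structure"))  # True: 'tick'
print(student.knows("recursion"))       # False: 'cross'
```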
How to acquire/update the student model?
• Implicit
– Watch how they act and infer their knowledge (‘ah, he didn’t initialise the iterator variable; he’s not very familiar with writing loops’)
• Explicit
– Ask them a question that tests a concept and rate the correctness of the answer
(note that this is not the same kind of explicit approach as asking the student whether they think they know something)
• Inferred
– From domain structure: ‘he doesn’t know about variables, so it’s safe to say he won’t know about arrays’
– From background: ‘well, she passed COMPSCI 101, so she must know about variables’ or ‘he’s solving these problems quickly, I’ll skip ahead’
(all three routes are sketched in code below)
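A hedged sketch of the three update routes over a plain mastery dictionary; the prerequisite graph and the update magnitudes are invented assumptions, not from the lecture.

```python
# Invented prerequisite graph and update magnitudes, for illustration only.
PREREQS = {"arrays": ["variables"], "recursion": ["functions"]}

def update_implicit(overlay: dict, concept: str, observed_error: bool) -> None:
    # Implicit: watch behaviour and nudge the mastery estimate up or down
    m = overlay.get(concept, 0.5)
    overlay[concept] = max(0.0, m - 0.3) if observed_error else min(1.0, m + 0.1)

def update_explicit(overlay: dict, concept: str, answer_correct: bool) -> None:
    # Explicit: a direct test of the concept is strong evidence either way
    overlay[concept] = 0.9 if answer_correct else 0.1

def infer_from_structure(overlay: dict, concept: str) -> None:
    # Inferred: an unknown prerequisite implies the concept is unknown too
    if any(overlay.get(p, 0.0) < 0.8 for p in PREREQS.get(concept, [])):
        overlay.setdefault(concept, 0.0)

overlay = {"variables": 0.2}
update_implicit(overlay, "loop structure", observed_error=True)  # missed iterator init
infer_from_structure(overlay, "arrays")  # variables unknown => arrays unknown too
print(overlay)  # {'variables': 0.2, 'loop structure': 0.2, 'arrays': 0.0}
```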
AutoTutor
• Design with a theory of how you will influence the user
– “AutoTutor adopts the educational philosophy that students learn by actively constructing explanations and elaborations of the material”
• The dialog tactics are aimed at implementing the theory (a toy move-selection sketch follows below)
– E.g. pumps (“What else?”)
– Hints, prompts, assertions
– Backchannel feedback (nodding at important nouns)
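One way the escalation from pumps to hints to prompts to assertions could be sequenced is by how much of the expected answer the student has already covered. The thresholds below are invented, not AutoTutor’s actual policy.

```python
# Illustrative only: a toy escalation from pump to hint to prompt to
# assertion, driven by how much of the expected answer is covered.
def next_dialog_move(coverage: float) -> str:
    """coverage = fraction of expected concepts the student has stated."""
    if coverage >= 0.9:
        return "positive feedback"       # answer is covered: move on
    if coverage >= 0.6:
        return "pump: 'What else?'"      # close; keep the student talking
    if coverage >= 0.3:
        return "hint"                    # point at a missing concept
    if coverage > 0.0:
        return "prompt"                  # elicit one specific word or idea
    return "assertion"                   # tell them the missing content

for c in (0.95, 0.7, 0.4, 0.1, 0.0):
    print(c, "->", next_dialog_move(c))
```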
AutoTutor
• https://www.youtube.com/watch?v=aPcoZPjL2G8
• Sidney D'Mello and Art Graesser. 2013. AutoTutor and Affective AutoTutor: Learning by talking with cognitively and emotionally intelligent computers that talk back. ACM Trans. Interact. Intell. Syst. 2, 4, Article 23 (January 2013), 39 pages.
Solving further problems
• Latent Semantic Analysis (LSA) allows assessment of the student’s response (a rough LSA sketch follows below)
– Good/bad, and which of a set of expected concepts are involved
• Other refinements are addressing limitations of the established approach
– ATLAS: knowledge construction dialogs to elicit fundamental principles
– WHY2: engaging in qualitative reasoning on physics concepts
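A rough stand-in for LSA scoring, using scikit-learn’s truncated SVD over TF-IDF plus cosine similarity. The expectation texts are invented, and a real system would train the semantic space on a large corpus rather than this tiny one.

```python
# Toy LSA-style scoring: which expected concepts does the student's
# answer cover, and how well? All texts here are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

expectations = [
    "the force of gravity accelerates both objects equally",
    "air resistance slows the lighter object more",
    "mass does not affect the rate of free fall",
]
student_answer = ["heavier and lighter things fall at the same rate"]

# Fit the vocabulary on everything so the answer's words are represented
vec = TfidfVectorizer().fit(expectations + student_answer)
svd = TruncatedSVD(n_components=2, random_state=0)
space = svd.fit_transform(vec.transform(expectations))
answer = svd.transform(vec.transform(student_answer))

for text, sim in zip(expectations, cosine_similarity(answer, space)[0]):
    print(f"{sim:+.2f}  {text}")
```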
LISP tutor
• Developed way back in the early 1980s at Carnegie Mellon
– A tutor for the (then, at least) popular AI programming language, LISP
– Had an underlying rule-based system for the rules of programming as well as how to tutor
• Interface had 3 windows
– User code
– Tutor feedback
– Goal hierarchy (as reminder of task)
LISP tutor
[Figures: an example coding rule and an example of tutor feedback]
Thoughts on the LISP tutor: First, the system itself needs to be an expert
• Productions for use of LISP language
– Simple rules for how to use particular functions
– Higher-level rules about how to tackle programming tasks
• Unclear
– to what extent it could author its own solutions
– how the problem is expressed
• In practice, for the tutorials, the general structure of the solution appears to be pretty well spelled out for the system (a toy production sketch follows below)
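In the same spirit as the LISP Tutor’s productions (though not copied from it), a production can be modelled as a condition-action pair over the current goal; this toy version only knows one goal.

```python
# A toy production system: each rule matches a goal and proposes code.
def rule_sum_list(goal: str):
    """IF the goal is to sum a list THEN use a recursive template."""
    if goal == "sum a list":
        return "(defun sum (lst) (if (null lst) 0 (+ (car lst) (sum (cdr lst)))))"
    return None

RULES = [rule_sum_list]

def solve(goal: str) -> str:
    for rule in RULES:
        code = rule(goal)
        if code is not None:
            return code
    raise LookupError(f"no production matches goal: {goal!r}")

print(solve("sum a list"))
```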
Second, know how students go wrong
• The LISP Tutor is a great example of a system with a strong ‘buggy rule’ component (sketched below)
– A lot of the system development effort went into buggy rules: “325 production rules about planning and writing LISP programs and 475 buggy versions of those rules”
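A buggy-rule component can be sketched as correct productions paired with known-wrong variants, so a match yields a named misconception rather than a bare ‘incorrect’. The fragments and diagnoses below are invented.

```python
# Invented buggy-rule table: map the recursive argument the student wrote
# to a diagnosis (None marks the correct variant).
BUGGY_RULES = {
    "(cdr lst)": None,  # correct: recurse on the tail
    "(car lst)": "recursing on the head instead of the tail of the list",
    "lst": "recursing on the whole list: the recursion never terminates",
}

def diagnose(recursive_arg: str) -> str:
    if recursive_arg not in BUGGY_RULES:
        return "no matching rule: fall back to generic feedback"
    return BUGGY_RULES[recursive_arg] or "matches the correct rule"

print(diagnose("(car lst)"))
print(diagnose("(cdr lst)"))
```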
Third, have a model for learning (which drives the model for interaction)
• Hard to find a deep educational theory here, but dedicated to rapid feedback
• So the system is a ‘critic’
– Offers support only when it detects that it’s needed (or when asked)
• This requires the system to stay closely synchronized with the student’s intentions
• Aided by the template of the intelligent editor
– Could call it ‘mixed-initiative’
• Student can request explanation
• Also, the system is constantly prompting the student to select options (generally only one of which is ‘correct’); the critic policy is sketched below
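The critic policy might be summarized as: stay silent while the student is on a recognised path, speak on a buggy match or an explicit help request. A toy sketch, with invented code fragments and messages:

```python
# Toy 'critic' interaction policy with invented fragments and messages.
def critique(step: str, asked_for_help: bool) -> str | None:
    BUGGY = {"(sum lst)": "You're recursing on the whole list; it won't terminate."}
    if asked_for_help:
        return "Explanation: split the problem into a base case and a recursive case."
    for fragment, message in BUGGY.items():
        if fragment in step:
            return message   # immediate feedback when a buggy rule matches
    return None              # otherwise stay silent and let the student work

print(critique("(+ (car lst) (sum lst))", asked_for_help=False))
print(critique("(+ (car lst) (sum (cdr lst)))", asked_for_help=False))  # None: on track
```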
Fourth, have a lesson plan
• A curriculum of 8 progressively more challenging topics
Fifth, evaluate
• Have a meaningful control
– Human tutor and ‘usual’ (on your own, or in lecture)
• Dependent measures
– E.g. time to learn
• They found
– 40 hours for lecture (according to a poll of students)
– 26.5 hours (extrapolating drop-outs) for learning on-your-own
– 15 hours with the LISP tutor
– 11.4 hours with experienced human tutors
– Performance after completing the recursion module: about equal
Lastly, utterly fail to realise your potential
• The paper’s last sentence leads us to expect results that are ‘nothing short of revolutionary’ once access to computers with 1MB of RAM is commonplace
• Where are the artificially intelligent tutors?
MOOCs (Massive Open Online Courses): A good thing
• Not quite sure why it didn’t all happen immediately after the LISP tutor, but… here we are
• Reasons that this might be a good level of global free education service
– E.g. it scales so well that it can be a ‘public good’ function of a coalition of universities
An ITS research agenda for MOOCs:
1. Agents in MOOC design
• “Intelligent” or otherwise
– Could offer ‘critic’ functionality on design
• Like Fischer’s critics, have rules for good educational material design and point out violations (a toy critic sketch follows below)
• Could apply to individual exercises or to larger schema (e.g. too much focus on one type of presentation or favouring one learning style)
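Invented examples in the style of Fischer’s critics: rules that inspect a hypothetical module description and point out violations of simple design guidelines. The module format and thresholds are assumptions.

```python
# Hypothetical design critics over a made-up MOOC module description.
def critic_presentation_mix(module: dict):
    kinds = {item["kind"] for item in module["items"]}
    if kinds == {"video"}:
        return "Only video segments: consider adding readings or exercises."

def critic_exercise_density(module: dict):
    n_items = len(module["items"])
    n_exercises = sum(item["kind"] == "exercise" for item in module["items"])
    if n_exercises < n_items / 4:
        return "Fewer than 1 in 4 items is an exercise: learners get little practice."

module = {"items": [{"kind": "video"}] * 6}
for critic in (critic_presentation_mix, critic_exercise_density):
    if (msg := critic(module)):
        print(msg)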
2. Agents for MOOC delivery/analytics
• Lots of users provide a strong indication of usage patterns
– Potential to communicate patterns to the course manager, tutor or designer (for the next offering) about
• Little-used segments
• Areas with poor performance (e.g. requiring many attempts and being a point of drop-out)
– May be able to offer diagnostic critique on the likely problem (a toy analytics pass follows below)
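A toy analytics pass over invented per-segment counts, flagging little-used segments and likely drop-out points for the course designer; the thresholds are arbitrary assumptions.

```python
# Invented per-segment usage counts and arbitrary flagging thresholds.
segments = [
    {"id": "1.1", "views": 980, "completions": 960},
    {"id": "1.2", "views": 950, "completions": 600},   # many give up here
    {"id": "1.3", "views": 40,  "completions": 35},    # almost nobody visits
]
for seg in segments:
    if seg["views"] < 100:
        print(f"{seg['id']}: little used ({seg['views']} views)")
    elif seg["completions"] / seg["views"] < 0.7:
        print(f"{seg['id']}: possible drop-out point "
              f"({seg['completions']}/{seg['views']} complete)")
```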
3. Agents for delivery: modelling the user
• Learn who the users (i.e. student users) are
– Background knowledge, learning style, aims
• Individual starting point for attaining specific competencies
• Individualise presentation style
• Don’t necessarily need to provide the same syllabus to everyone if aims differ (sketched below)
– Depending on what it means in the end to have ‘completed’ the MOOC
• All the usual techniques apply
– Ask, directly assess, infer
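One possible sketch of syllabus individualisation: walk a hypothetical prerequisite graph back from the student’s aims, skipping what the model says they already know.

```python
# Invented prerequisite graph; modules_for is a hypothetical helper.
PREREQS = {"arrays": ["variables"], "recursion": ["functions"], "functions": []}

def modules_for(aims: list, known: set) -> list:
    needed, stack = [], list(aims)
    while stack:
        m = stack.pop()
        if m in known or m in needed:
            continue  # already mastered, or already scheduled
        needed.append(m)
        stack.extend(PREREQS.get(m, []))
    return list(reversed(needed))  # prerequisites first

print(modules_for(aims=["recursion"], known={"variables"}))
# ['functions', 'recursion']
```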
4. MOOCs for assessment
• As per user model of aims
– Maybe not the same test for everyone
• And can we help with the ‘cheating’ problem?
– Easier if we use speech, and maybe video
• E.g. detect individual prosody of speech
– Mismatch of user-modelled performance and actual performance could be at least a cue for human attention (a sketch follows below)
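The mismatch cue could be as simple as comparing the model’s predicted score with the actual one and flagging large gaps for human attention; the tolerance here is an arbitrary assumption.

```python
# Hypothetical mismatch cue: sudden over-performance relative to the
# student model is flagged for human review, not treated as proof.
def flag_for_review(predicted: float, actual: float, tolerance: float = 0.3) -> bool:
    """predicted/actual are scores in [0, 1] from the model and the test."""
    return (actual - predicted) > tolerance

print(flag_for_review(predicted=0.4, actual=0.9))   # True: worth a look
print(flag_for_review(predicted=0.7, actual=0.75))  # False
```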
5. Oh, and, actual intelligent tutors
• Y’know, the LISP Tutor etc.
• Individual tutoring might work even better when combined with the social aspects of MOOCs
Conclusion
• ITSs model the user
– Long-term: to guide instruction across a curriculum
– Medium-term: to assess learning achievement and interests
– Short-term: to structure dialog and provide feedback (including back-channels)
• They really should be able to revolutionize learning with enough focused effort