Tutoring

TalkBank Tutorial Database Guide
This guide provides documentation regarding the Tutorial Interactions corpora in the
TalkBank database. TalkBank is an international system for the exchange of data on
spoken language interactions. The majority of the corpora in TalkBank have either audio
or video media linked to transcripts. All transcripts are formatted in the CHAT system
and can be automatically converted to XML using the CHAT2XML convertor. To jump to
the relevant section, click on the page number to the right of the corpus.
1 CIRCLE ...................................................... 2
1.1 Algebra .................................................... 2
1.2 Electricity and Electronics ................................ 2
1.3 TRG1 -- Psychology Research Methods ........................ 3
2 DISPEL ....................................................... 6
2.1 Design setting ............................................. 6
2.2 The games .................................................. 7
2.3 The roles .................................................. 8
2.4 The feedback from the pilot data collection ................ 8
2.5 The sessions ............................................... 9
3 Frederiksen .................................................. 11
4 Graesser ..................................................... 12
4.1.1 Overview ................................................. 12
4.1.2 Students ................................................. 12
4.1.3 Tutors ................................................... 12
4.1.4 Topics ................................................... 13
4.1.5 Method ................................................... 13
1 CIRCLE
1.1 Algebra
Neil T. Heffernan
Carnegie Mellon University
This archive contains the text of a tutorial dialogue between an experienced math
tutor and 8th grade students. The task being tutored was algebra symbolization.
The data were collected to help inform the construction of more human-like dialogues
inside an intelligent tutoring system. In the transcription, the student's written remarks
are indicated in brackets. The tutor is an experienced current middle school math teacher.
The student is a seventh grade male who is a student in the tutor's classroom.
The student had a list of problems in front of him, and each problem's text is reprinted
(underlined) when the student reads the problem. The student had a blank sheet of paper
on which he wrote his answers. Generally, his paper included only attempts at
symbolization, with a few accompanying words possibly indicating the units. The session
lasted approximately one hour and consisted of 17 problems, 8 of which the
student answered correctly on the first try. This transcription was made from a
videotape of the session. Pauses are indicated with colons; one colon indicates a
pause of about half a second.
* Description.html: An HTML file containing a complete description of the corpus.
* Transcript.html: An HTML file containing the text of a one-hour tutoring session.
Heffernan, N. T. (1998). Intelligent Tutoring Systems have Forgotten the Tutor: Adding
a Cognitive Model of Human Tutors. Thesis Proposal, Computer Science
Department, Carnegie Mellon University.
1.2 Electricity and Electronics
Carolyn Penstein Rose
Learning Research and Development Center
University of Pittsburgh.
This corpus was collected as part of the Computational Model of Tutorial Dialogue
project, under the supervision of Johanna Moore, under funding provided by the Office
of Naval Research, Cognitive and Neural Sciences Division, Grants N00014-91-J-1691
and N00014-93-I-0812. The purpose of this study was to collect examples of students
interacting with a human tutor while working through a computer-based basic electricity
and electronics curriculum in order to evaluate the relative effectiveness of two
alternative tutoring strategies. The first tutoring strategy is a Socratic style strategy in
which the tutor attempts to encourage the student to construct knowledge for himself by
leading him through a Socratic style directed line of reasoning. In the second strategy, a
more Didactic style, the tutor is more forthcoming with explanations up front but then
encourages the student to actively engage in the learning process by making an
inference, rephrasing, or applying the knowledge from the explanation.
We started with an available BE&E course developed in the VIVIDS authoring
environment at NPRDC. This course included four lessons covering Current, Voltage
(both AC and DC), Resistance and Power. It also included four labs allowing the
students to make basic measurements using a multimeter. To this original curriculum,
we added two problem solving labs covering Ohm's Law and the Power Law. 44
students participated in our experiment. Each student's participation spanned two
sessions of 2-2.5 hours and included a pre- and post-test in addition to the
computer-based curriculum. The student interacted with the on-line BE&E course as
well as with a human tutor through a chat interface. Tutor and student were in the
same room but
separated by a partition. The student's video signal was split so that the tutor was able to
monitor the student's progress through the curriculum. Beginning with a small pilot
study, we collected data from 8 students. Based on our observations from this small
study about which parts of the curriculum were problematic for students, we focused
the pre- and post-test used in the final experiment. In the final experiment we
collected data from 36 students, 20 of whom we used for a rule-gain analysis in order
to draw some
preliminary conclusions about the relative effectiveness of our two alternative tutoring
strategies. For each of the 36 students who participated in the final experiment, this
corpus includes a log file from each of the two sessions they participated in, a
calculations file that records all of the operations they performed using an on-line
calculator during the lessons, and a notes file that lists the condition the student was
assigned to, Math and Verbal SAT scores when available, dates of the two sessions,
pre- and post-test scores, and time on task for each lesson and lab. No special
annotations are
included in the log files apart from a time stamp for each contribution to the dialogue
from student and tutor. A coding scheme is currently being developed at the University
of Edinburgh.
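The two added problem-solving labs covered Ohm's Law (V = I * R) and the Power Law (P = V * I). As a rough illustration of the kind of calculation students performed with the on-line calculator during those labs (this is a sketch of the underlying physics, not the corpus tooling; function names are hypothetical):

```python
def ohms_law(voltage=None, current=None, resistance=None):
    """Solve Ohm's Law, V = I * R, for whichever quantity is missing.

    Exactly one of the three arguments must be None; the other two are given.
    """
    if sum(x is None for x in (voltage, current, resistance)) != 1:
        raise ValueError("exactly one quantity must be unknown")
    if voltage is None:
        return current * resistance
    if current is None:
        return voltage / resistance
    return voltage / current

def power_law(voltage, current):
    """Electrical power dissipated: P = V * I (watts)."""
    return voltage * current

# A 12 V source across a 4-ohm resistor draws 3 A and dissipates 36 W.
i = ohms_law(voltage=12.0, resistance=4.0)   # 3.0 A
p = power_law(12.0, i)                        # 36.0 W
```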
Rose, C. P., Di Eugenio, B., & Moore, J. D. (1999). A Dialogue Based Tutoring System
for Basic Electricity and Electronics. Proceedings of AI in Education.
1.3 TRG1 -- Psychology Research Methods
Art Graesser, a-graesser@memphis.edu
Natalie Person, person@rhodes.edu.
Institute for Intelligent Systems
University of Memphis
The transcripts were collected so that we could perform an in-depth analysis of
human-to-human discourse. The tutoring protocols were collected from upper-division
college students who were enrolled in a course on research methods in psychology. This
particular sample was chosen for a number of reasons. First, these sessions focused on
topics in which tutoring is known to be comparatively effective. That is, according to
available studies (Cohen et al., 1982; Fitz-Gibbon, 1977), topics which involve
quantitative skills (e.g., mathematics) lead to more positive outcomes than topics which
focus on nonquantitative skills (e.g., creative writing). Second, this corpus is
representative of the tutors and students in normal tutoring environments. Tutors are
normally older students, paraprofessionals, and adult volunteers who have not been
extensively trained in tutoring techniques (Cohen et al., 1982; Fitz-Gibbon, 1977).
Third, this corpus is representative of college-level students at all levels of achievement
rather than being restricted to students who are having difficulty in the course.
The tutoring protocols were collected from 27 undergraduate students enrolled in a
psychology research methodology course at the University of Memphis. The tutors were
three psychology graduate students who had each performed well in undergraduate-level
and graduate-level research methodology courses. The course instructor selected six
topics that are normally troublesome for students in the course. Each topic had related
subtopics that were to be covered in the tutoring session. Topics included variables,
graphs, statistics, hypothesis to design, factorial designs, and interactions. The tutoring
sessions spanned an eight-week period. Only one topic was covered per week. The room
used for the tutoring sessions was equipped with a video camera, a television set, a
marker board, colored markers, and the Cozby textbook. The television screen was
covered during the entire session. The camera was positioned so that the student and the
entire marker board were in the picture. Therefore, the transcripts of the tutoring
sessions included both spoken utterances and messages on the marker board.
Data from these transcripts are reported in the following publications:
Graesser, A. C., Bowers, C. A., Hacker, D. J., & Person, N. K. (1997). An anatomy of
naturalistic tutoring. In K. Hogan & M. Pressley (Eds.), Scaffolding student learning:
Instructional approaches and issues (pp. 145-184). Cambridge, MA: Brookline Books.
Graesser, A. C., & Person, N. K. (1994). Question asking during tutoring. American
Educational Research Journal, 31, 104-137.
Graesser, A. C., Person, N. K., and Huber, J. D. (1993). Question asking during tutoring
and in the design of educational software. In M. Rabinowitz (Ed.), Cognitive science
foundations of instructional software. Hillsdale, NJ: Lawrence Erlbaum Associates.
Graesser, A. C., Person, N. K., and Huber, J. D. (1992). Mechanisms that generate
questions. In T. Lauer, E. Peacock, & A. C. Graesser (Eds.), Questions and
information systems. Hillsdale, NJ: Lawrence Erlbaum Associates.
Graesser, A. C., Person, N. K., & Magliano, J. P. (1995). Collaborative dialogue patterns
in naturalistic one-to-one tutoring sessions. Applied Cognitive Psychology, 9, 1-28.
Person, N. K., Kreuz, R. J., Zwaan, R., & Graesser, A. C. (1995). Pragmatics and
pedagogy: Conversational rules and politeness strategies may inhibit effective
tutoring. Cognition and Instruction, 13, 161-188.
Person, N. K., Graesser, A. C., Magliano, J. P., & Kreuz, R. J. (1994). Inferring what the
student knows in one-to-one tutoring: The role of student questions and answers.
Learning and Individual Differences, 6, 205-229.
Person, N. K., & Graesser, A. C. (1999). Evolution of discourse in cross-age tutoring. In
A.M. O'Donnell and A. King (Eds.), Cognitive perspectives on peer learning (pp. 69-86). Mahwah, NJ: Erlbaum.
2 DISPEL
Nikolinka Collier
Gina Joue
Computer Science
University College
Dublin, Ireland
The Dispel corpus was collected in the autumn of 2001 in the Department of
Computer Science, University College Dublin. It is a set of 30 two-channel, 16-bit
recordings, made on a Sony DAT (MP2). Excerpts of approximately five minutes
from each dialogue were annotated using CHAT. The corpus is available on 3 CDs,
which contain the sound and annotated files, the design of the data collection, the
settings, and demographic details of the participants. It is also available through
TalkBank at
http://www.talkbank.org/data/tutor/dispel/
The collection of Dispel was aimed at providing sufficient material for the analysis
of discourse particles (DPs) in spontaneous speech. Although many corpora are
available for discourse analysis, few account for DPs consistently. The core reason for
that omission is that DPs are often considered peripheral to speech processing, so it is
common practice for a corpus to lack a relevant representation of DPs. This gap can
reflect the design settings of a data collection. The design is often oriented towards
collaboration at some level, either a topic of interest or solving a task. The subjects'
behavior is usually controlled through situations that are outside their daily routine,
or the interaction is with partners who are not familiar with each other, so they are
restricted in the range of social interaction that they apply to the collaboration. In
this collection the aim was to preserve the loose task-collaboration pattern of the
interaction without dispensing with the phenomena that occur in more informal
communication. The objective was to provoke intensive interaction by promoting a
topic from everyday life that the participants consider entertainment. This usually
neither increases nor decreases the number of tokens, but expands the range of DPs
produced.
2.1 Design setting
The design setting for collecting the dialogues approximated a collaborative
introductory tutorial between two participants: a Beginner and an Expert. The two
participants are aware that they are engaged in a role-playing game, and their task is
to grasp the basic notions of one of the following computer games: Age of Empires or
Civilization. The speakers are told that the aim of their session is an introductory
tutorial about the computer game under discussion. One of the participants, the
Expert, is familiar with the game; the other, the Beginner, has not played it before.
The Expert instructs the Beginner while they are playing the game on the course of
action to take and on general tactics in their improvised demo tutorial. They both sit
next to each other, looking at the computer screen and following the events of the
game. The Beginner has control over the keyboard and is entitled to ask for and to get
help from the Expert on all the movements and strategy puzzles in the game.
The design was aimed at providing conditions that allow for at least the average
frequency of DPs found in task-oriented interactions. The roles are essentially those
of the novice and the expert. The environment was intended to remotely simulate an
interaction between a user and an intelligent interactive help assistant. Apart from
providing enough DP tokens, the corpus may also offer insights for studying
interaction patterns outside classroom conversations, for human-computer interaction.
2.2 The games
The choice of the games was motivated in two ways. First, enough subjects had to
be familiar with the game. Second, the game's strategy had to be complex enough to
create a need for interaction between the participants, and especially to prompt the
Beginner to ask questions and elicit discussions about the better and worse choices in
the game.
In order to choose a set of games, we interviewed the subjects about their
proficiency in a game. There were five games that a large group of the interviewees
was familiar with: Team Fortress, Sim City, Tomb Raider, Age of Empires, and
Civilization. Sim City was a complex game, but the subjects engaged in conversation
about it without playing the game. A second trial, with Team Fortress, showed us that
the players' proficiency levels were very similar and they did not interact:
instructions are not demanded when the two players are equally familiar with the
spatial and problem layout of the game. Tomb Raider involved many puzzles, both in
spatial orientation in the virtual world and in handling monsters and finding bounties,
but as the game involves only one avatar, it is not demanding enough to provoke the
Beginner to make inquiries about the course of action to be taken.
After the pilot study on a variety of games, the choice was restricted to Age of
Empires and Civilization II. Both are strategy computer games whose main purpose is
to create and evolve a civilization strong and smart enough to survive internal
difficulties (diseases, lack of resources, discontented citizens, and so on) and external
ones (hostile and friendly tribes or civilizations, attacks and/or wars) in order to
become the most prosperous ruling civilization. They were complicated enough to
engage participants in conversation, but still not so demanding as to prevent them
from discussing points while playing the game.
There are also a few differences and points to be made about each game. In the
following sections we present two introductory descriptions of each game: one is a
summary of the commercial descriptions of the games, and the second is a transcribed
version of one of the introductions given by the subjects.
Civilization II is a journey through time where players are challenged to create their
own version of history as they match wits against the world's greatest leaders and build,
expand, and rule a world-dominating civilization to stand the test of time. The leader
(the player) rules the citizens with the help of advisors (that is, the interactive help of
the game). The main occupations of the citizens are to build a strong base, explore
new and uncharted territories in search of valuable resources, and conquer enemies
through force or diplomacy. An important point about this game is that there are
technologies to be chosen for the benefit of the population, and they determine the
stages that the participants undergo in the development of their civilization.
Age of Empires is an epic real-time strategy game spanning 10,000 years, in which
players are the guiding spirit in the evolution of small Stone Age tribes. Starting with
minimal resources, players are challenged to build their tribes into civilizations.
Gamers can choose from one of several ways to win the game, including world
domination by conquering enemy civilizations, exploration of the known world, and
economic victory through the accumulation of wealth. The players have a choice of
twelve civilizations, a technology tree for selecting the next step in the evolution,
dozens of units, and randomly generated maps. Although the game sets players within
a historical context, it is supplied with a built-in scenario editor so the players can
create their own conflicts and scenarios, and it allows up to eight players in
multiplayer mode.
The corpus involves 26 participants: 17 male and 9 female. The participants are
graduate students and staff members in the Department of Computer Science, UCD.
English is the first language of all participants. The participants were predominantly
Irish, with the exception of three subjects: two Americans and one English participant.
The participants were mostly between 20 and 30 years old; the youngest was 22 and
the oldest 36. The demographic details of the participants are listed in Table 1 below.
2.3 The roles
The roles in the dialogues were based on the domain knowledge of the participants.
There are two roles: Beginner and Expert. The group of Experts comprised people
who were familiar with the game; to qualify as an Expert, a person had to have spent
at least 20 hours playing the game under discussion. The group of Beginners, on the
other hand, comprised people who had never played the chosen game before.
In the orthographic transcription we have chosen the speaker codes provided as
part of the tiers available for encoding participant roles in the tools of the CHILDES
system. The Expert takes the code TEA, for teacher, and the Beginner takes the code
STU, for student.
2.4 The feedback from the pilot data collection
In the beginning, a few pilot sessions were run with people who had different
levels of familiarity with a game but were nevertheless all familiar with it. This
worked for people with very contrasting proficiency in the game. The choice of the
games was motivated by two main criteria: the first was diversity of subjects, and the
second was a complexity of the game that would induce intensive discussion in the
sessions. The first criterion demanded that enough subjects be familiar with the game
in order to have a variety of both Experts and Beginners. The second required the
games' objectives to be challenging enough to create a need for interaction between
the participants, and especially to prompt the Beginner to ask questions and elicit
discussions about the course of action they should take for better results.
In the initial design, the proficiency of the subjects was not positioned at the two
opposing ends of the knowledge scale. The intention was to use subjects who had
different knowledge of one and the same game but were both somewhat familiar with
it, having spent at least five hours exploring it. It proved difficult to evaluate
proficiency in the games precisely based on the time the subjects estimated they had
spent exploring the game. Problems emerged when sessions were carried out with
people who had almost the same level of knowledge in the domain: there was almost
no verbal interaction between these participants, as they were equally proficient and
there was little that could prompt inquiries by either of them. The setting was
therefore changed to participants at opposite ends of the scale of domain knowledge,
so that there would be one Expert with an intermediate to advanced level of play and
one Beginner who had never played the game before.
Another factor in the intensity of the interaction was the level of familiarity
between the participants, as well as the amount of time they had spent together in that
problem-solving environment. As the aim of the data collection was to elicit an
abundance of discourse particles, the interaction had to be intensive enough to allow
their natural occurrence. The intensity of the interaction was gauged by frequent
overlapping speech and by signals of surprise or frustration at the unexpected
outcomes of certain actions.
2.5 The sessions
Originally the conversations between the participants were approximately 10 to 15
minutes long. From the collected sound data, an extract of about 5 minutes was
chosen from each dialogue and transcribed in order to measure the frequency of
disfluencies and to investigate their functions in these dialogues. After listening
through the collected data, the extracts were restricted in most instances to the first
five minutes of the introduction, when the participants interact about the goals of the
game and are exposed to the task of exploring the realms of their world and the
possibilities they have in it.
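Measuring DP frequency over such an extract amounts to scanning each speaker's main tier and counting tokens from a particle inventory. The following is only a minimal sketch of that procedure; the particle inventory and the example lines are illustrative, not drawn from the actual Dispel transcripts or its coding scheme.

```python
import re
from collections import Counter

# Hypothetical inventory of single-word discourse-particle candidates.
PARTICLES = {"em", "eh", "like", "well"}

def count_particles(chat_lines):
    """Count candidate DP tokens per speaker in CHAT-style main tiers.

    Expects main tiers of the form '*TEA: ...' or '*STU: ...'; headers and
    dependent tiers (e.g. '%com: ...') are skipped.
    """
    counts = Counter()
    for line in chat_lines:
        m = re.match(r"\*(\w+):\s*(.*)", line)
        if not m:
            continue  # not a main tier
        speaker, utterance = m.groups()
        tokens = re.findall(r"[a-z']+", utterance.lower())
        counts[speaker] += sum(t in PARTICLES for t in tokens)
    return counts

extract = [
    "*STU: em what do I do now ?",
    "*TEA: well you click on the villager , like this .",
    "%com: both look at the screen",
]
counts = count_particles(extract)  # e.g. a Counter mapping speaker -> DP count
```

Dividing each speaker's count by the extract duration would then give the per-minute DP frequency the annotation was designed to support.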
Some of the sessions were more successful than others, in the sense that the
participants reached a more advanced stage in the game they were playing. This was
usually related to the participants' proficiency in either similar games or game playing
in general. The most challenged Beginners were those who had not played the games
before and had no experience of virtual environments. There are basic conventions
about how movements are performed, or how the display switches between a bird's-eye
view of the world being explored and information on current resources or political
development. For example, one of the most frequent sources of disorientation in
Civilization II was the pop-up window giving statistics on population growth and
necessary resources, or announcing when a certain technology had been developed.
The statistics showed that, out of eight sessions in this particular game, six of the
eight participants playing the role of the Beginner asked "What is going on?", while in
Age of Empires, where the game has no pop-up menus or tutorials, this question was
never asked. There was also a general sense of detachment from the avatars among
the more experienced players, while those who had no knowledge of the games, or of
game scenarios in general, seemed to concentrate more on the avatars as personalities.
More about the peculiarities of each session can be read in sessions.txt.
3 Frederiksen
Carl Frederiksen
Psychology
1205 Dr. Penfield Ave.
Montreal Canada
carl.frederiksen@mcgill.ca
This corpus contains two transcripts linked to videos of tutorial instruction in aspects of
the analysis of variance (ANOVA) in statistics.
4 Graesser
Art Graesser, a-graesser@memphis.edu
Natalie Person, person@rhodes.edu.
Institute for Intelligent Systems
University of Memphis
4.1.1 Overview
The transcripts were collected so that we could perform an in-depth analysis of
human-to-human discourse. The tutoring protocols were collected from upper-division
college students who were enrolled in a course on research methods in psychology. This
particular sample was chosen for a number of reasons. First, these sessions focused on
topics in which tutoring is known to be comparatively effective. That is, according to
available studies (Cohen et al., 1982; Fitz-Gibbon, 1977), topics which involve
quantitative skills (e.g., mathematics) lead to more positive outcomes than topics which
focus on nonquantitative skills (e.g., creative writing). Second, this corpus is
representative of the tutors and students in normal tutoring environments. Tutors are
normally older students, paraprofessionals, and adult volunteers who have not been
extensively trained in tutoring techniques (Cohen et al., 1982; Fitz-Gibbon, 1977).
Third, this corpus is representative of college-level students at all levels of achievement
rather than being restricted to students who are having difficulty in the course.
4.1.2 Students
The tutoring protocols were collected from 27 undergraduate students enrolled in a
psychology research methodology course at the University of Memphis. The sample
included 9 males between the ages of 18 and 25, 12 females between the ages 18 and 25,
and 6 females over the age of 25. All students enrolled in the course participated in the
tutoring sessions in order to fulfill a course requirement (6% of the total points in the
course). Therefore, the tutoring protocols involved a representative sample of college
students rather than a sample restricted to students who were having difficulty in the
course.
4.1.3 Tutors
The tutors were three psychology graduate students who had each performed well in
undergraduate- and graduate-level research methodology courses. The tutor sample
consisted of one male and two females. These tutors were a subset of the tutors used in
previous studies. In the previous studies, these tutors comprised the normal tutoring
condition. The tutors in the other condition were given special instructions and could
therefore not be included in this analysis of normal tutoring. Each of these tutors had
tutored students on a few occasions prior to this study, but none in the area of research
methods. Therefore, the tutors had a modest amount of tutoring experience, but they did
not have any formal training in the tutoring process. It is important to point out that
these characteristics are representative of most of the tutoring that takes place in school
systems. That is, tutors are usually older students, paraprofessionals, or adult volunteers
who have moderately high domain knowledge and minimal training on the tutoring
process (Cohen et al., 1982; Fitz-Gibbon, 1977). Each tutor was paid $500 for serving as
a tutor in 18 sessions.
4.1.4 Topics
The course instructor selected six topics that are normally troublesome for students
in the course. Each topic had related subtopics that were to be covered in the tutoring
session. The topics and subtopics are specified below.
* VARIABLES: operational definitions, types of scales, values of variables
* GRAPHS: frequency distributions, plotting means, histograms
* STATISTICS: decision matrix, Type I and II errors, t-tests, probabilities
* HYPOTHESIS TO DESIGN: formulating a hypothesis, practical constraints, control groups, design, statistical analyses
* FACTORIAL DESIGNS: independent variables, dependent variables, statistics, main effects, cells, interactions
* INTERACTIONS: independent variables, main effects, types of interactions, statistical significance
The students were exposed to the material on two occasions prior to their
participation in the tutoring sessions. First, each topic was covered in a lecture by the
instructor before that topic was covered in the tutoring session. Second, each student
was required to read specific pages in a research methods text (Methods in Behavioral
Research, Cozby, 1989) prior to the tutoring session. The students, therefore, had
multiple chances to learn the material.
4.1.5 Method
The tutoring sessions spanned an eight-week period. Only one topic was covered per
week. The topics covered during the first three weeks were variables, graphs, and
statistics, respectively. A two-week break followed the first three weeks of tutoring. The
remaining three topics (i.e., hypothesis to design, factorial designs, and interactions)
were covered during the subsequent three weeks.
The room used for the tutoring sessions was equipped with a video camera, a
television set, a marker board, colored markers, and the Cozby textbook. The television
screen was covered during the entire session. The camera was positioned so that the
student and the entire marker board were in the picture. Therefore, the transcripts of the
tutoring sessions included both spoken utterances and messages on the marker board.
Prior to a tutoring session, the students were told that they would receive tutoring on
particular pages in the Cozby text. When a student entered the tutoring room, the student
was instructed to sit in view of the camera and to read a topic card aloud. The tutoring
session then proceeded in the direction that the tutor and student saw fit. The three tutors
were not given a specific format to follow. They were instructed, however, to avoid
simply lecturing to the student. Each tutoring session lasted approximately 45-60
minutes.
Each of the 27 students participated in two tutoring sessions. A counterbalancing
scheme was designed so that (a) a student never had the same tutor twice, (b) each tutor
covered all six topics, (c) each tutor was assigned to 18 tutoring sessions, and (d) a
student was tutored once during the first three weeks and once during the second three
weeks. Therefore, each tutor tutored three students on each of the six topics, which
yielded 54 tutoring sessions. Ten of the 54 sessions could not be transcribed due to audio
problems. Therefore, the analyses included a total of 44 tutoring sessions.
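Constraints (a) through (d) can be satisfied by a simple rotation scheme over the three tutors and six topics. The sketch below is an illustrative reconstruction of one such schedule, not the authors' actual assignment, and verifies that the stated constraints hold.

```python
from collections import Counter

FIRST_HALF = ["variables", "graphs", "statistics"]
SECOND_HALF = ["hypothesis to design", "factorial designs", "interactions"]

def build_schedule(n_students=27, n_tutors=3):
    """Return (student, tutor, topic, half) tuples satisfying (a)-(d)."""
    sessions = []
    group = n_students // n_tutors            # 9 students per tutor group
    for s in range(n_students):
        t1 = s // group                       # first-half tutor
        t2 = (t1 + 1) % n_tutors              # rotated tutor: never the same (a)
        topic_idx = (s // n_tutors) % 3       # 3 students per tutor per topic
        sessions.append((s, t1, FIRST_HALF[topic_idx], 1))
        sessions.append((s, t2, SECOND_HALF[topic_idx], 2))
    return sessions

sched = build_schedule()
# (a) no student meets the same tutor twice
assert all(a[1] != b[1] for a, b in zip(sched[::2], sched[1::2]))
# (b) each tutor covers all six topics
topics_by_tutor = {t: {top for _, tt, top, _ in sched if tt == t} for t in range(3)}
assert all(len(v) == 6 for v in topics_by_tutor.values())
# (c) 18 sessions per tutor; (d) one session in each half
per_tutor = Counter(t for _, t, _, _ in sched)
assert all(per_tutor[t] == 18 for t in range(3))
assert len(sched) == 54   # 3 tutors x 6 topics x 3 students
```

The rotation guarantees the second-half tutor always differs from the first, and indexing topics by student position gives each tutor exactly three students per topic, yielding the 54 sessions reported above.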
Transcribers received a one-hour training session on how to transcribe the protocols.
They were instructed to transcribe the entire tutoring sessions verbatim, including all
"ums," "ahs," word fragments, broken sentences, and pauses. The transcribers specified
whether an utterance was made by the student or tutor. In addition, transcribers noted
messages that appeared on the marker board, hand gestures, head nods, and
simultaneous speech acts that occurred between the student and tutor. Each written
transcription was verified for accuracy by a research assistant who spot-checked random
segments of the videotapes.
Data from these transcripts are reported in the following publications:
Graesser, A. C., Bowers, C. A., Hacker, D. J., & Person, N. K. (1997). An anatomy of
naturalistic tutoring. In K. Hogan & M. Pressley (Eds.), Scaffolding student learning:
Instructional approaches and issues (pp. 145-184). Cambridge, MA: Brookline Books.
Graesser, A. C., & Person, N. K. (1994). Question asking during tutoring. American
Educational Research Journal, 31, 104-137.
Graesser, A. C., Person, N. K., and Huber, J. D. (1993). Question asking during tutoring
and in the design of educational software. In M. Rabinowitz (Ed.), Cognitive science
foundations of instructional software. Hillsdale, NJ: Lawrence Erlbaum Associates.
Graesser, A. C., Person, N. K., and Huber, J. D. (1992). Mechanisms that generate
questions. In T. Lauer, E. Peacock, & A. C. Graesser (Eds.), Questions and
information systems. Hillsdale, NJ: Lawrence Erlbaum Associates.
Graesser, A. C., Person, N. K., & Magliano, J. P. (1995). Collaborative dialogue patterns
in naturalistic one-to-one tutoring sessions. Applied Cognitive Psychology, 9, 1-28.
Person, N. K., Kreuz, R. J., Zwaan, R., & Graesser, A. C. (1995). Pragmatics and
pedagogy: Conversational rules and politeness strategies may inhibit effective
tutoring. Cognition and Instruction, 13, 161-188.
Person, N. K., Graesser, A. C., Magliano, J. P., & Kreuz, R. J. (1994). Inferring what the
student knows in one-to-one tutoring: The role of student questions and answers.
Learning and Individual Differences, 6, 205-229.
Person, N. K., & Graesser, A. C. (1999). Evolution of discourse in cross-age tutoring. In
A.M. O'Donnell and A. King (Eds.), Cognitive perspectives on peer learning (pp. 69-86). Mahwah, NJ: Erlbaum.