
Analyzing On-Line Collaborative Dialogues: The OXEnTCHÊ–Chat

Ana Cláudia Vieira, Lamartine Teixeira, Aline Timóteo, Patrícia Tedesco and Flávia Barros

Universidade Federal de Pernambuco

Centro de Informática

Recife – PE Brasil

Phone: +55 8121268430

E-mail: {achv, lat2, alt, pcart, fab}@cin.ufpe.br

Abstract. Internet-based virtual learning environments allow participants to refine their knowledge by interacting with their peers. They also offer a way to escape from the isolation seen in CAI and ITS systems. However, simply allowing participants to interact is not enough to eliminate the feeling of isolation and to motivate students. Recent research in Computer Supported Collaborative Learning has been investigating ways to minimize these problems. This paper presents the OXEnTCHÊ–Chat, a chat tool coupled with an automatic dialogue classifier which analyses on-line interaction and provides just-in-time feedback to both instructors and learners. Feedback is provided through reports, which can be user-specific or about the whole dialogue. The tool also includes a chatterbot, which plays the role of an automatic coordinator. The implemented prototype of OXEnTCHÊ–Chat has been evaluated and the results obtained are very satisfactory.

1. Introduction

Since the 1970s, research in the area of Computing in Education has been looking for ways to improve learning rates with the help of computers [1]. Until the mid-1990s, computational educational systems focused on offering individual assistance to students (e.g., Computer Assisted Instruction (CAI) and early Intelligent Tutoring Systems (ITS)). As a consequence, students could only work in isolation, frequently feeling unmotivated to spend long hours on this task.

Currently, the available information and communication technologies (ICTs) provide means for the development of virtual group work/learning systems [2] at considerably low cost. This scenario has favoured the emergence of virtual learning environments on the Internet (e.g., WebCT [3], Blackboard [4], TelEduc [5]). One of the several benefits of group work is that participants can refine their knowledge by interacting with each other. It also offers a way to escape from the isolation seen in CAI and ITS systems.

However, simply offering technology for interaction between the participants of virtual learning environments is not enough to eliminate the feeling of isolation. The students are not able to see their peers, nor to feel that they are part of a “community”. As a result, they tend to become unmotivated [6, 7], and drop out of on-line courses fairly frequently.

Recent research in Computer Supported Collaborative Learning (CSCL) [8] has been investigating ways of helping users to: (1) feel more motivated; and (2) achieve better performance in collaborative learning environments. One way to tackle problem (1) is to provide the interface with an animated agent that interacts with the students. In fact, studies have shown that these software agents facilitate human-computer interaction and are able to influence users' behaviour [9]. Regarding issue (2), one possibility is to monitor the collaboration process, analysing it and providing feedback to the users on how to participate better in the interaction. Besides, the system should also keep the instructor informed about the interaction (so that s/he can decide if, when and how to intervene or change pedagogical practices) [10].

In this light, we developed the OXEnTCHÊ–Chat, a tool that tackles the above problems by monitoring the interaction process and offering feedback to users. The system provides a chat tool coupled with an automatic dialogue classifier which analyses the on-line interaction and provides just-in-time feedback to both instructors and learners whenever they request it.

The tool offers two kinds of feedback reports: one containing general information about the dialogue (e.g. chat duration, number of users, total number and type of utterances used); and one containing specific information about a given user's participation, as well as tips on how to improve it. The system also includes a chatterbot [11], which plays the role of an automatic coordinator (helping to maintain the dialogue focus, and trying to motivate students to engage in the interaction). The tool has been evaluated with two different groups, and the results obtained are very satisfactory.

The remainder of this paper is organised as follows. Section 2 presents a brief review of the state of the art in systems that analyse collaboration. Section 3 describes the OXEnTCHÊ-Chat tool, and Section 4 discusses experiments and results. Finally, Section 5 presents conclusions and suggestions for further work.

2. Collaborative Learning Systems that Monitor the Interaction

In order to be able to foster more productive interactions, current collaborative learning systems that monitor the participants' interaction typically focus their analysis on one of two levels: (1) the participant's individual actions; or (2) the interaction as a whole. Of the five systems discussed in this section, the first two focus on (1), whereas the other three focus on (2).

LeCS (Learning from Case Studies) [12] is a collaborative learning environment for case studies. In order to solve their case, participants follow a methodology consisting of seven steps. At the end of each step, the group should send a partial answer to the system. An interface agent monitors the development of the solution, as well as each user's participation. This agent sends messages to the students, reminding them of steps they have forgotten and encouraging remiss students to participate more.

COLER (COllaborative Learning Environment for Entity-Relationship modelling) [13] is an Internet-based collaborative learning environment. Students first work individually (in a private workspace) and then collaborate to produce an Entity-Relationship (E-R) model. Each student has an automated coach, which gives feedback whenever a difference between his/her individual E-R model and the one built by the group is detected. The coach uses Decision Trees [14] to decide how to present feedback.

DEGREE (Distance Environment for GRoup ExperiencEs) [15] monitors the interaction of distant learners in a discussion forum (i.e., participants work asynchronously) in order to support its pedagogical decisions; the system's analysis is based on Activity Theory [16]. The system sends messages to the students with the aim of helping them reflect on the solution-building process, as well as on the quality of their collaboration. It also provides feedback about the group's performance.

COMET (A Collaborative Object Modelling Environment) [17] is a system developed so that teams can collaboratively solve object-oriented design problems using the Object Modelling Technique (OMT). The system uses sentence openers (e.g. “I think”, “I agree”) in order to analyse the ongoing interaction. The chat log stores information about the conversation, such as date, day of the week, time of intervention, user login and sentence openers used. COMET uses Hidden Markov Models to analyse the interaction and assess the quality of knowledge sharing.

MArCo (Artificial Conflict Mediator, in Portuguese) [18] includes an artificial conflict mediator that monitors the dialogue and detects conflicts, giving the participants tips on how to proceed. The mediator uses a Belief-Desire-Intention (BDI) [19] model in order to reason about the dialogue and decide how to intervene. MArCo's interventions are restricted to the moments where a conflict has been detected.

Apart from DEGREE, current systems that monitor on-line collaboration tend to concentrate their feedback either on the users' specific actions or on the whole interaction. On the one hand, by concentrating only on particular actions, systems can miss opportunities for improving group performance. On the other hand, by concentrating on the whole interaction, systems can miss opportunities for engaging students in the collaborative process, and thus fail to motivate them properly.

3. The OXEnTCHÊ–Chat

The OXEnTCHÊ-Chat [20] is a tool that tackles the problems of lack of motivation and low group performance by providing feedback to individual users as well as to the group as a whole. The system provides a chat tool coupled with an automatic dialogue classifier which analyses the on-line interaction and provides just-in-time feedback to instructors/teachers and learners. Teachers receive feedback reports on both the group and individual student performances (and thus can evaluate students and change pedagogical practices), whereas students can only check their individual performance (and reflect on it).

This combination of automated dialogue analysis, just-in-time feedback, and the concern with both teachers and students constitutes a novel approach. The OXEnTCHÊ-Chat is an Internet-based tool, implemented in Java. Its architecture is explained in detail in Section 3.1.

3.1 Tool's Architecture

OXEnTCHÊ-Chat adopts a client-server architecture (Fig. 1). The system consists of two packages, chat and analysis. Package chat runs on the client machines, and contains the chat interfaces. When users make a contribution to the dialogue (which can be either a sentence or a request for feedback), it is sent to package analysis.

Package analysis runs on the server, and is responsible for classifying the ongoing dialogue and for generating feedback. This package comprises five modules: Analysis Controller, Subject Classifier, Feature Extractor, Dialogue Classifier, and Report Manager. There are also two databases: Log, which stores individual users' logs and the whole dialogue log; and Ontology, which stores the ontologies for various subject domains. Package analysis also includes the Bot Agent.

The Analysis Controller (AC) performs three functions: (1) to receive users' contributions to the dialogue; (2) to receive students'/teachers' requests for feedback; and (3) to send relevant messages to the Bot. When the AC receives a contribution to the dialogue, it stores this contribution in the whole dialogue log, as well as in the corresponding user's log. When the AC receives a student's request for feedback, it retrieves the corresponding individual dialogue log and sends it to the Subject Classifier (SC). If the request is from the teacher, the AC retrieves the whole dialogue log as well as any individual logs requested; the retrieved logs are then sent to the SC. The AC also forwards to the Bot all messages directed to it (e.g., a query about a concept definition).
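To make the AC's dispatch behaviour concrete, the sketch below shows how such a controller might store contributions and route feedback requests. All class and method names here are our own illustration, not the actual OXEnTCHÊ-Chat code.

```java
// Hypothetical sketch of the Analysis Controller's dispatch logic
// (names are illustrative, not the tool's actual implementation).
import java.util.HashMap;
import java.util.Map;

public class AnalysisController {

    private final StringBuilder wholeDialogueLog = new StringBuilder();
    private final Map<String, StringBuilder> userLogs = new HashMap<>();

    // (1) Store each contribution in the whole-dialogue log and in the user's log.
    public void receiveContribution(String user, String utterance) {
        wholeDialogueLog.append(user).append(": ").append(utterance).append('\n');
        userLogs.computeIfAbsent(user, u -> new StringBuilder())
                .append(utterance).append('\n');
    }

    // (2) On a feedback request, retrieve the relevant log; in the real tool
    // the retrieved log is then forwarded to the Subject Classifier.
    public String handleFeedbackRequest(String requester, boolean isTeacher) {
        return isTeacher
                ? wholeDialogueLog.toString()  // teachers get the whole dialogue
                : userLogs.getOrDefault(requester, new StringBuilder()).toString();
    }

    // (3) Messages addressed to the Bot would be forwarded here (omitted).
}
```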

The SC analyses the dialogue and identifies whether or not the participants have discussed the subject the teacher proposed for that chat. This analysis is done by querying the relevant domain ontology (stored in the Ontology database). Currently, there are six ontologies available: Introduction to Artificial Intelligence, Intelligent Agents, Multi-Agent Systems, Knowledge Representation, Machine Learning and Project Management. When the SC verifies that the students are really discussing the proposed subject, it sends the dialogue log to the Feature Extractor (FE) for further analysis. If not, the SC sends a message to the Report Manager (RM), asking it to generate a Standard report. The SC also informs the Bot Agent about the subject under discussion, so that it can provide relevant web links to the participants.
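A minimal sketch of how such a subject check could work, assuming the domain ontology can be flattened into a set of concept terms; the threshold and all names are our assumptions, not the tool's actual logic.

```java
// Illustrative on-topic check against a flattened ontology term set.
import java.util.Set;

public class SubjectClassifier {

    // Counts dialogue tokens matching concepts of the proposed subject's
    // ontology; the dialogue counts as on-topic above a minimum ratio.
    public boolean isOnTopic(String dialogueLog, Set<String> ontologyTerms) {
        String[] tokens = dialogueLog.toLowerCase().split("\\W+");
        long hits = java.util.Arrays.stream(tokens)
                .filter(ontologyTerms::contains)
                .count();
        double ratio = tokens.length == 0 ? 0.0 : (double) hits / tokens.length;
        return ratio >= 0.02; // illustrative threshold, not the tool's value
    }
}
```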

The FE calculates the number of collaborative skills [21, 22] used by each user and in the complete dialogue. It also counts the total number of dialogue utterances, the number of participants and the total chat time. These figures are then sent to the Dialogue Classifier (DC).

The DC is responsible for classifying the dialogue as effective or non-effective. Dialogues are classified as effective when there is significant use of collaborative skills (e.g. creative conflict) that indicate the users' reflection on the subject. Currently, the DC uses one of two techniques: an MLP neural network or a Decision Tree. In order to train these classifiers, we manually tagged a corpus of 200 dialogues collected from the Internet, and then used 100 dialogues for training, 50 for testing, and 50 for cross-validation. The DC sends its classification to the RM, which composes the final analysis report.
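The sketch below illustrates the kind of feature vector the FE hands to the DC and the shape of the resulting effective/non-effective decision. The hand-written threshold rule merely stands in for the trained MLP or Decision Tree; all names and values are our assumptions.

```java
// Minimal sketch of the DC's inputs and output (names are ours, not the tool's).
public class DialogueClassifier {

    // Features extracted by the FE for a whole dialogue.
    public record DialogueFeatures(
            int collaborativeSkillCount,  // e.g. creative conflict, argumentation
            int totalUtterances,
            int participants,
            int chatMinutes) {}

    // Placeholder rule standing in for the trained MLP / Decision Tree:
    // a dialogue is effective when collaborative skills are used often
    // relative to the total number of utterances.
    public boolean isEffective(DialogueFeatures f) {
        if (f.totalUtterances() == 0) return false;
        double skillRatio = (double) f.collaborativeSkillCount() / f.totalUtterances();
        return skillRatio >= 0.3; // illustrative threshold only
    }
}
```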

The RM produces three reports: Instructor, Learner and Standard. The Instructor report presents information about the whole dialogue (number of users present, total number of contributions, chat duration, collaborative skills used). The Learner report presents specific information about the student's participation (time spent in the chat, number and type of collaborative skills used). This report can also be accessed by the teacher, allowing him/her to verify specific details about a student's performance. The Standard report informs that the subject proposed for the current chat session has not been discussed and, consequently, that the dialogue was classified as non-effective. Thus, whenever the proposed subject has not been discussed, the system produces a Standard report in answer to any request for feedback.

The Bot Agent is a pro-active chatterbot¹ that plays the role of an automatic dialogue coordinator. As such, it has two main goals. First of all, it must help maintain the dialogue focus, interfering in the chat whenever a change of subject is detected by the SC. Two actions can be performed here: (1) the Bot simply writes a message in the environment calling the students back to the subject of discussion; and/or (2) it presents some links to Web sites related to the subject under discussion, in order to bring new insights to the conversation.

Another goal of this agent is to motivate absent students to engage in the conversation (the dialogue log provides the information on who is actively participating in the chat session and who is not). Here, the Bot may act by sending a private message to each absent student inviting them back, or by writing a message in the chat window asking all students to participate and collaborate in the discussion. Finally, the Bot Agent may also answer students' simple questions based on (pre-stored) information about the current subject (acting as a FAQ-bot²).
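A hedged sketch of the Bot Agent's three behaviours described above (maintaining focus, re-engaging inactive users, and answering FAQ-style questions); the interface, method names and message texts are illustrative only.

```java
// Illustrative sketch of the Bot Agent's coordination behaviours.
import java.util.List;
import java.util.Map;

public class BotAgent {

    private final ChatSession chat;
    private final Map<String, String> faq; // pre-stored concept definitions

    public BotAgent(ChatSession chat, Map<String, String> faq) {
        this.chat = chat;
        this.faq = faq;
    }

    // (a) Keep the dialogue focused when the SC signals a change of subject.
    public void onSubjectDrift(String subject, List<String> relatedLinks) {
        chat.broadcast("Let's get back to " + subject + "!");
        relatedLinks.forEach(link -> chat.broadcast("Have a look at: " + link));
    }

    // (b) Invite inactive participants back into the conversation.
    public void onInactiveUser(String user) {
        chat.sendPrivate(user, "We miss your contributions - come join us!");
    }

    // (c) Answer simple questions from pre-stored definitions (FAQ-bot mode).
    public void onQuestion(String user, String concept) {
        String answer = faq.getOrDefault(concept, "I don't know that one yet.");
        chat.sendPrivate(user, answer);
    }

    // Minimal chat interface assumed by this sketch.
    public interface ChatSession {
        void broadcast(String message);
        void sendPrivate(String user, String message);
    }
}
```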

The idea of using a chatterbot in this application comes from the fact that, besides facilitating the process of human computer interaction, chatterbots are also able to influence the user’s behavior [9].

¹ Chatterbots are software agents that communicate with people in natural language.

² A FAQ-bot is a chatterbot whose aim is to answer Frequently Asked Questions.

Fig. 1. System's Architecture

In order to facilitate users' communication, the chat interface (Fig. 2) has a structure similar to other chats found on the Internet. Chat functionalities include: user identification, change of nickname, change of text colour, automatic scrolling, emoticons, and help.

Fig. 2. OXEnTCHÊ-Chat's Interface

The OXEnTCHÊ-Chat's interface is divided into four regions: (1) top bar, containing generic facilities; (2) chat area; (3) message composition facilities; and (4) list of logged-on users. In (1) there are four buttons: exit chat, change nickname, request feedback, and help. In (2) (indicated by arrow 1 in Fig. 2) the user can follow the group interaction. The facilities in (3) allow participants to talk in private to each other, change font colour and insert emoticons to express their feelings during the dialogue. By clicking on the button indicated by arrow 2, participants choose which sentence openers³ they want to use.

OXEnTCHÊ-Chat provides a list of collaborative sentence openers in Portuguese, compiled during the tool's development. This list is based on available linguistics studies [23], as well as on an empirical study of our dialogue corpus (used to train the MLP and Decision Tree classifiers). We carefully analysed the corpus, labelling participants' utterances according to the collaborative skills they indicated. The final list of sentence openers⁴ was based both on their frequency in the dialogue corpus and on our studies of linguistics and collaborative dialogues (e.g. [24]).

³ Sentence openers have been widely used in collaborative analysis, since the dialogue pattern can give us indications of the quality of collaboration [22].

Arrow 3 points to the Bot Agent's name in the logged-users window. We decided to show the Bot as a logged user (Billy) to encourage participants to interact with it. The Bot can answer users' questions based on pre-stored concept definitions, send messages to users who are not actively contributing to the dialogue, or play the role of an automated dialogue coordinator.

Fig. 3 presents the window shown to the teacher when s/he requests feedback. In area 1 the teacher can see which individual dialogue logs are available. In area 2 the instructor can choose between analysing the complete dialogue or the individual performances, by clicking on the buttons labelled “Analisar diálogo completo”⁵ and “Analisar conversa selecionada”⁶, respectively. Area 3 shows the region where feedback reports are presented. This particular example shows an Instructor Report. It contains the following information: total chat duration, number of user contributions, number of participants, number of collaborative skills used, SC analysis and final classification (effective, in this case).

We have also developed an add-in that allows the instructor to access the dialogue analysis even if s/he is not online during the interaction. In order to get feedback reports, the teacher should select the relevant dialogue logs, and click on the corresponding interface buttons to obtain Instructor and/or Learner reports.

⁴ The complete list of Portuguese sentence openers can be found in [20].

⁵ “Analyse the complete dialogue”.

⁶ “Analyse the selected dialogue”.

Fig. 3. Instructor's Online Feedback Report

3.2 Implementation Details

As previously mentioned, we used the Java language to implement the OXEnTCHÊ-Chat. There were several reasons for this choice: the language's support for distributed applications, its portability and scalability, and Java's built-in multithreading mechanism. In addition, in order to achieve satisfactory message-exchange performance, we used Java Sockets to implement the client-server communication.
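As an illustration of this design, the sketch below shows a minimal socket-based chat server in the style described, with one thread per client; the port number and echo protocol are placeholders, not the tool's actual protocol.

```java
// Minimal one-thread-per-client chat server sketch over Java Sockets.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ChatServerSketch {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9090)) { // placeholder port
            while (true) {
                Socket client = server.accept();
                // One thread per client, using Java's built-in multithreading.
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                // In the real tool each line (utterance or feedback request)
                // would be forwarded to the analysis package on the server.
                out.println("received: " + line);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```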

The FE module is a parser based on a grammar which defines the collaborative sentence openers and their most common variations (e.g., spelling mistakes, synonyms and slang). This grammar was written in JavaCC, a popular parser generator which reads a grammar specification and converts it into a Java program.
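The actual grammar is written in JavaCC, which we do not reproduce here; the plain-Java sketch below conveys the same idea of recognising sentence openers together with common variations such as spelling mistakes and slang (the opener list is abridged and illustrative).

```java
// Regex-based stand-in for the JavaCC sentence-opener grammar.
import java.util.List;
import java.util.regex.Pattern;

public class OpenerMatcher {

    // Each pattern covers an opener plus simple spelling/slang variants,
    // e.g. "eu acho" / "axo" for the Portuguese "I think".
    private static final List<Pattern> OPENERS = List.of(
            Pattern.compile("^(eu\\s+)?(acho|axo)\\b", Pattern.CASE_INSENSITIVE),
            Pattern.compile("^concordo\\b", Pattern.CASE_INSENSITIVE),  // "I agree"
            Pattern.compile("^discordo\\b", Pattern.CASE_INSENSITIVE)); // "I disagree"

    public static boolean startsWithOpener(String utterance) {
        return OPENERS.stream()
                .anyMatch(p -> p.matcher(utterance.trim()).find());
    }

    public static void main(String[] args) {
        System.out.println(startsWithOpener("Axo que sim"));    // true
        System.out.println(startsWithOpener("Vamos almoçar?")); // false
    }
}
```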

The ontologies used by the SC were defined in XML, due to its seamless integration with Java and its easy representation of hierarchical data structures. By using these structures, we could represent the level of specificity of each stored domain concept.
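As an illustration, a domain ontology in this style might look like the fragment below, where nesting encodes the level of specificity of each concept; the tag names and concepts are our guesses, not the tool's actual schema.

```xml
<!-- Illustrative ontology fragment; nesting encodes concept specificity. -->
<ontology domain="Intelligent Agents">
  <concept name="agent">
    <concept name="reactive agent"/>
    <concept name="deliberative agent">
      <concept name="BDI agent"/>
    </concept>
  </concept>
  <concept name="environment">
    <concept name="multi-agent environment"/>
  </concept>
</ontology>
```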

4. Evaluating the OXEnTCHÊ–Chat

In order to evaluate the OXEnTCHÊ–Chat, we carried out two experiments. First, a usability test was performed, in order to ensure that the system's interface would really facilitate communication/collaboration between users. The tool was refined based on the results of this experiment, and a second experiment was carried out. This time, the goal was to assess the quality of the feedback provided by the tool. Both experiments were carried out with two different groups of participants. At the time of the experiments, the OXEnTCHÊ–Chat did not yet include the Bot Agent; it was integrated later, as a refinement suggested by the results obtained. The experiments and their respective results are described below.

4.1 Evaluating the Tool's Usability

In order to assess the tool's usability we first performed a preliminary investigation, which had two main goals: (1) testing the OXEnTCHÊ–Chat's performance with several users; and (2) discovering any remaining bugs. This preliminary study was carried out at the Federal University of Pernambuco (UFPE), with ten Computer Science postgraduate students. They were located in different laboratories, and interacted for 30 minutes using the tool. Three observers were logged in, passively observing the dialogue. After the experiment was finished, the observers analysed the dialogue log, and documented all participants' comments.

In this first experiment, no system bugs were found. All participants commented that the system was fairly easy to use. However, they suggested some possible improvements. These included modifying the position of some interface buttons and changing their names in order to maintain consistency. Participants remarked that using different text colours was a good idea, and suggested that private messages should be highlighted. All suggestions for improvement were considered, and the interface was refined accordingly, resulting in the layout shown in Fig. 2.

Once the new version of OXEnTCHÊ–Chat was ready, we conducted a usability test at the Federal Rural University of Pernambuco (UFRPE), where we observed a group of ten undergraduate Computer Science students interacting through the tool during a class. The participants and their lecturer were all in the same laboratory, and were observed by three researchers. The task at hand was to discuss a proposal for an electronic magazine. At the beginning of the experiment, participants used OXEnTCHÊ-Chat for a few minutes without knowing any details about the task; during this period, the observers could check whether there were any technical problems. Since no problems were found, the teacher then explained the task. After that, the participants went back to using the tool, keeping silent throughout the experiment (which lasted one hour and a half). The observers took several notes about the technical conditions of the test (equipment, network load, system speed), and about the participants (number of users, comments provided, and which questions were asked).

At the end of the chat interaction, both the lecturer and the students were asked to fill in an evaluation questionnaire, based on the guidelines suggested in [25]. The questionnaire contained questions about the users' identification and background (gender, age, experience with computers and chats) as well as about the chat usage (difficulties, suggestions for improvement). All questionnaires were filled in and handed back to the observers for later analysis.

All participants considered the system's usability excellent. The difficulties reported were related to reading messages and identifying users. This is due to the use of nicknames, as well as to the speed of communication (and the several parallel conversation threads) that is so common in synchronous communication.

4.2 Evaluating the Quality of the Feedback Provided

In order to assess the quality of the feedback provided by the system, we carried out an evaluation experiment with two different groups of participants. The main objective at this point was to validate the feedback and the dialogue classification provided by OXEnTCHÊ–Chat.

The first experiment was performed at UFRPE, with the same group that participated in the usability evaluation experiment. This time, learners were asked to discuss their points of view about a face-to-face class that had previously been given by the lecturer. The subject under discussion was Interface Design (which gave rise to the need to create a new domain ontology in the OXEnTCHÊ–Chat specifically for this test). Before the participants started to interact, the observers explained to them how to obtain the tool's feedback reports.

Participants interacted for forty minutes, taking the discussion seriously (as reported by the observers). The lecturer also participated in the discussion, coordinating the dialogue. Participants requested individual feedback both during the interaction and just after its end. Thus, they accessed both the just-in-time feedback and the report on their general performance. The lecturer requested the Instructor Report and also accessed various Learner Reports.

At the end of the test, the participants from UFRPE filled in another evaluation questionnaire, and remarked that using the chat was very enjoyable, since the tool was quick, easy to use, and provided interesting just-in-time feedback. They pointed out that a more detailed feedback report, including specific tips on how to improve their participation in the interaction, would be useful. Nine out of the ten students rated the feedback as good, while one rated it as regular, stating that it was too general.

The second experiment was carried out at UFPE. Five undergraduate Computer Science students (with a previous background in Artificial Intelligence) participated in it. The participants were asked to log on to OXEnTCHÊ–Chat and discuss Intelligent Agents. The task was explained individually to each participant, because they were located in different laboratories. Due to this physical separation, it was not possible to observe the users' reactions first hand (as was the case at UFRPE). Participants interacted for twenty-five minutes. The lecturer was not present during the experiment, and thus had to use the off-line feedback add-in in order to obtain the Instructor and Learner reports. She was able to assess the quality of the feedback provided by analysing the dialogue logs and comparing them to the system's reports.

At the end of their dialogue, participants filled in the same evaluation questionnaire that was distributed at UFRPE. Out of the five participants, two rated the feedback as excellent, two rated it as good, and one rated it as weak.

Both students and lecturers (in the two experiments) suggested several improvements to the OXEnTCHÊ–Chat. In particular, they suggested that the feedback reports could be improved by including more specific tips on how to improve one's participation in the collaborative dialogue. The lecturers agreed with the system's feedback reports, and remarked that such a tool would be very helpful for teachers to assess their students and to reflect on their pedagogical practices. Table 1 summarises the tests and the results obtained with the OXEnTCHÊ–Chat.

The results obtained in all experiments indicated that the tool had achieved its main goal, both helping teachers and students to better understand how the interaction was evolving, and also helping each student to get a better picture of his/her participation. Thus, the results obtained were considered very satisfactory. They did, however, bring to light the need for more specific feedback, and also for other ways of motivating students to collaborate more actively. As a consequence of these findings, the Bot Agent was developed and integrated into our tool.

Table 1 – Summary of Tests and their Results

Test        | Location | Participants | Duration   | Teacher Present | Result
Preliminary | UFPE     | 9            | 30 min     | No              | Good usability, suggestions for refinement
Usability   | UFRPE    | 10           | 1 h 30 min | Yes             | Excellent usability
Feedback 1  | UFRPE    | 10           | 40 min     | Yes             | Good (9), Regular (1)
Feedback 2  | UFPE     | 5            | 25 min     | No              | Excellent (2), Good (2), Weak (1)

5. Conclusions and Further Work

Recent research in CSCL has been investigating ways to mitigate the problems of students' feelings of isolation and lack of motivation, common in Virtual Learning Environments. In order to tackle these issues, several Collaborative Learning Environments monitor the interaction and provide feedback specific to users' actions or to the whole interaction.

In this paper, we presented the OXEnTCHÊ–Chat, a tool that tackles the above problems. It provides a chat tool coupled with an automatic dialogue classifier which analyses on-line interaction and provides just-in-time feedback to both teachers and learners. The system also includes a chatterbot to automatically coordinate the interaction. This combination of techniques and functionalities is a novel one. OXEnTCHÊ–Chat has been evaluated with two different groups, and the results obtained are very satisfactory, indicating that this approach should be taken further.

At the time of writing, we are working on improving the Bot Agent by augmenting its domain knowledge and skills, as well as on evaluating its performance. In the near future we intend to improve OXEnTCHÊ–Chat in three different aspects: (1) to include other automatic dialogue classifiers (e.g. other neural network models); (2) to improve the feedback provided to teachers and learners, making it more specific; and (3) to improve the Bot's capabilities, so that it can contribute more effectively to the dialogue, by, for example, playing a given role (e.g. tutor, collaborator) in the interaction.

6. References

1. Wenger, E.: Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge. Morgan Kaufmann (1987) 486p

2. Wessner, M. and Pfister, H.: Group Formation in Computer Supported Collaborative Learning. In: Proceedings of Group'01, ACM Press (2001) 24-31

3. Goldberg, M. W.: Using a Web-Based Course Authoring Tool to Develop Sophisticated Web-Based Courses. Available at: http://www.webct.com/service/ViewContent?contentID=11747. Accessed 15/09/2003

4. Blackboard, Inc. Available at: http://www.blackboard.com. Accessed 10/10/2003

5. Oeiras, J. Y. Y. and Rocha, H. V.: Aprendizagem online: ferramentas de comunicação para colaboração. In: Workshop de Interface Humano-Computador, 5 (2002) 226-237

6. Issroff, K. and Del Soldato, T.: Incorporating Motivation into Computer-Supported Cooperative Learning. In: Brna, P., Paiva, A. and Self, J. (eds.) Proceedings of the European Conference on Artificial Intelligence in Education, Edições Colibri (1996) 284-290

7. Tedesco, P. C. R. and Gomes, A. S.: AMADEUS: A Framework to Support Multi-Dimensional Learner Evaluation. In: Kwan, R., Jia, W., Chan, J., Fong, A. and Cheung, R. (eds.) Web-Based Learning: Men & Machines. Hong Kong (2002) v. 1, 230-241

8. Dillenbourg, P.: Introduction: What do you mean by Collaborative Learning? In: Dillenbourg, P. (ed.) Collaborative Learning: Cognitive and Computational Approaches. Advances in Learning and Instruction Series, Elsevier Science (1999) 1-19

9. Chou, C. Y., Chan, T. W. and Lin, C. J.: Redefining the learning companion: the past, present, and future of educational agents. Computers & Education 40, Elsevier Science (2003) 255-269

10. Jermann, P., Soller, A. and Mühlenbrock, M.: From Mirroring to Guiding: a Review of the State of the Art Technology for Supporting Collaborative Learning. In: Proceedings of CSCL'2001 (2001) 324-331

11. Galvão, A., Neves, A. and Barros, F.: Persona-AIML: Uma Arquitetura para Desenvolver Chatterbots com Personalidade. In: IV Encontro Nacional de Inteligência Artificial, Anais do XXIII Congresso SBC, v. 7, Campinas, Brazil (2003) 435-444

12. Rosatelli, M. and Self, J.: A Collaborative Case Study System for Distance Learning. International Journal of Artificial Intelligence in Education, 12 (2002) 1-25

13. González, M. A. C. and Suthers, D. D.: Coaching Collaboration in a Computer-Mediated Learning Environment (2002). Available at: http://citeseer.nj.nec.com/514195.html. Accessed 12/12/2003

14. Mitchell, T. M.: Machine Learning. McGraw-Hill, New York (1997) 414p

15. Barros, B. and Verdejo, M. F.: Analysing student interaction processes in order to improve collaboration. The DEGREE Approach. International Journal of Artificial Intelligence in Education, 11 (2000) 221-241

16. Leontiev, A. N.: Activity, Consciousness, and Personality. Prentice-Hall (1978)

17. Soller, A., Wiebe, J. and Lesgold, A.: A Machine Learning Approach to Assessing Knowledge Sharing During Collaborative Learning Activities. In: Proceedings of Computer Support for Collaborative Learning 2002 (2002) 128-137

18. Tedesco, P.: MArCo: Building an Artificial Conflict Mediator to Support Group Planning Interactions. International Journal of Artificial Intelligence in Education, 13 (2003) 117-155

19. Rao, A. S. and Georgeff, M. P.: Modeling rational agents within a BDI-architecture. In: Fikes, R. and Sandewall, E. (eds.) Proceedings of Knowledge Representation and Reasoning (KR&R-91), Morgan Kaufmann, San Mateo, CA (1991) 473-484

20. Vieira, A. C. H.: Classificando Automaticamente Diálogos Colaborativos On-line com a OXEnTCHÊ–Chat. MSc Dissertation in Computer Science, UFPE (2004) 121p

21. Searle, J.: What is a speech act? In: Giglioli, P. (ed.) Language and Social Context. Penguin Books (1972) 136-154

22. McManus, M. M. and Aiken, R. M.: Monitoring Computer-Based Collaborative Problem Solving. Journal of Artificial Intelligence in Education, 6(4) (1995) 307-336

23. Marcuschi, L. A.: Análise da Conversação. Editora Ática (2003) 94p

24. Robertson, J., Good, J. and Pain, H.: BetterBlether: The Design and Evaluation of a Discussion Tool for Education. International Journal of Artificial Intelligence in Education, 9 (1998) 219-236

25. Robson, C.: Real World Research: A Resource for Social Scientists and Practitioner-Researchers. Blackwell (1993)
