On effectiveness of distance learning using LANCA

Guy Gouardères*, Claude Frasson**

* Université de Pau, IUT Informatique, Bayonne, France
** Université de Montréal, Département d'informatique et de recherche opérationnelle, 2920 Chemin de la Tour, Montréal, H3C 3J7, Québec, Canada
Phone: 514-343-7019  Fax: 514-343-5834  Email: {frasson, gouarderes}@iro.umontreal.ca

Abstract

LANCA (Learning Architecture based on Networked Cognitive Agents) relates to Intelligent Tutoring Systems (ITS) on the Web and, more particularly, to a system and a method for assisting a learner involved in a distance learning situation on the Internet using several types of intelligent agents. It also relates to knowledge mining, by discovering on the Internet which help appears useful in a given context. In this paper, we present the prime issues in evaluating how several models of cognitive and intelligent agents work together to improve convivial distance education over a virtual learning network.

I-Experimental background

The major product on the market is VLN (Virtual Learning Network), a collaborative network dedicated to the intelligent application of telecommunications technology. VLN has no agents at all, mobile or intelligent, and can be viewed as regular telelearning using discussion forums and chat rooms [http://telcom.coos.k12.or.us/vlncourses/tutor.htm]. In contrast, other innovative distance learning products available today on the Web try to combine different types of intelligent agents with mobile dissemination of effective synchronous and asynchronous learning. Among them, the Web-Based Intelligent Tutor [Eliot, 1997] and the Web Agent-Based methodology [Petrie, 1996] appear somewhat less accomplished than the current LANCA prototype in terms of actual performance.

Our present system (LANCA) uses the Internet as a constructivist learning environment and aims to provide intelligent assistance to improve both the quality of training and the distribution of knowledge in a distance learning situation. The assistance is based on various intelligent agents which act collaboratively to support learning at different steps. A first agent (pedagogical agent), close to the learner, detects his difficulties using a learner model and provides local explanations when needed or on request. A second agent (dialog agent) provides access to other explanations or learners available on the Web, in synchronous or asynchronous mode. Taking into consideration all possible helps, a third agent (negotiating agent) is in charge of finding and selling explanations according to a market of requests and helps. A fourth agent (moderator) is in charge of determining which explanation was finally useful, so that it can serve as a permanent source of explanation for future learners (see Fig. 1; a minimal code sketch of these four roles is given below).

The goal of this paper is to present an overview of the resulting performance in real tests between two different sites, in Montreal (Canada) and Bayonne (France). We will compare the response times at the two sites. Technical details of the achieved architecture can be found in Frasson et al. [1998].

II-How to evaluate the efficiency of the improved learning process in a mixed society of human and artificial agents?

We elaborated on previous findings by exploring how the various help and explanation resources resulting from the agents' activity can improve the performance of learners during individual or collaborative problem solving.
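To fix ideas before the evaluation, here is a minimal Java sketch of the four agent roles introduced in Section I. All type and method names are illustrative assumptions, not the LANCA implementation itself (the actual prototype was built on IBM Aglets, as the trace in Fig. 2 shows).

    // Hypothetical Java sketch of the four LANCA agent roles.
    // Help, LearnerModel and all method names are assumptions for illustration.

    import java.util.List;

    interface PedagogicalAgent {
        // Detects difficulties from the learner model and offers local explanations.
        Help detectDifficultyAndExplain(LearnerModel model);
    }

    interface DialogAgent {
        // Opens synchronous or asynchronous access to remote explanations or peers.
        List<Help> findRemoteHelp(String concept, boolean synchronous);
    }

    interface NegotiatingAgent {
        // Matches a request for help against the market of offered explanations.
        Help negotiate(String request, List<Help> market);
    }

    interface ModeratorAgent {
        // Decides which explanations proved useful and archives them for reuse.
        void archiveIfUseful(Help help, double observedUtility);
    }

    // Minimal supporting types so the sketch compiles.
    class Help { String concept; String content; }
    class LearnerModel { int score; String level; }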
We tested two main hypotheses using a multi-agent problem solving simulation testbed: (1) an agent decides to present a help as useful or profitable only if it reduces the overall problem-solving effort; (2) an agent can use its own evaluation context to intervene or keep silent when assessing the individual or collective learner performance on a given problem.

III-Description of the model to evaluate performance

In this work, we adopt a definition of performance evaluation in accordance with the critical goals of LANCA: first, individual versus collective management of aids and, second, synchronous versus asynchronous collaborative problem solving.

[Fig. 1: General architecture of LANCA. A Service Agent with its explanation kernel serves Negotiating Agents (1..n) through a market of supplied explanations; on the learner side, Dialog Agents (1..n) connect to Pedagogical Agents (1..n), each with an Analyzer.]

This performance measure assumes that the agents are working together as a team (Actors) and are attempting to maximize the performance of their common goal, the learner. In this view, performance is the difference between an objective measure of the utility of the different types and sources of help, and their induced cost measured in terms of the collaborative agent effort required to be effective in a distance learning situation.

The learner's mastery in a problem solving situation is measured using a single parameter, the SCORE. This SCORE is calculated from the parameters of the learner profile (Fig. 2) and takes into account whether or not the learner's performance improves with a given aid (as a merely "useful" help or, at the other extreme, a "decisive" one). On the one hand, the calculation of the communicative effort, which specifies the cost of evaluating an elementary user interaction for resolving a difficulty, is coupled with the calculation of the retrieval effort needed to get the right aid at the right time, i.e., the correct behavior of a human (teacher, learner, ...) or software agent (pedagogical, tutor, ...). On the other hand, the profitability of an agent intervention is evaluated by taking into account the resulting evolution of the learner profile.

IV-Selected protocol and experiments

According to the previous model of agent performance evaluation, we experimented within two resource ranges: low, for agents with limited resources (staying on the learner station), and high, for agents with unlimited resources (located on the server station, such as the Negotiating or Moderator Agents). We selected a sequence of learner manipulations involving all the functional capabilities of the LANCA system, all the agents, and all the types of exercises. In particular, we identified three types of exercises: multiple choice (MCE), where the solution is among a list of exclusive choices; multiple multiple choice (MMCE), where the solution is a sequence of responses with binary exclusive choices; and several multiple choices (ESCM), involving several steps, each with more than two choices of responses. Each type of exercise can be performed using a different strategy: Tutor, Companion, Troublemaker, or Book. A sketch of the resulting performance measure is given below.
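Before turning to the results, the utility-minus-cost reading of agent performance from Section III can be made concrete. The paper gives no exact formula, so in the following Java sketch the utility values assigned to "useful" and "decisive" helps, the cost components, and the SCORE update rule are all assumptions for illustration.

    // Speculative sketch of the Section III performance measure:
    // profit of an intervention = utility of the help minus the
    // collaborative effort spent producing it.

    class AgentPerformance {

        // Cost of evaluating one elementary learner interaction plus the
        // cost of retrieving the right aid at the right time.
        static double interventionCost(double communicativeEffort, double retrievalEffort) {
            return communicativeEffort + retrievalEffort;
        }

        // Profitability of an intervention: objective utility of the help
        // (assumed here: 1.0 for a "decisive" help, 0.5 for a merely
        // "useful" one, 0.0 otherwise) minus its induced cost.
        static double profit(double helpUtility, double cost) {
            return helpUtility - cost;
        }

        // Assumed SCORE update: the learner SCORE increases only when the
        // intervention is profitable, mirroring hypothesis (1) above.
        static int updateScore(int score, double profit) {
            return profit > 0 ? score + (int) Math.round(100 * profit) : score;
        }

        public static void main(String[] args) {
            double cost = interventionCost(0.1, 0.2);
            double p = profit(0.5, cost);             // a "useful" help
            System.out.println(updateScore(1020, p)); // prints 1040
        }
    }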
V-Experimental results

We determined that a sample size of 50 to 100 dialogues per experimental condition is adequate for determining whether a collaborative scenario between agents affects performance. To collect these samples, we tested 100 dialogues with the appropriate parameter settings, yielding a performance distribution for each strategy and a set of tested assumptions. The comparison of problem-solving dialogue strategies is presented in the annex (Table 1).

The selected example illustrates the differences in agent timing performance using the same strategies: first, for a distant learner on the server (in Montreal, Canada); second, for a distant learner acting at the same time on the user station (in Bayonne, France). We were pleasantly surprised by how little the performance ratio decreased while distance and network traffic obviously increased. This is due to the transfer to the local station of the effective "intelligent" guidance of the learner. In fact, the local session in Bayonne nearly reaches the same performance as the one in Montreal. Another good surprise is that, using a centralized trace on the server in Montreal, we can dynamically track the precise time and context in which a help became "useful" or "decisive" for a given problem-solving situation, according to the learner's profile and behavior (see Fig. 2).

[Creating ibm.aglets.tahiti.Tahiti]
********** Pedagogical Agent: Run **********
********** Negociator Agent: Receive Service Agent ID: e128816373c9066b **********
********** Service Agent: Receive: Get course **********
********** Service Agent: Course Name: howToCreateItsOwnPageOnTheInternet **********
********** Service Agent: Receive: Kernel: explanation **********
********** Service Agent: Concept: File menu: Save **********
********** Service Agent: Strategy: tutor **********
********** Service Agent: Level: weak **********
>>>>>>>>>>> Service Agent: S C O R E = 1020 <<<<<<<<<<<
********** Service Agent: Receive: Kernel: explanation **********
********** Service Agent: Concept: File menu: Save as **********
********** Service Agent: Strategy: companion **********
********** Service Agent: Level: weak **********
>>>>>>>>>>> Service Agent: S C O R E = 1054 <<<<<<<<<<<
********** Negociator Agent: Receive: Search market: explanation **********
********** Negociator Agent: search local database **********
********** Negociator Agent: search market: explanation **********
********** Negociator Agent: Receive: Asynchronous explanation **********
********** Negociator Agent: Receive: Search market: learner **********
********** Negociator Agent: search market: learner **********
********** Negociator Agent: Receive: Synchronous explanation **********
********** Negociator Agent: concept: File menu: Save **********
********** Negociator Agent: Receive: Synchronous explanation **********
********** Negociator Agent: concept: File menu: New document **********
********** Service Agent: Receive: Kernel: explanation **********
********** Service Agent: Concept: <TITLE> tag **********
********** Service Agent: Strategy: tutor **********
********** Service Agent: Level: average **********
>>>>>>>>>>> Service Agent: S C O R E = 1108 <<<<<<<<<<<
********** Service Agent: Receive: Kernel: explanation **********
********** Service Agent: Concept: button 8 **********
********** Service Agent: Strategy: tutor **********
********** Service Agent: Level: average **********
>>>>>>>>>>> Service Agent: S C O R E = 1128 <<<<<<<<<<<
....

Fig. 2: Tracking the learner SCORE change according to more and more decisive helps.
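The centralized trace of Fig. 2 suggests a simple way to detect when a help becomes "useful" or "decisive": scan the log for SCORE lines and classify the jumps between consecutive values. The following Java sketch is hypothetical; the line format is taken from Fig. 2, but the jump threshold and the classification labels are assumptions.

    // Hypothetical miner for the Fig. 2 trace: extract SCORE values and
    // flag the helps after which the SCORE jumps most. The 30-point
    // threshold is an assumed cut-off for illustration only.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    class TraceMiner {
        // Matches the ">>> Service Agent: S C O R E = 1020 <<<" lines above.
        private static final Pattern SCORE_LINE =
            Pattern.compile("S\\s*C\\s*O\\s*R\\s*E\\s*=\\s*(\\d+)");

        static List<Integer> extractScores(List<String> traceLines) {
            List<Integer> scores = new ArrayList<>();
            for (String line : traceLines) {
                Matcher m = SCORE_LINE.matcher(line);
                if (m.find()) scores.add(Integer.parseInt(m.group(1)));
            }
            return scores;
        }

        // A help is taken as "decisive" when the SCORE jump it produces
        // exceeds the threshold, and merely "useful" for any positive jump.
        static String classifyJump(int before, int after) {
            int jump = after - before;
            if (jump > 30) return "decisive";
            return jump > 0 ? "useful" : "neutral";
        }

        public static void main(String[] args) {
            List<String> trace = List.of(
                ">>>>>>>>>>> Service Agent: S C O R E = 1020 <<<<<<<<<<<",
                ">>>>>>>>>>> Service Agent: S C O R E = 1054 <<<<<<<<<<<",
                ">>>>>>>>>>> Service Agent: S C O R E = 1108 <<<<<<<<<<<");
            List<Integer> s = extractScores(trace);
            for (int i = 1; i < s.size(); i++)
                System.out.println(classifyJump(s.get(i - 1), s.get(i)));
            // prints "decisive" twice (jumps of 34 and 54 points)
        }
    }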
Conclusion

These first experiments have focused on the decisions involved in evaluating whether a given agent (human or artificial) brings up useful or decisive helps just in time. Currently, we are revisiting our previous definition of performance evaluation in distance collaborative problem solving. This performance measure must be completed and improved by adding finer-grained agent parameters, such as the immediate benefit of automatically changing strategy (tutor, companion, or troublemaker) within the pedagogical agent when the dialog agent connects to different types of external help (forum, direct video chat room, etc.).

The critical question that remains is how the moderator agent goes about calculating the adequacy of agent interventions and evaluating the profitability of the prescribed helps. A first solution has been tested. The moderator agent receives successful explanations from all users connected to the network, via each negotiating agent, and compiles statistical data in order to generate a signal indicating the utility of the trial messages. The corresponding trial explanations can then be stored. The moderator agent may remove trial explanation messages which were found not to be used, or order trial explanation messages which were found useful in explaining particular types of subject matter; their ranking or importance is increased so that the most relevant trial explanation messages are provided first. In future work, the moderator agent may include a screening procedure for determining whether a proposed new explanation is worthy of being entered into the reminding kernel of explanations in the database. A speculative sketch of this bookkeeping is given below.
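The first solution described above lends itself to a small illustration. In the following Java sketch (speculative; all class and method names are assumed rather than taken from LANCA), the moderator compiles per-explanation usage statistics, drops trial messages never found useful, and ranks the rest by decreasing utility so that the most relevant are proposed first.

    // Speculative sketch of the moderator agent's bookkeeping: usage
    // statistics per trial explanation, pruning, and ranking by utility.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    class TrialExplanation {
        final String concept;
        int timesProposed = 0;
        int timesUseful = 0;
        TrialExplanation(String concept) { this.concept = concept; }
        // Signal indicating the utility of the trial message.
        double utility() {
            return timesProposed == 0 ? 0.0 : (double) timesUseful / timesProposed;
        }
    }

    class ExplanationModerator {
        private final List<TrialExplanation> trials = new ArrayList<>();

        // Called each time a negotiating agent reports a trial explanation.
        void record(TrialExplanation t, boolean wasUseful) {
            t.timesProposed++;
            if (wasUseful) t.timesUseful++;
            if (!trials.contains(t)) trials.add(t);
        }

        // Remove never-used trials and order the rest by decreasing utility,
        // so the most relevant explanations are served first.
        List<TrialExplanation> rankedKernelCandidates() {
            trials.removeIf(t -> t.timesUseful == 0);
            trials.sort(Comparator.comparingDouble(TrialExplanation::utility).reversed());
            return trials;
        }
    }

A screening step for admitting new explanations into the kernel, as envisaged in the conclusion, would naturally sit in front of record(), filtering which trial explanations are tracked at all.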
2, "change strategy to companion" ) Current parallel actions (in Montreal & Bayonne) Login First Netscape message (agent load) Pedagogical Agent<strategy Tutor> Course selection"howToCreateItsOwnPage… Button<Exercises according to..> Question 1.18 <&amp> choice(correct) Select exercise 1.2 Answer reply 2 et 3 <help> request Dialog agent Select exercise 1.3 Answer.3 (incorrect) <help> request <Explanation> from Kernel DB Answer.2 (correct) Change of strategy (Companion) Select exercise 1.10 -------------------------------------Select exercise 1.7 Incorrect choice <Explanation> from the Market Correct answer Change strategy (TroubleMaker) Select exercise 1.14 Answer. 1 (correct) Exit . Delay in Montreal (at 10 h A.M.) regular 15 sec 14 sec regular Delay in Bayonne(at 4 h P.M.) regular 20 sec 17 sec regular regular regular regular Regular regular 10 sec 7 sec Regular Regular Regular 4 sec Regular Regular Regular ---------------------Regular regular regular regular regular regular 15 sec 30 sec Regular Regular Regular 7 sec Regular Regular Regular ---------------------Regular 5 sec Regular Regular Regular Regular 12 sec Regular Regular Regular Regular