Supporting Privacy in E-learning with Semantic Streams

Lori Kettel, Christopher Brooks, Jim Greer
ARIES Laboratory (Advanced Research in Intelligent Educational Systems)
Computer Science Department, University of Saskatchewan, Saskatoon, SK, Canada
Copyright © 2004.

Overview of Presentation
1. Background – e-learning, Computer Supported Collaborative Learning (CSCL), user modelling, AI-Ed, and learner privacy
2. Empirical work – user views on privacy in e-learning
3. Our systems – an architectural solution to the need, and an implementation that supports this solution
4. Work in progress – integrating privacy with a generic user modelling component; is this impossible?

Why are we here
• We're interested in intelligent e-learning – building learning environments that:
  – understand a user's goals
  – understand a user's background knowledge
  – are able to automatically adapt content to fit a user
  – are able to connect users into meaningful collaboration
• These learning environments are distributed in nature
  – Different applications support different activities
  – e.g. an LCMS, live chat forums, asynchronous forums, electronic submission/feedback systems, etc.

Steps along the way
• One such application is I-Help, a public forum discussion system and instant messenger
  – Available to all undergraduate computer science students at the University of Saskatchewan
  – Allows learners to request and provide help
  – Learners indicate their proficiency in a subject, and I-Help provides expertise location
  – Each user has their own personal agent
  – The agent is designed to protect and share information

Initial Privacy Server (PEST)
• An application agent that sits between the application datastores (the I-Help databases) and users' personal agents to control the flow of information
• Personal agents tell PEST what information is allowed to be released
• Personal agents request information about others through PEST
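The PEST mediation just described can be sketched in code. This is a minimal illustration only: the class names, policy vocabulary ("reveal", "alias", "conceal"), and method signatures are assumptions for exposition, not the actual I-Help implementation.

```python
# Hypothetical sketch of PEST as an intermediary between the
# application data store and personal agents. All names are illustrative.

class PersonalAgent:
    """Holds a learner's release policy: which fields may be revealed,
    and whether they are released under the learner's name or an alias."""
    def __init__(self, user_id, policy):
        self.user_id = user_id
        # e.g. {"proficiency": "alias", "email": "conceal"}
        self.policy = policy

class PEST:
    """Controls the flow of information: nothing leaves the data store
    unless the subject's personal agent has permitted it."""
    def __init__(self, datastore):
        self.datastore = datastore
        self.agents = {}

    def register(self, agent):
        self.agents[agent.user_id] = agent

    def request(self, requester_id, subject_id, field):
        """A requester asks for one field about a subject; PEST consults
        the subject's declared policy before releasing anything."""
        agent = self.agents.get(subject_id)
        if agent is None:
            return None
        rule = agent.policy.get(field, "conceal")
        if rule == "conceal":
            return None
        value = self.datastore[subject_id][field]
        if rule == "alias":
            # Release the value, but hide the subject's identity.
            return {"subject": f"user-{abs(hash(subject_id)) % 1000:03d}",
                    field: value}
        return {"subject": subject_id, field: value}

# Usage: Bob asks PEST about Alice.
store = {"alice": {"email": "a@usask.ca", "proficiency": "expert"}}
pest = PEST(store)
pest.register(PersonalAgent("alice",
                            {"proficiency": "alias", "email": "conceal"}))
print(pest.request("bob", "alice", "proficiency"))  # released under an alias
print(pest.request("bob", "alice", "email"))        # None: concealed
```

The key design point mirrored here is that requests never reach the data store directly; the policy check is centralized in the intermediary.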
[Architecture diagram: the PEST server mediating between the I-Help data store and personal agents]

Evaluation of PEST – in learner awareness task
• 32 participants – 18 computer science and 14 non-science students
• Initial privacy preferences:
  – More than 50% would reveal information using an alias or as part of a summary
  – Three would not hesitate to reveal all types of information under their own name
  – Three (all non-science students) would conceal absolutely all information about themselves
  – No significant difference between computer science and non-science students

Evaluation – Final Questionnaire
• 78% felt awareness of other learners may or would give a better sense of community
• 78% felt it may or would make the system more personalized
• 88% felt the awareness tools may or would be beneficial
• 88% felt the awareness tools may or would be a privacy risk
• 88% felt the potential benefits may or would be greater than the privacy risk

PEST Pros and Cons
• Pros
  – Allowed users fine-grained control over their information
  – Promoted awareness of other users in the system to facilitate learning and a sense of community
• Cons
  – Required detailed ontological domain knowledge
  – Was developed to work only within I-Help

Current e-learning landscape
• We currently have a number of custom-built e-learning applications deployed within our program:
  – I-Help Discussions: an asynchronous web-based discussion forum
  – I-Help Chat: a real-time topic-based chat built around IRC
  – E-handin: an electronic hand-in system for assignments
  – Learning Content Management System: a delivery tool for course content (learning objects) and quizzes associated with the content
• These systems are distributed both in deployment (machines) and in production (different development teams)
  – How can we model a learner in such a system?
Massive User Modelling System (MUMS)
• To address these needs we have created a framework (MUMS) that facilitates the collection and distribution of learner modelling information
• The central artifact of the framework is the opinion:
  – objective data about a user
  – relevant from the perspective of whoever created it
  – time-dependent in nature (when was it valid?)
• Opinions are not constrained to any particular ontology or vocabulary
  – different producers of modelling information can use whatever taxonomies and vocabularies they find expressive

MUMS – 3 Entities
• Opinions are used by three computational entities:
  – Evidence producers: observe user interaction with an application, and produce and publish opinions about the user
  – Modellers: are interested in acting on opinions about the user, usually by reasoning over them to create a user model (e.g. the tutor!)
  – Broker: acts as an intermediary between producers and modellers, providing routing and quality-of-service functions for opinions
• From these, we can derive a fourth entity of interest (the adaptor pattern):
  – Filter: acts as broker, modeller, and producer of opinions. By registering for and reasoning over opinions from producers, a filter can create higher-level opinions.

MUMS – Architectural Overview
[Architecture diagram: producers publishing opinions through the broker to modellers and filters]

MUMS – Benefits of architecture
• Amongst other benefits, this architecture reduces the coupling between producers and modellers and allows new entities to be added to the system dynamically
  – New grad students == new data collection/production needs
  – Maintains system coherence
  – New ideas get real usage data immediately!
• But with this reduced coupling comes an important question: how can we support privacy in a system designed to be neutral as to the information it transmits?
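The producer/broker/modeller flow, with a filter wired in as both subscriber and publisher, can be sketched as follows. This is a toy illustration under stated assumptions: the Opinion fields and the register/publish API are invented for this sketch and are not the real MUMS interfaces.

```python
# Illustrative publish/subscribe sketch of the MUMS entities.
# Names and signatures are assumptions, not the actual framework.
from dataclasses import dataclass
import time

@dataclass
class Opinion:
    user: str          # who the opinion is about
    creator: str       # opinions are relevant from their creator's perspective
    statement: dict    # free-form: no fixed ontology or vocabulary is imposed
    timestamp: float   # opinions are time-dependent (when was this valid?)

class Broker:
    """Routes opinions from evidence producers to registered modellers."""
    def __init__(self):
        self.subscribers = []

    def register(self, callback, predicate=lambda op: True):
        # Modellers register interest; the predicate gives simple routing.
        self.subscribers.append((callback, predicate))

    def publish(self, opinion):
        for callback, predicate in self.subscribers:
            if predicate(opinion):
                callback(opinion)

class PrivacyFilter:
    """The fourth entity: registers like a modeller, republishes like a
    producer, forwarding only fields the learner has not blocked."""
    def __init__(self, upstream, downstream, blocked_fields):
        self.downstream = downstream
        self.blocked = blocked_fields
        upstream.register(self.on_opinion)

    def on_opinion(self, op):
        redacted = {k: v for k, v in op.statement.items()
                    if k not in self.blocked}
        if redacted:
            self.downstream.publish(
                Opinion(op.user, "privacy-filter", redacted, op.timestamp))

# Wire a producer -> broker -> filter -> broker -> modeller chain.
raw, public = Broker(), Broker()
PrivacyFilter(raw, public, blocked_fields={"grade"})
seen = []                       # stand-in for a modeller's inbox
public.register(seen.append)
raw.publish(Opinion("alice", "quiz-tool",
                    {"topic": "recursion", "grade": 0.4}, time.time()))
print(seen[0].statement)  # {'topic': 'recursion'} -- 'grade' was redacted
```

Because the filter speaks the same opinion interface on both sides, producers and modellers need no changes when one is inserted, which is exactly the low-coupling property the slides claim.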
Privacy Agents as Filters
• If we add a filter to the MUMS broker aimed specifically at supporting privacy, then this filter needs to understand the user modelling information it receives
• Intelligent personal agents seem a natural paradigm choice
  – Agents can hold the knowledge of what information should be allowed to pass (ontologically)
  – Agents can interact with the learner, keeping them apprised of who is receiving what modelling information

Challenges
• Learners are not in complete control of their usage information; institutional control is also important
  – Using modelling information for evaluation
  – Providing reduced functionality for the learning environment
• The effects of missing modelling data could lead to chaos
• There is currently no method of understanding how information is being used, only by whom
  – Can we further this vision of control with trust networks, and begin to assert how data will be used?

For more information
Jim Greer
Director, ARIES Laboratory
University of Saskatchewan
Saskatoon, SK, Canada
greer@cs.usask.ca
http://www.cs.usask.ca/research/research_groups/aries/

Evaluation – Post Privacy Preferences

              Reveal Less   Same   Reveal More
CS                 2          3        13
Non-Science        2          4         8
Total              4          7        21