Practical Ethical Models in an Educational Age of Learning Analytics

James E. Willis, III, Ph.D., Educational Assessment Specialist
Matthew D. Pistilli, Ph.D., Research Scientist
Office of Institutional Research, Assessment, and Effectiveness
July 2014

Ethics in Learning Analytics
• Ethics in learning analytics are often treated just like the rest of technology: right vs. wrong.
• It's finally finding an audience.
• It's about time.
• But does a binary approach really work?

What is the purpose of learning analytics? (Slade and Prinsloo, 2011)
• Academic concerns:
  – Maximize the number of students reaching graduation.
  – Improve completion rates for students who may be disadvantaged.
• Money:
  – Maximize profits.

Context: Ethics in LA Today
• "…systematization of correct and incorrect behavior in virtual spaces according to all stakeholders" (Pardo and Siemens, 2013, p. 2).
• Legal frameworks:
  – "transparency, student control over the data, security, and accountability and assessment" (Pardo and Siemens, 2013, p. 11).
• Problem: everything is grounded in utilitarianism.

The Legal Connection
• Recent work (the Asilomar Convention for Learning Research in Higher Education) affirmed six positions: "Respect for the rights and dignity of learners, beneficence, justice, openness, the humanity of learning, and continuous consideration." These positions connect to the 1973 Code of Fair Information Practices and the 1979 Belmont Report.
• Utilitarianism and the law: both attempt to provide a means of taking individual information and putting it to work for the greater good.

The Central Problem
• We cannot undo what has been done.
• Technology = permanent possibility.

Maybe the Problem is Modelling
• We tend to think in terms of modelling, both scientifically and artistically.
• What happens if, when discussing ethics and LA, we apply tensions instead?
• Modelling = preconditions for tenable outcomes.
• Tensions = a fluid, unfixed set of concepts to drive questions and outcomes.

Guiding Questions
• What leads us to ethical inquiry?
• What are the requisite problems that must be considered, and why?
• What comes after ethical inquiry?
• Is there a post-ethic/s?
  – How do we create the future by setting up how we want to envisage it?

Moral Utopianism
• How the world ought to be, given perfect circumstances.
• Technology: at worst, it does no harm; at best, it provides better human, natural, and mechanized environments.
• Learning analytics: our technologies understand what students need to learn and provide calculated predictions to intervene meaningfully.

Moral Utopianism: Questions
• How do educators work out effective interventions from past failures?
• How do we reform the values of failure into the birthing pains of the future?
• How can learning analytics be a catalyst to steer students productively in different directions?

Moral Ambiguity
• The value of an outcome cannot necessarily be determined and thus remains indefinitely suspended (example: conflicting data and undetermined directions).
• Technologically, this means actions may be taken until there is legal precedent or public outcry (examples: Facebook; the quantified self and insurance companies).
• Learning analytics: student ID cards, geo-tracking, and grade correlations: what does this yield?
Moral Ambiguity: Questions
• What happens when the 'nature' of so-called big data is repurposed or treated systematically? How will educational institutions react?
• Our predictive analytics give us an entirely new spectrum to change the future. No longer in conflict with individual agendas, learning analytics can be pockets of shared data that affect everyone. Are both proxies for 'truth'?

Moral Nihilism
• Utter meaninglessness and lack of value; nothing is intrinsically right or wrong, including behavior.
• Technology: innovation may proceed without any guidance or reflection because outcomes have no value.
• Learning analytics: paternalism and care ethics are rendered obsolete. Even retention for the sake of money is meaningless.

Moral Nihilism: Questions
• Once administrators "know" something about a student (via statistical regression), are institutions or individuals compelled to act? What happens if there is no action? (A minimal illustration of such "knowing" follows the final slide.)
• What happens when something turns up in the data (either as a single, previously unknown data point or as a correlation of aggregate data) that is unexpected? What infrastructure exists to handle it?

Building Ethics into Every Step
• Practical application is key.
• We now stand on the cusp of moral nihilism with our technologies, including learning analytics.
• Important: include probing questions, assessments of possible outcomes, and active disagreement about future developments.
• Tensions: moral utopianism, ambiguity, nihilism.
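
Appendix: What "knowing something about a student via statistical regression" can look like in practice. The sketch below is purely illustrative and not drawn from the presentation: the feature names, synthetic data, model choice, and the 0.5 cutoff are all invented assumptions. Its point is that the "flag" which raises the question of compelled action comes from a probability combined with an arbitrary threshold, both of which are themselves ethical choices.

    # Hypothetical sketch only: invented features, synthetic data, arbitrary threshold.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic features: [logins_per_week, assignments_submitted, midterm_score]
    X = rng.normal(loc=[5, 8, 75], scale=[2, 3, 10], size=(200, 3))
    # Synthetic outcome: 1 = completed the course, 0 = did not
    y = (0.2 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * X[:, 2]
         + rng.normal(0, 1, 200) > 7.5).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # A new student's data yields a completion probability; the institution
    # must then decide whether that number compels an intervention.
    new_student = np.array([[2, 3, 60]])
    p_complete = model.predict_proba(new_student)[0, 1]
    if p_complete < 0.5:  # arbitrary cutoff -- itself an ethical choice
        print(f"Flagged for outreach (estimated completion probability {p_complete:.2f})")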