Authentic Assessment in the Age of AI: Aligning Assessment Design with the Academic Misconduct Triangle

Introduction

The rise of generative AI (e.g., ChatGPT) has heightened concerns about academic integrity in higher education (Trust, 2023a). Educators are rethinking assessment strategies to promote authentic learning and deter cheating. A useful framework is the academic misconduct triangle – adapted from the fraud triangle – which highlights three factors that contribute to cheating: Pressure, Opportunity, and Rationalization (Ostafichuk, 2020). High pressure (e.g., an intense focus on grades, stress), ample opportunity to cheat (e.g., unsupervised tasks easily outsourced to AI), and rationalization ("everyone is doing it," or seeing tasks as meaningless) each increase the likelihood of misconduct (Ostafichuk, 2020; Trust, 2023a). Research shows that when none of these factors is addressed, cheating rates can rise to 33%, but when all three are managed, cheating drops to roughly 8% (Choo & Tan, 2008, as cited in Ostafichuk, 2020). In other words, reducing student pressure, limiting opportunities for dishonesty, and removing justifications for cheating all bolster academic integrity.

Figure 1. The academic misconduct triangle (Pressure, Opportunity, Rationalization) – instructors can design assessments to target each dimension (Ostafichuk, 2020).

In the era of generative AI, one promising approach is to design authentic assessments characterized by being LIVE, CONTEXTUALIZED, PROCESS-ORIENTED, MULTI-MODAL, and COLLABORATIVE. Such assessments align with established learning theories (constructivism, social learning theory, self-determination theory) and can inherently reduce the pressure, opportunity, and rationalization for cheating. They emphasize meaningful learning over rote performance, making cheating both less tempting and more difficult. Below, each characteristic is discussed in terms of the misconduct triangle and how it promotes authentic learning, with examples of formats that post-secondary educators can implement.
Live Assessments (Synchronous & In-Person)

Mitigating Pressure, Opportunity, and Rationalization: Live assessments – such as oral exams, live problem-solving interviews, or real-time presentations – occur in real time, often face-to-face (or via video conference). They inherently reduce the opportunity for AI-assisted cheating or collusion because students must respond on the spot, without unlimited time or access to unauthorized resources (Ostafichuk, 2020). An oral exam, for example, can be tailored to each student in real time, making it nearly impossible to use pre-scripted answers or generative AI unnoticed (Ostafichuk, 2020). Live formats also add a human element that can curb rationalizations for cheating: students know they must demonstrate understanding directly to an instructor or panel, which fosters personal accountability. When assessment feels like a dialogue with a real audience rather than a faceless submission, students may feel more responsibility to prepare honestly. Additionally, a well-designed live assessment can moderate pressure by allowing interactive clarification – if a student is confused by a question, the examiner can rephrase it, and students often receive immediate feedback (ceea.ca). This can reduce the panic that leads to "pressure cheating" (Ostafichuk, 2020). Indeed, oral examinations are seen as a "real-life" evaluation method that measures true understanding and allows students to correct themselves if they make a mistake (ceea.ca). By making the assessment a conversation, the format emphasizes mastery and communication over mere right/wrong answers, which can lessen the high-stakes anxiety that often drives misconduct.

Promoting Authentic Learning: Live assessments align with social-constructivist learning principles (Vygotsky, 1978) by leveraging dialogue and immediate feedback as part of the learning process.
Rather than simply testing recall, an oral or live exam requires students to articulate their thinking, reason on the fly, and apply knowledge in a spontaneous context – skills highly relevant to real-world settings. From a constructivist perspective, learners build knowledge through active, contextualized experience (Bruner, 1961; Brown, Collins, & Duguid, 1989). The act of verbally explaining a concept or demonstrating a skill in real time means the student is reconstructing their knowledge in an authentic communicative act, much as they would in a professional presentation or job interview. This deepens understanding and retention. Live tasks also tap into social learning theory (Bandura, 1977) when done in interactive formats – for instance, a seminar discussion or live debate encourages students to learn from prompts and the evaluators' cues, modeling expert thinking. Importantly, live assessments support the development of communication skills and confidence. They are often more formative in nature; the immediate interchange can turn an assessment into a learning experience, not just an evaluation (ceea.ca). This authentic practice of thinking aloud mirrors many real-life situations (e.g., project pitches, oral defenses, collaborative problem-solving meetings). By focusing on the process of explanation and understanding (not just a polished final answer), live assessments embody authentic learning and make cheating both difficult and less justifiable.

Example Formats:
- Oral exams or vivas: Students verbally answer questions or solve problems one-on-one with an examiner, demonstrating understanding in real time.
- Live presentations with Q&A: Students present a project or research live (in class or on video) and respond to spontaneous questions.
- In-class problem solving or debates: Students work through a case or debate a topic in a live session, showing their thinking process.
Contextualized Assessments (Authentic & Hyper-Local Tasks)

Mitigating Pressure, Opportunity, and Rationalization: Contextualized assessments are tailored to realistic scenarios, personal experiences, or local contexts relevant to the learner. By grounding tasks in authentic context, we make cheating with AI or generic answers less feasible – this reduces the opportunity for misconduct. For example, a writing assignment asking students to analyze a current issue on their campus or in their community is unique and personalized to their context (nmu.edu). A student cannot simply copy an online essay or prompt ChatGPT with a generic question and get a perfect answer, because the task requires integrating specific, real-world details or data. The originality of context acts as a barrier to plagiarism or AI-generated responses (the content will not exist in any database or model). Moreover, contextualized tasks can reduce rationalization for cheating by increasing the perceived value and relevance of the work. When students see a direct connection between the assignment and real life, they are more likely to view it as a meaningful learning opportunity rather than busywork. According to self-determination theory, relevance fosters intrinsic motivation (Ryan & Deci, 2000) – students feel the material is worthwhile, so they are less inclined to justify cheating. Indeed, research on teaching innovation notes that assignments with real-world applications help students realize the material's relevance to their lives and thus "potentially reduce the likelihood of turning to AI for cheating" (Trust, 2023a). By contrast, if students perceive an assignment as pointless or disconnected from practice, they find it easier to rationalize dishonest shortcuts (Trust, 2023a). Contextualized assessments can also alleviate pressure in some cases: students often engage more and worry less about grades when working on something personally meaningful or concrete.
The focus shifts from "What does the teacher want?" to "How can I solve this real problem?" – a mindset that emphasizes learning over performance. This aligns with findings that students are less likely to cheat when they find the material valuable and relevant (Trust, 2023a).

Promoting Authentic Learning: Contextualized assessments epitomize authentic learning, which is rooted in the idea that knowledge is best constructed within the contexts in which it is applied (Brown et al., 1989). By working on hyper-local projects, case studies, or service-learning tasks, students engage in situated cognition: they must apply academic concepts to messy, real-world situations. This bridges the gap between theory and practice, enhancing deep understanding and transfer of learning. From a constructivist standpoint, learners make meaning by connecting new knowledge to their existing frames of reference and real experiences. A contextual task requires exactly that kind of connection, thereby fostering deeper cognitive processing. For instance, a business student developing a marketing plan for a local nonprofit is using course theories in an authentic way, learning by doing. Such tasks also often satisfy autonomy needs (per self-determination theory) – students might have choice in topic or in how they approach a local issue, increasing their sense of ownership and intrinsic motivation (Girshner, 2022). When motivation shifts from extrinsic (just getting a grade) to intrinsic (interest in solving a real problem), engagement and ethical behavior improve. Contextualized assessments can also draw on experiential learning theory (Kolb, 1984): students have a concrete experience (e.g., a field project) and then reflect on it abstractly, which solidifies learning. They also mirror situated learning and community-of-practice models (Lave & Wenger, 1991) by connecting students to real communities or audiences.
Overall, by making learning context-rich, these assessments develop critical thinking, problem-solving, and an appreciation of how knowledge functions in context – outcomes far more valuable than rote memorization, and ones an AI cannot easily produce for the student.

Example Formats:
- Hyper-local projects: e.g., a research project addressing a challenge at the university or in the local community.
- Case studies or problem-based assignments tailored to current events: Students apply course concepts to analyze a recent news event or a case drawn from their own industry or workplace.
- Personalized scenarios: e.g., in a nursing class, developing a care plan for a (hypothetical) patient profile that the student crafts based on someone they know (with anonymized details).

Process-Oriented Assessments (Emphasizing Process and Iteration)

Mitigating Pressure, Opportunity, and Rationalization: Process-oriented assessments focus on how students develop and demonstrate learning over time, rather than on a single high-stakes product. Examples include multi-draft writing assignments, portfolios, design projects with milestones, and reflective journals documenting the learning journey. This approach directly reduces several contributors to cheating. First, it lowers pressure by breaking one big task into smaller, formative parts. Instead of one shot at success (which can cause panic and pressure to cheat), students have multiple checkpoints – early drafts, feedback cycles, resubmissions – which cultivate a growth mindset. Failure or struggle early on does not mean total failure, because there are opportunities to improve. When "failure is a normal part of learning, rather than the final outcome," students feel less anxiety and are "less likely to turn to AI to cheat" (Trust, 2023a). This aligns with the ideas of mastery learning and formative assessment: giving students room to learn from mistakes reduces the desperation that triggers dishonest behavior.
Second, process-oriented tasks greatly reduce the opportunity for outsourcing or AI cheating. A student would have to cheat consistently at every stage (draft, revision, reflection) without getting caught – a much more complex endeavor. If a student suddenly turns in a polished final draft but cannot show drafts or explain their revisions, it raises red flags. Requiring interim work (outlines, prototypes, research notes, etc.) makes it logistically harder to delegate the entire process to an AI or a third party. It essentially "makes engaging in dishonest behaviour more difficult than engaging in honest ones" (Ostafichuk, 2020), nudging students toward integrity. Third, emphasizing process changes the rationalization equation: because the instructor evaluates effort and improvement, not just final quality, students see that learning (not perfection) is the goal. This can help students rationalize integrity ("I might as well do it myself, since I'll get credit for my learning process") rather than rationalizing cheating. It also builds trust – if students feel the assessment is fair and supports their development, they are less likely to justify cheating as a necessary means to an end. In sum, process-focused assessments erect barriers to cheating (multiple checkpoints, unique progress for each student) while also lowering the stakes and removing the excuses that often lead to cheating.

Promoting Authentic Learning: Process-oriented work mirrors real-life learning and working patterns. In almost any profession or complex task, iteration and reflection are key: writers revise drafts, engineers build prototypes, and artists sketch and refine ideas. By simulating these processes, such assessments help students develop metacognition and self-regulation skills – they learn how to learn. This aligns with constructivist and experiential learning theories, which emphasize that knowledge is built through cycles of action and reflection.
For instance, Kolb's experiential learning cycle involves concrete experience, reflective observation, abstract conceptualization, and active experimentation – a loop very much like doing a project, getting feedback, and trying again. The "trial and error" element highlighted in innovative teaching models reinforces that students often learn more from correcting errors than from getting it perfect the first time (Trust, 2023a). Allowing resubmissions or multiple attempts fosters a mastery orientation. This approach also resonates with self-determination theory by supporting competence: small successes at each stage build confidence. Students see progress, which boosts their self-efficacy and intrinsic motivation to continue honestly (as in the example of the student who realized she could do math on her own and thus "didn't need to cheat because she was learning") (Girshner, 2022). The focus on process can also incorporate reflective practice (Schön, 1983), in which students articulate what they have learned at each stage. Such reflection deepens learning and helps students connect academic content to personal growth. Authentic learning is supported because students engage in continual improvement – a valuable real-world skill – rather than treating knowledge as a one-time transaction. They learn resilience, problem-solving, and how to incorporate feedback, all critical capabilities beyond the classroom.

Example Formats:
- Portfolio assessments: Students compile a portfolio of work (e.g., essays, lab reports, art pieces) over time, with drafts and final versions, plus reflections on their growth.
- Multi-stage projects: e.g., a research paper split into proposal, annotated bibliography, first draft, and final draft, with feedback given at each stage.
- Design thinking projects: Students go through iterative phases (empathize, define, ideate, prototype, test) for a project, documenting each phase and what was learned or changed.
Multi-Modal Assessments (Using Multiple Modes of Expression)

Mitigating Pressure, Opportunity, and Rationalization: Multi-modal assessments allow or require students to demonstrate learning in diverse formats beyond the traditional written exam or essay: presentations, videos, podcasts, infographics, concept maps, diagrams, or a mix of media. By expanding the modes of assessment, instructors can reduce opportunities for easy AI cheating in two ways. First, current generative AI tools are stronger at text generation than at producing original, integrated multi-modal content. While AI can assist with creating images or slides, constructing a coherent video presentation in the student's own voice, or a detailed visual diagram with personal context, is more complex. The variety inherently makes it harder for a student to rely entirely on AI for the whole assignment. Second, multi-modality often goes hand-in-hand with student choice (e.g., "create either a podcast or a poster to explain this concept"). Granting some choice can enhance students' sense of control, thereby reducing pressure. Students can play to their strengths or interests – a student anxious about writing might feel more comfortable making a short video, whereas another who dislikes public speaking might prefer a written report. According to self-determination theory, providing choice satisfies the need for autonomy, which can decrease feelings of coercion and stress (Deci & Ryan, 1985). Lower stress means less temptation to cheat out of panic. In addition, when students choose a format they connect with, they may be more invested in the task, reducing any rationalization that cheating "doesn't matter." Multi-modal tasks can also make cheating easier to detect: for instance, if a student submits a voice recording that suddenly doesn't sound like them, or a video in which they barely appear, it raises questions.
Many instructors note that requiring a brief video explanation alongside a written submission dramatically cuts down on plagiarism, because a student who copied text would then have to explain it in their own words on camera. In essence, multi-modal assessments create redundancy and personal presence that discourage dishonesty. Finally, these varied formats send a message that the instructor values creativity and genuine engagement, not just one kind of output. This can chip away at the rationalization that "the professor only cares about the grade"; instead, students see an invitation to truly show what they know in a form that suits them, which encourages honest effort.

Promoting Authentic Learning: Multi-modal assessments align with the principles of Universal Design for Learning (UDL), which advocate multiple means of expression and representation to accommodate different learners (CAST, 2018). UDL is not just about accessibility – it can enhance learning for all by tapping into different cognitive and creative skills. Research suggests that using UDL frameworks can improve student engagement and motivation, thereby reducing the inclination to cheat (Trust, 2023b). From a learning theory perspective, multi-modal tasks recognize that understanding can be demonstrated in various forms – a very constructivist idea. Howard Gardner's theory of multiple intelligences (1983) similarly supports the notion that students have diverse talents: one student might best show understanding through writing, another through artistic design or oral explanation. By allowing multiple modes, we let students construct knowledge in ways that are more natural or meaningful for them, leading to a more authentic learning experience. For example, creating a podcast on a history topic might engage a student's storytelling ability and require them to teach the material – leveraging social learning by considering an audience.
Designing an infographic on a biology process might require distilling complex information into visual form, a higher-order skill that reinforces comprehension. Multi-modal projects often integrate creativity, which can lead to deeper processing of content, since students must reinterpret material in a new form. They also mirror real-world communication: in many careers, individuals must present information both in writing and orally, use visual aids, or create digital content. Thus, multi-modal assessments build transferable skills such as digital literacy, public speaking, and design thinking. By validating different ways of demonstrating knowledge, these assessments make learning more inclusive and authentic – students see that what counts is the quality of their understanding, not just their proficiency in standard academic essay writing. This inclusivity can boost relatedness and motivation (students feel their unique strengths are recognized), which self-determination theory links to greater persistence and honesty in learning (Girshner, 2022).

Example Formats:
- Choice-of-medium assignments: e.g., "Demonstrate your analysis of this issue in a format of your choice: an essay, a video presentation, or an infographic (with equal grading criteria)."
- Mixed-media projects: Students submit a written reflection alongside a creative component (short film, prototype, or artwork), explaining how it connects to course concepts.
- Posters or slide presentations (with defense): Students create a research poster or slide deck and then orally defend or explain it, combining visual and oral modes.

Collaborative Assessments (Group and Social Learning Tasks)

Mitigating Pressure, Opportunity, and Rationalization: Collaborative assessments involve students working together – in pairs, groups, or whole-class collaborations – to produce work or engage in learning activities.
Examples include group projects, team presentations, peer review exercises, and wiki-based assignments. Collaboration can significantly influence each element of the misconduct triangle. In terms of pressure, a well-structured group task can distribute the workload and provide social support, thereby reducing individual stress. Instead of each student feeling 100% responsible (and thus potentially desperate about performance), they know the team will share ideas and shoulder challenges together. This can alleviate the kind of isolation and high pressure that prompts cheating. (It is worth noting that poor group design can also add pressure or tempt free-riding, but assuming best practices are used – clear roles, accountability, etc. – the net effect is supportive.) Collaboration also reframes success as learning together rather than fierce competition; fostering a "culture of collaboration with learning as the goal, rather than competition with grades as the goal" is explicitly recommended to reduce pressure and academic misconduct (Ostafichuk, 2020). Regarding opportunity, working in a group can actually reduce certain avenues of cheating. Students are less likely to outsource an assignment to a third party or an AI if doing so would mean lying to or bypassing their teammates. Group members may hold each other accountable, since one student's cheating could jeopardize the whole project or be noticed by peers. There is also a practical check: it is harder for an AI to mimic the dynamic, multi-voice process of a group working together. If one member turned in AI-generated work that does not align with prior group discussions, others could detect the inconsistency. Additionally, much collaborative work happens through discussion, which cannot easily be faked. Finally, rationalization is affected: students often feel a greater sense of responsibility and ethical obligation to peers.
Cheating in secret on a solo exam might be easier to justify (“I’m only hurting myself if caught”), but cheating in a group assignment means betraying classmates’ trust or letting them down, which raises the moral stakes. This social disincentive can reduce justifications for dishonest behavior. Moreover, if the group environment is positive, students feel connected (addressing the relatedness need) and thus more committed to doing their fair share honestly. In short, collaborative assessments, by their nature, introduce peer monitoring and mutual investment in integrity, which can shrink the opportunity and inclination to cheat. Promoting Authentic Learning: Collaborative learning is strongly supported by social learning theory and social constructivist theories (Vygotsky, 1978; Wenger, 1998). Vygotsky taught that learning is socially mediated – through interaction, dialogue, and negotiation of meaning with others, students can reach higher levels of understanding (his concept of the Zone of Proximal Development). When students work together on an assessment, they explain concepts to each other, ask and answer questions, and model problem-solving strategies. This process can solidify knowledge in ways individual study can’t. Each student brings unique perspectives or skills, and through collaboration they construct a richer understanding than any one might alone. Bandura’s social learning theory also underscores that people learn from observing and imitating others; in group work, students pick up on peers’ approaches to learning tasks. Collaborative assessments also nurture skills that are undeniably authentic to the real world: teamwork, communication, conflict resolution, and division of labor are staples of almost every workplace. By engaging in these, students practice how to apply their knowledge in group settings. 
Importantly, collaboration can increase engagement and motivation – it introduces a social element that can make tasks more enjoyable, or at least more bearable, thereby reducing the likelihood of disengagement-related cheating. The International Center for Academic Integrity notes that building relatedness (a feeling of connection) in class through group activities can foster integrity (Girshner, 2022). When students feel they belong to a learning community, they internalize collective values of honesty and effort. Collaborative assessment can also be designed to include peer feedback or peer assessment, which not only helps learning (by teaching others, one learns better oneself) but also creates an audience beyond the instructor. Knowing that peers will see one's work can motivate students to produce authentic work they can proudly share, rather than risk the shame of being caught with something inauthentic. From a constructivist view, collaboration often also means tackling complex, authentic problems, which, as in real life, frequently require a team to solve. This can increase the authenticity of learning – students engage in discourse, argumentation, and consensus-building similar to professional team projects. All of these experiences contribute to deeper learning. In sum, collaborative assessments leverage the power of social interaction for learning, reflecting the reality that knowledge is often created and applied in social contexts, not in isolation.

Example Formats:
- Group projects: Teams of students produce a joint outcome (report, presentation, research study, etc.), with defined roles. To ensure individual accountability, include components such as individual reflections on contributions or peer evaluation.
- Peer review and feedback loops: Students exchange drafts or project work and give each other feedback. This not only helps improve the work but also builds a sense of shared responsibility for learning quality.
- Collaborative case analysis or problem-solving: Small groups work through a case study or set of problems together during class, submitting a collective solution. The process of discussion and consensus is where the learning happens, and the instructor can ask any member to explain the group's reasoning to ensure understanding.

Conclusion

In the age of generative AI, fostering authentic learning through thoughtful assessment design is more crucial than ever. By making assessments live, contextualized, process-oriented, multi-modal, and collaborative, educators can create learning experiences that students find meaningful and engaging – and that are inherently resistant to cheating. These characteristics align with the three sides of the academic misconduct triangle: they lower undue pressure (e.g., through iterative feedback and support, or shared effort), limit opportunities for dishonesty (e.g., through unique personal contexts, real-time performance, or group accountability), and diminish rationalizations for cheating (e.g., by ensuring students value the task and feel connected to the learning community). In essence, the more an assessment looks like an authentic learning opportunity and less like a hurdle race for points, the less appealing and feasible cheating becomes. Educational theories back this up: constructivism encourages us to situate learning in real contexts; social learning theory reminds us of the power of interaction and modeling; self-determination theory shows that when students' needs for autonomy, competence, and relatedness are met, motivation and integrity flourish. Empirical findings echo these ideas – when students feel empowered and see purpose in their work, they are far less likely to resort to academic dishonesty (Trust, 2023a, 2023b). For post-secondary educators, the implications are clear. Rather than racing to catch cheaters or banning AI outright, we can redesign assessments to embrace authenticity.
This might mean more oral exams, community projects, portfolios, creative media assignments, and team-based work – supported by clear communication of integrity expectations and scaffolded by feedback. Such assessments not only guard academic integrity but also enrich student learning, making it deeper and more transferable. In a world where AI can provide answers but not experiences, designing assessments with these five characteristics ensures that our students gain the experience of learning – the critical thinking, the collaboration, the real-world application, the reflective improvement – that no AI can cheat them out of. By addressing Pressure, Opportunity, and Rationalization through pedagogy, we uphold both academic integrity and the true purpose of education: to equip students with skills and knowledge that are authentic, enduring, and earned through genuine effort.

References

Anderman, E. (2015, May 20). Students cheat for good grades. Why not make the classroom about learning and not testing? The Conversation.

Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice Hall.

Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32–42.

Choo, F., & Tan, K. (2008). The effect of fraud triangle factors on students' cheating behaviors. Advances in Accounting Education: Teaching and Curriculum Innovations, 9, 205–220.

Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York, NY: Plenum.

Eaton, S. E. (2021). Academic integrity in online learning: Positives, challenges, and solutions. (Note: example source on integrity strategies)

Girshner, J. (2022, April 24). Self-determination theory and academic integrity. International Center for Academic Integrity Blog. academicintegrity.org

Herrington, J., & Oliver, R. (2000). An instructional design framework for authentic learning environments. Educational Technology Research and Development, 48(3), 23–48.

Ostafichuk, P. (2020). Academic integrity – Assessment guidebook. University of British Columbia. blogs.ubc.ca

Simmons, N. (2018). Curiosity and powerful learning. (Note: example source on relevance and student engagement)

Trust, T. (2023a, August 2). Essential considerations for addressing the possibility of AI-driven cheating, part 1. Faculty Focus. facultyfocus.com

Trust, T. (2023b, August 16). Essential considerations for addressing the possibility of AI-driven cheating, part 2. Faculty Focus. facultyfocus.com

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.