Enlight: A Projected Augmented Reality Approach to Science Education

by Yihui Saw

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Master of Engineering in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, July 2014.

© 2014 Massachusetts Institute of Technology. All rights reserved.

Author: Department of Electrical Engineering and Computer Science, July 30, 2014
Certified by: Pattie Maes, Alexander W. Dreyfoos (1954) Professor of Media Technology, Thesis Supervisor
Accepted by: Prof. Albert R. Meyer, Chairman, Masters of Engineering Thesis Committee

Abstract

Advances in augmented reality (AR) interfaces create a new possibility for innovative learning tools in education. This thesis explores the application of a projected augmented reality system and its use in science education. Through this work, we provide some brief insights into the potential and challenges of using the projection augmented model in a learning environment. With a focus on physics education, we developed tangible simulations of magnetic fields and other phenomena through the means of augmentation. We also describe the means to integrate and apply AR in a classroom environment. Our investigation analyzes the effects of augmented reality on student learning outcomes and the usability of our model. Quantitative and qualitative evidence suggests that the projection augmented model may complement learning in ways that current interfaces and learning methods are lacking.

Thesis Supervisor: Pattie Maes
Title: Alexander W. Dreyfoos (1954) Professor of Media Technology

Acknowledgments

This thesis would not have been possible without the LuminAR team. I am deeply grateful to Natan Linder and Rony Kubat, who have provided me an amazing platform to learn and grow. I continue to wonder how, with just a semester of programming under my belt and zero knowledge of web development or computer vision, I first joined the team as a UROP to 'work on new LuminAR bulb software stack including web based applications as well as computer vision algorithms'. But that is the kind of space they have given me to grow, and the kind of opportunities, trust and guidance I am most thankful for. I have had a lot of fun. Thank you for being awesome; I am proud to be a part of the team. Thank you also to our advisor Pattie Maes, for her guidance and support for this foray into education throughout the project. Tal Achituv has been the best supporter of any idea we had, wacky or not, and would figure out five or more ways to accomplish it. This project would not have been the same without his energy and vigor. If a way out of a mathematical problem is to interest Terence Tao, the LuminAR team equivalent would be to just ask Fuzzy, or Ben Weissmann. He has been a great source of advice and amusing anecdotes. This work sits on the foundations he has built for the web framework.
A special thank you goes to the UROPs who have supported this project as well: Marcus Boorstin, Tyler Hallada and Stephanie Lee. I am most grateful to our Media Lab sponsors. Thank you to the Steelcase teams, especially Steve Miller, Ed Vanderbilt, Andrew Kim and Paul Noll. Thank you for supporting my studies and for recognizing the potential of this work. Their continuous support and effort to integrate their work with ours have made the vision of how projected interfaces could 'live in the wild' possible. Kathy McKnight of Pearson has generously shared illuminating advice from day one. She has been wonderful to work with, and her constructive insights and guidance have led me through the uncertainties that came with being a novice in education research. It goes without saying that this work would not have been as meaningful without the contributions of Professor John Belcher of the MIT Physics Department. His investments have been invaluable, bringing us closer to building content that is applicable and useful to students. Finally, I have not said this enough to my dearest family. Thank you to my parents who have always encouraged me to pursue my dreams, and to every single one of you who have built the strongest net of support and love in this family. I am very lucky to have you.

Contents

1 Introduction
  1.1 Motivation
  1.2 Thesis Outline
2 Background and Previous Work
  2.1 Augmented Reality Interfaces
  2.2 LuminAR
  2.3 Augmented Reality Interfaces in Education
    2.3.1 Secondary Screen
    2.3.2 Projected Interfaces
    2.3.3 Head-mounted
3 System Design
  3.1 Initial Explorations
    3.1.1 Count
    3.1.2 white:scape
  3.2 System Implementation
    3.2.1 Playgrounds
    3.2.2 Lesson Planner
    3.2.3 Lesson Player
    3.2.4 Teacher Dashboard
4 User Study
  4.1 Evaluation Methodology
    4.1.1 Research Population
    4.1.2 Research Instruments and Procedure
    4.1.3 Study Protocol
  4.2 Data Analysis and Results
    4.2.1 Cognitive Domain
    4.2.2 Usability
    4.2.3 Student Preferences
    4.2.4 Discussion and Conclusion
5 Future Work
A User Study Materials
  A.1 Rundown for User Study
  A.2 Pretest Questions
  A.3 Posttest Questions
  A.4 Worksheet for User Study
  A.5 Online Survey
List of Figures

2-1 Reality-Virtuality Continuum
2-2 LuminAR: a projected augmented reality system with a compact form factor
3-1 Count: the first and second stages of the game
3-2 white:scape: the interaction space and the application running on browser-enabled devices
3-3 white:scape: the vision for an interactive space around the Steelcase Turnstone Bivi table
3-4 The application launcher for Enlight and the installations of the system on the table
3-5 Fields playground: vector fields, color detection, dipole moment and force vectors
3-6 Pendulum playground: heat map and time series of the pendulum's location
3-7 The lesson planner
3-8 The lesson player
3-9 The teacher dashboard
4-1 Part 2 on the worksheet
4-2 One of the pretest conceptual questions
4-3 Study protocol
4-4 Example steps in user training
4-5 Distribution of participants from 8.02; posttest scores vs. time spent on activity
4-6 Pretest and posttest scores
4-7 Posttest scores vs. time spent on activity, separated by class rank
4-8 Time spent on activity vs. learning gain g, separated by class rank
4-9 Self-reported helpfulness of object augmentation on learning
4-10 Self-reported ease of use of the interface
4-11 User training: placing items in the projected box; attempts to drag a virtual object
4-12 User behavior with virtual and physical objects
4-13 Student preferences based on survey responses: interface comparison
4-14 Student preferences based on survey responses: areas for improvement
5-1 Boids: an image burst displaying how the flock of points is 'trapped' in the palm of a user
A-1
A-2

Chapter 1

Introduction

Learning in STEM (Science, Technology, Engineering and Mathematics) fields often involves understanding abstract concepts or mathematical formulations for different types of observations. In fields like physics, chemistry and biology especially, the concepts often explain real physical phenomena that are not always visible to the naked eye, making it difficult for the concepts to be taught purely through sensory arguments. We rely on measurements from the physical world that are expressed through mathematical models, or on computer generated visualizations, to help us intuitively understand these phenomena. Virtual imagery has been shown to promote conceptual understanding of scientific concepts [54, 9]. But advances in augmented reality, which unites physical and virtual objects, suggest an opportunity to further enhance the learning experience by allowing simultaneous experience of both the physical phenomena and the virtual visual aids. With computer generated simulations on interfaces like the laptops and tablets that are commonly available today, the physical phenomena experienced by the learner and the visual aids that attempt to explain them live in different contexts. One would have to view both separately and then try to make the connection between the mechanics shown in the visualizations and the physical world. Augmented reality could not only help to contextualize these visualizations, but also allow learners to freely touch, feel, hypothesize and thus confirm their intuitions as to how the theories apply in the real world. In this thesis, we present Enlight, a projected augmented reality approach to teaching science, where the system provides tangible simulations through the means of augmentation.
Visual aids that are projected on physical objects guide learners as they explore an abstract concept, allowing them to gain direct experiential intuitions about how the natural world functions. Although the increased availability of computers and mobile devices has initiated a movement of education to online platforms such as the Khan Academy, Coursera and edX [29, 6, 14], we contend that learning through experimentation in the real environment still plays an important role, especially in science, where discoveries are made through observing behaviors in the physical world. Our work, we believe, can combine the best of both worlds to provide an interactive learning experience that also promotes conceptual understanding.

1.1 Motivation

Constructivist theory suggests that the construction of knowledge occurs in the mind through interactions between experiences and ideas. The assumption here is that knowledge cannot simply be transmitted from teachers to learners. Instead, the learner must be engaged in creating the knowledge in his or her mind [58]. Thus, most constructivist learning approaches encourage students to "learn by doing", so they gain new experiences and reconstruct their mental models to accommodate the new knowledge acquired [23]. Seymour Papert extended constructivist theory with constructionist learning, which proposes that learning happens most effectively when people are also active in interacting with and making tangible objects in the real world. For example, he suggests that familiar characters can be used as transitional objects that allow easier translation of personal experience into abstract symbols. Papert suggests that the computer, with its power to simulate and take on a thousand forms, can become the tool to assist students in the construction of their conceptual framework [44]. Hence, by exposing students to new concepts through a familiar character, students can more readily connect with the experience and can better construct a mental model of the concepts presented.

Many efforts in science education have been founded on constructivist ideas. One of the early efforts was initiated by the MIT Physical Science Study Committee (PSSC), whose work expanded after the Soviet Union orbited Sputnik and the US feared that American schools lagged behind in science. Educators at the time found that textbooks were ineffective in stimulating student interest in physics and failed to teach students how to think like a physicist. The PSSC designed many teaching materials with the objective of improving understanding and engagement as opposed to memorization. Photographs of phenomena were taken, and experts were video recorded demonstrating and explaining complex phenomena with specially built equipment. Simple lab apparatus for conducting experiments was also designed and distributed to schools across the country [47].

Active learning and peer instruction go hand in hand with constructivist ideas. Active learning refers to a model of instruction where the responsibility of learning is focused on the learner instead of the instructor [4]. Emphasis is placed on developing the learner's skills rather than on the transmission of information [28]. In active learning, students are encouraged to problem solve in teams, share ideas and teach each other [25]. Specifically in peer instruction, students are given questions in groups, where they discuss their answers and attempt to reach consensus on the correct answer.
This process forces students to think through the arguments being developed, allowing them to achieve a deeper level of understanding of the material [7]. Active learning can be taken further by encouraging learners to solve problems and build solutions with newly acquired knowledge. This not only helps students apply what they learn in a real-world context, but also indirectly drives students to learn the intricacies of a given topic, which is essential for successful implementations. At MIT, the TEAL (Technology Enabled Active Learning) environment was designed to support social interactions and create a classroom climate that encourages students' active learning. Students spend a larger portion of class time on group activities and discussions. Visualizations of complex phenomena in electricity and magnetism were also distributed to students through Java applets that they can explore on their personal computers [62]. At Harvard, the Applied Physics 50 class led by Professor Eric Mazur features team projects throughout the course, where students use their new knowledge to reverse-engineer musical instruments, build Rube Goldberg machines to complete specific tasks, or even design circuits for secure safes. Mazur also developed Learning Catalytics [33], a collaborative web-based tool that allowed students to provide feedback and draw solutions to concept questions on their mobile devices. This new approach showed the largest gain on the force concept inventory (a measurement of students' understanding of basic concepts) of any Harvard physics course taught in the last six years, and class attendance surged to 97% in the fall of 2012 [46].

These ideas demonstrate the importance and effectiveness of a pedagogy in science education where the learner is encouraged to develop intuitive understanding of how the natural world functions before diving into the formal mathematical representations that scientists and engineers use. Our design will focus on encouraging learners to explore a phenomenon and develop the ability to qualitatively predict its behavior, rather than teaching students to manipulate quantitative formulas.

The design for the system should allow students to freely explore a concept in ways that a textbook or a computer generated simulation cannot achieve. Inherently, no matter how well the software for a virtual simulation is designed, there are limitations to what a student can attempt on it. For example, in a simulation for pendulums, one can design a virtual pendulum to take different masses and different strings, and provide capabilities to release the pendulum from different heights. But the learner would not be able to attempt explorations like feeling the force needed to stop a pendulum without touching the weight, changing the string material, or adding weights while the pendulum is in motion. The possibilities are endless with a physical pendulum and cannot be fully supported in a virtual environment. Our design for an augmented reality approach will focus on supporting such explorations by creating a physical link between the real and virtual objects, enabling both to be experienced simultaneously. By painting computer generated visualizations directly on physical objects, the learner's experience of a physical phenomenon is enhanced with virtual visual aids. Thus, we can make the invisible visible while retaining the affordances of physical artifacts.

This work was guided by our research collaborator Dr.
Katherine McKnight, who directs the Center for Educator Learning & Effectiveness at Pearson. We also collaborated with Andrew Kim and his team from the WorkSpace Futures group at Steelcase. In addition, we are supported by two leading physics education experts. The first is Professor John Belcher of the MIT Physics Department. Professor Belcher is one of the leading researchers involved in the effort to evolve introductory physics education at MIT to an interactive format [2, 1]. He spearheaded the TEAL (Technology Enabled Active Learning) classroom project that introduced several education technology innovations [62]. He is especially interested in visualizations of electromagnetism. The second is Professor Eric Mazur, who serves as Area Dean of Applied Physics at Harvard University [40]. Professor Mazur is considered an expert in the domain of interactive teaching. He has pioneered and published several seminal books and papers in the domain of peer instruction [7, 38, 39].

1.2 Thesis Outline

This thesis first discusses augmented reality interfaces and known augmented reality applications in education. Then, we proceed by describing the learning environment of Enlight, implemented on a projected augmented reality system developed at the MIT Media Lab. We then analyze the learning effects and usability of the system and finally present findings that confirm our intuitions about how learning in the physical environment can reinforce conceptual understanding of phenomena in science, if not enable students to develop intuitive understanding of such concepts.

Chapter 2

Background and Previous Work

2.1 Augmented Reality Interfaces

Augmented reality (AR) provides a live view of the physical world where computer generated input like graphics and sound is overlaid on the real world. As a result, the user's perception of the world is enhanced, enabling the user to interact with and manipulate digital information in their physical surroundings. Augmented reality technologies take on many different forms, including head-mounted units like the Oculus Rift [43], handheld devices like Layar [32], eyeglasses and more. Paul Milgram introduced the concept of the Reality-Virtuality (RV) continuum, which encompasses all possible variations and compositions of real and virtual objects, ranging from the completely real to the completely virtual [42]. Augmented reality is classified as a subset within this continuum.

Figure 2-1: Reality-Virtuality Continuum. The continuum spans from the real environment, through augmented reality (AR) and augmented virtuality (AV), which together form mixed reality (MR), to the fully virtual environment.

We focus on projected augmented reality in this project. This form of AR combines the virtual world with the physical world by displaying digital information directly in a user's environment. Virtual objects are rendered in the user's physical space without the need to wear a head-mounted display or carry a hand-held device. A system for projected AR usually includes a sensor to track changes in the environment and a projector to overlay virtual objects on a user's physical world. Projected augmented reality systems use a combination of physical objects and computer-generated information, and hence they inherit advantages from both, combining the high level of intuitiveness of physical models with the flexibility and functionality of computer generated objects, which can be quickly animated, manipulated, saved and updated [24]. Ramesh Raskar's description of the physical model highlights the difference between a physical model and the virtual.
"The human interface to a physical model is the essence of 'intuitive'. There are no widgets to manipulate, no sliders to move, and no displays to look through (or wear). Instead, we walk around objects, moving in and out to zoom, gazing and focusing on interesting components, all at very high visual, spatial, and temporal fidelity" [48]. This sentiment promotes the use of the projected AR model, which combines the best of both and essentially gives a physical form to computer generated objects through projection. This concept is not new and many projected AR systems have been developed since Wellner's "Digital Desk" system [61], like Underkoffler's 1/0 Bulb, an evolution of the common light bulb which aimed to transform the physical space with pervasive information [55], or other tabletop systems like DeskJockey from Microsoft Research [63] and Tinker Lamp from Ecole Polytechnique [11]. In this project, LuminAR, the compact projected AR system developed at the Fluid Interfaces Group in the MIT Media Lab will be our platform of choice [36, 37]. 2.2 LuminAR LuminAR is an integrated projected AR system developed at the Fluid Interfaces Group in the MIT Media Lab, that consists of a depth sensor, projector and on board computer. This system will be the platform of choice for deploying and evaluating our exploration in education. Currently the system supports modes of interaction 20 like multi-touch, contour tracking, marker tracking and gesture recognition. Lens, the Javascript SDK deployed on top of LuminAR, provides a library that opens up the field of augmented reality to anyone with basic web development skills [60]. Lens encapsulates and exposes events recognized by the system to the developer as DOM 1 events on the browser. This allows developers to use standard web development tools to build projected AR applications without the need to adapt to an entirely new development environment. Various applications like augmented scrapbooking and an augmented manufacturing application have been developed on this platform. The compact and integrated nature of LuminAR attempts to address many challenges that typically plague the deployment of projector camera systems in non-controlled environments [36]. Figure 2-2: LuminAR: a project augmented reality system with a compact form factor. 1The DOM (Document Object Model) is a convention for representing objects in a HTML doc- ument that is specified by the World Wide Web Consortium. 21 2.3 2.3.1 Augmented Reality Interfaces in Education Secondary Screen There are many forms that augmented reality displays can take. One class of displays utilizes a secondary screen like a tablet or a laptop to provide the additional graphical information to the user. One study at the University of Washington applied this form of AR on an application for teaching chemical structures [5]. Users would hold fiducial-markers in front of a webcam and the laptop would render the structures associated with the specified markers. When this interface was compared to physical models, students who cited a preference for the physical models indicated that they like being able to hold, rotate and count the atoms on the models. Students who preferred the augmented model however liked how more complex structures can be created on the system compared to the physical model. This shows that a class of AR interfaces that allow users to both retain the feeling of "touch" and create complex models could be promising for a future interface. 
Another project, from Universidad Carlos III de Madrid, utilized tablets as the secondary screen for overlaying visualizations on fiducial markers. The researchers developed an application that allowed students to construct circuits using marked boxes. Current flows in the circuits would then be shown when a tablet is held over the boxes. Similarly, learners can also view electromagnetic field lines using the interface. When the AR interface was compared with a basic web application, the researchers found that students learned significantly better with the AR interface, based on pretest/posttest results. The result seemed to be consistent with various other studies. Although further research is required to provide stronger evidence of the effectiveness of AR, the researchers suggest that one explanation could be that the cognitive effort required to handle abstract concepts was lower for students using the AR interface, given that they could explore the visualizations spatially more easily by manipulating boxes than through the web based application [22].

2.3.2 Projected Interfaces

Projected AR systems create a new possibility over secondary screens. They allow users to use their hands freely without the need to hold any device, and the digital information is projected directly on the physical world. A group at the Ecole Polytechnique Federale de Lausanne has explored various applications of the TinkerLamp, a system they have developed. For applications in education, they have developed a paper based interface for teaching geometry and fractions. The learner can move pieces of marked paper that represent a specific shape or number around under the TinkerLamp. The projection then shows additional information that assists the user in answering questions on a problem sheet that is also marked. They have also taken the system to classrooms for teaching warehouse logistics, carpentry and geometry. In each case, the system was used and tested as a classroom-management tool. While the learning effects were unclear, the teachers indicated increased awareness of the progress of all students in the class [11, 8, 12]. The RVCLab at Queen's University also applied projected AR to enhance a pool player's experience by projecting guidelines for the ball trajectories based on the cue's position. The system helps novice players by calculating the best trajectories based on the ball's position in the game [56].

2.3.3 Head-mounted

Head-mounted displays allow experiences with more varied compositions of reality and virtuality, and can provide experiences that are completely virtual. In a study that applied AR to teach spatial geometry, Kauffmann et al. found promising results in their early assessment of the project, where the hypothesis that the system could improve a student's understanding of 3D geometry was supported by observations during the study [27]. Another application taught geography students the concept of earth-sun relationships. The researchers also found encouraging results and highlighted the benefit of being able to represent any three-dimensional phenomenon they wish, scaled to dimensions that are convenient for classroom observation and manipulation [49]. While these applications provided an experience with a combination of reality and virtuality, Dede et al. explored the possibility of a completely virtual experience. Fully immersive environments were created where students explored different worlds without friction or gravity.
The learning results were also compelling in this case, but users are more susceptible to motion sickness when using head-mounted displays. Such displays also present some challenges for lesson administration, as the instructor has to rely on verbal interaction with the student [10].

Chapter 3

System Design

We designed Enlight with the goal of providing a seamless interaction, where the learner can freely explore a phenomenon in the real environment that is augmented with visualizations. In a sense, the system acts as an assistant to the learner, providing just-in-time information about what he or she is observing in the real world and providing cognitive support, through visual aids, for working with abstract information. The following factors guide our design for the system:

1. Environments for exploration: One of the key advantages of computer simulations is how they provide an environment for learners to explore hypothetical situations, practice tasks, change variables and verify predictions. However, haptic feedback is lost in virtual simulations. Moreover, simulated environments are fixed in advance, limited by constraints defined by the developer. For example, in a virtual computer simulation for gravity, a user might be given a ramp and an object, where the user can adjust the elevation of the ramp and the mass of the object within a given range. In physical reality, the user can explore the concept with any object, and our goal is to support such exploration. Enlight aims to provide such an environment through Augmented Simulations (AS) in 3D space. By projecting directly on physical objects, the learner's view of a physical phenomenon can be augmented with the computer generated imagery that currently lives on static displays. The combination of projection and sensing capabilities allows the system to track objects in the space and adapt to changes introduced by the user. Thus, an augmented simulation can take physical objects as inputs and respond to alterations introduced by the learner, in accordance with the natural phenomenon it models. Combined with good instructional design, augmented simulations can provide a learning environment that engages all senses.

2. Support active learning: Active learning encourages students to discuss, engage in solving problems, share ideas and teach each other [25, 4, 28]. Numerous studies have provided evidence that supports active learning and have shown improved understanding with active learning [20, 62]. Projection itself creates a larger interactive space than laptops and mobile devices, which provide experiences that are inherently personal. Our system will build upon the larger projection space and support active learning by providing a collaborative environment in the setup and in the design of the applications.

3. Dynamic, online content creation: With easy access to learning resources on the web and the growing popularity of online platforms such as edX, the Khan Academy and Coursera, the best learning material from the best teachers can now be easily obtained online. Thus, we design a system that allows educators to adapt resources and combine them with the augmented reality capabilities of the system. The system should allow integration of the latest learning materials with the simulations available to provide a cohesive learning experience.
4. Analytics and management tools: In virtual learning environments, and generally in computer-mediated experiences, we can track student progress or detect performance dips by recording clicks and time spent on different learning modules. Such insights allow educators to provide tailored feedback to students and intervene when a student is struggling. Learning materials can also be adjusted and adapted to a student's ability. In physical reality, however, we are unable to detect what attempts students have made, or how much time they have spent on a step in a lab experiment. This system will allow collection of student work not only through button clicks but also in 3D space, through contour tracking and augmentation. The collected data will be updated in real time and provide educators with a live view of the class's progress. The system can also allow educators to intervene when necessary by giving them the ability to control what is projected from the system.

3.1 Initial Explorations

Our development process has been iterative, with lessons from previous explorations informing the later designs of the system. Here are some of our initial explorations:

3.1.1 Count

Count is a simple counting game that teaches young children basic arithmetic. The system tracks objects that are placed under the projection and informs the user of the number of objects placed in the projected box. With the target audience in mind, colorful Lego blocks [34] were used as transitional objects to convey the concept of numbers, but the system is not limited to recognizing Lego blocks as markers. The application allows users to add any object (large enough for LuminAR's contour detection module) to the interaction space. The game progresses through different stages that teach the concepts of addition, subtraction and multiplication.

Figure 3-1: Count: The first stage of the Count game, where the system counts the number of contours in the white box; here, a mug is placed in the white box (left). The second stage of the game, where the concept of addition is introduced (right).

The system was installed on the Node chair from Steelcase [52], which we had modified to house our projection system. Even with the augmentation, the functionality of the chair is retained; the projection is stable, and the slightly elevated surface provides comfortable views of the projection.

3.1.2 white:scape

In this exploration, the goal was to create an application that supports collaboration across multiple browser-enabled devices: laptops, tablets and the LuminAR system. Our design approach was influenced by the popular iPad application Grid [18], where the user points to select and fill an area on a matrix with text, images and other media assets. This provides a similar user language across all devices. Built as a Meteor [41] based application, white:scape allows real time collaboration, remotely and in person, when users are working on the same matrix. As a collaboration tool in the workplace, the application also supports design thinking methods. We found that the large projected interface provides a shared experience for multiple users, compared with devices like laptops that are inherently personal. The system was installed on the Bivi table from Steelcase [51]. A large LCD TV was also used as part of the interaction design. Two sets of the system were deployed at the headquarters of Steelcase in Grand Rapids, Michigan and were used as demo systems.
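Because white:scape is built on Meteor, keeping the matrix in sync across devices reduces to publishing a shared collection and rendering it reactively on each client. The sketch below is only an illustration of that pattern, not the actual white:scape code: the collection name, field names, template name and permissive client-side update are assumptions made for this example.

```javascript
// Minimal sketch of a Meteor-style reactive data layer behind a shared matrix.
// Names ("cells", row, col, content, "board") are illustrative assumptions.
Cells = new Mongo.Collection('cells');   // shared by the server and all clients

if (Meteor.isServer) {
  // Publish every cell of the matrix to all connected devices.
  Meteor.publish('cells', function () {
    return Cells.find();
  });
}

if (Meteor.isClient) {
  Meteor.subscribe('cells');

  // Any device (laptop, tablet, LuminAR projection) writes through the same API.
  // A demo app would rely on permissive allow rules for this client-side update.
  Template.board.events({
    'click .cell': function (event) {
      var id = event.currentTarget.dataset.id;
      Cells.update(id, { $set: { content: 'image:sketch.png' } });
    }
  });

  // Every other device re-renders reactively when the underlying data changes.
  Template.board.helpers({
    cells: function () {
      return Cells.find({}, { sort: { row: 1, col: 1 } });
    }
  });
}
```

The same publish/subscribe pattern reappears later in the lesson player and teacher dashboard, where Meteor keeps lesson content and analytics views up to date in real time.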
Figure 3-2: white:scape: The full view of the interaction space with the Bivi table from Steelcase and the LCD TV as a shared space (left). The application runs on browser-enabled devices and supports design thinking methods through templates (right).

Figure 3-3: white:scape: The vision for an interactive space around the Steelcase Turnstone Bivi table.

3.2 System Implementation

Following our early explorations, we implemented Enlight to support the format of the hour-long lesson in a classroom, from lesson planning by the teacher, to carrying out the lesson and receiving analytics on student progress. A lesson is structured as multiple steps, similar to a set of slides, and a teacher would step through the lesson or allow students to progress at their own pace. A lesson step could contain plain text, images, quiz questions or activities. We also call activities within a lesson "playgrounds". Playgrounds are the hosts for augmented simulations, where learners are encouraged to explore a real-world phenomenon that is enhanced by computer generated visualizations. Thus, an augmented simulation can be designated as a part of a lesson, along with other existing materials. The system was installed on the Verb table from Steelcase [53]. Two designs were produced by Steelcase to support different use cases. One houses our system on a monitor arm that can be removed from the table. Another houses the system on a steel arc that extends from the table. Both offer different flexibilities while retaining the functionality of the Verb tables, which were originally designed to be reconfigurable for different learning modes.

Figure 3-4: (a), (b) The application launcher for Enlight. (c) An arch extending from the table houses the system. (d) The device is installed on an arm attached to the table. A tablet holder is also mounted on the arm to allow an additional form of interaction.

3.2.1 Playgrounds

Each playground was built as a stand-alone web based application with the Lens Javascript framework [60].

Vectors

Our first design for a playground was a game that teaches students about vectors through the analogy of navigating a sailboat on a fast-flowing river or a plane in windy conditions. In each stage of the game, the student is given a goal to achieve, and the student selects two points on the projected interface to determine the magnitude and direction of the vector for their sailboat or plane. This form of interaction was not unlike a touch screen and is less effective in capturing the potential of the system as a 3D interactive space.

Fields

The use of physical objects was included in the next design for a playground, which simulates vector fields created by monopoles or dipoles. The system detects the presence of objects under the projection and redraws the vector field to reflect the current state based on the locations of the objects in the space. We took the application further by building a simulation of a magnetic field represented by field lines instead of a vector field. Built in collaboration with Professor John Belcher (Professor of Physics, MIT) [2, 1], real magnets were incorporated in the simulation, where a red and blue block would be recognized as a dipole. On detecting the presence of a contour, we locate the blue and red sections of the contour, calculate the dipole moment vector based on the orientation of the contour, and apply a 5th-order Runge-Kutta method to determine the field lines generated by the dipole.
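To make the field-line step concrete, the sketch below traces a single line numerically. It is a simplified illustration rather than the playground's actual implementation: it uses a fixed-step fourth-order Runge-Kutta integrator instead of the adaptive fifth-order scheme mentioned above, it approximates each magnet as a point dipole, and all function names, units and constants are illustrative.

```javascript
// Simplified sketch of field-line tracing (not the production code).
// Point-dipole field in the table plane: B ∝ (3(m·r̂)r̂ − m) / |r|³.
function dipoleField(pos, dipole) {
  var rx = pos.x - dipole.x, ry = pos.y - dipole.y;
  var r = Math.hypot(rx, ry) || 1e-6;
  var nx = rx / r, ny = ry / r;
  var mDotN = dipole.mx * nx + dipole.my * ny;
  var scale = 1 / (r * r * r);
  return { x: scale * (3 * mDotN * nx - dipole.mx),
           y: scale * (3 * mDotN * ny - dipole.my) };
}

// Sum the field of all dipoles and normalize, so the trace advances evenly.
function fieldDirection(pos, dipoles) {
  var bx = 0, by = 0;
  dipoles.forEach(function (d) {
    var b = dipoleField(pos, d);
    bx += b.x; by += b.y;
  });
  var norm = Math.hypot(bx, by) || 1e-9;
  return { x: bx / norm, y: by / norm };
}

// Classic fixed-step RK4 integration of the direction field, starting at `seed`.
function traceFieldLine(seed, dipoles, steps, h) {
  steps = steps || 500; h = h || 2; // step size in projected pixels
  var line = [seed], p = seed;
  for (var i = 0; i < steps; i++) {
    var k1 = fieldDirection(p, dipoles);
    var k2 = fieldDirection({ x: p.x + 0.5 * h * k1.x, y: p.y + 0.5 * h * k1.y }, dipoles);
    var k3 = fieldDirection({ x: p.x + 0.5 * h * k2.x, y: p.y + 0.5 * h * k2.y }, dipoles);
    var k4 = fieldDirection({ x: p.x + h * k3.x, y: p.y + h * k3.y }, dipoles);
    p = { x: p.x + (h / 6) * (k1.x + 2 * k2.x + 2 * k3.x + k4.x),
          y: p.y + (h / 6) * (k1.y + 2 * k2.y + 2 * k3.y + k4.y) };
    line.push(p);
  }
  return line; // polyline to draw on a canvas and project
}
```

Seeding several such traces around each detected pole and drawing the resulting polylines on a full-screen canvas would produce a projected overlay of the kind described here. The remaining input, the dipole moment itself, is derived from the detected contour as described next.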
In this case, we calculate the dipole moment vector as m = p·l, where l is the vector between the poles. The magnetic pole strength p is predetermined, but can easily be changed to adapt to different heights of the contour, or adjusted with a slider in the user interface. When two dipoles are present, we also calculate the torque on each of the dipoles due to the other and project arrows extending from the center of each dipole to depict the resulting forces. When the magnetic blocks are used, users can feel the actual physical forces from the magnets and at the same time experience the visualization of field lines around the magnets. Within the playground itself, one can make predictions about the movement of the magnets from the field lines and verify them by placing the magnets closer to each other or by reducing the friction between a magnet and the surface. Using a physical compass to follow the field lines also lends an additional level of reality to the simulation.

Figure 3-5: Fields playground: (a) The first iteration with vector fields. (b) The second iteration with color detection. (c) A yellow arrow is projected, extending from the center of the block, to show the dipole moment vector. (d) Green arrows are projected to demonstrate the force vectors on the magnetic blocks. The compass needle closely follows the projected field lines.

Pendulums

In this playground, the location of a pendulum is tracked by the system and a heat map is projected to depict its motion tracks. Imagine that the pendulum drops sand on the table, and that the sand slowly sifts through the table and disappears. The heat map hues range from blue to yellow to red; the closer to red, the more "sand" there is on that spot. We also provide a time series visualization of the x, y and z locations of the pendulum detected by the system, which demonstrates its periodic motion. We took a marker-less approach in the implementation of the app, so any object can be attached as the weight for the pendulum, allowing users to explore different behaviors as they use objects of different masses or strings of different lengths. More than one pendulum can be attached to demonstrate concepts of phase displacement in simple harmonic motion. Haptic feedback can also be introduced into the instruction of the concept, as learners can feel the forces on the pendulum. The possibilities are limited only by the imagination of the user.

Figure 3-6: Pendulum playground: The location of the pendulum is tracked and displayed as a heat map. The time series in the top-left corner plots the changes in the x, y and z locations of the pendulum.

This form of simulation is much more open ended when compared to simulations that run on computers bounded by static displays. In traditional simulations, the set of parameters is predetermined during implementation. But on such a projected system, the simulation takes physical objects as input parameters, allowing the learner to determine the parameters of the simulation and thus also to direct his or her own learning.

3.2.2 Lesson Planner

We developed a web based application that allows teachers to easily plan, design and deploy a lesson to the system. Using the application, users can embed text, images, quiz questions and playgrounds, combining them into a set of slides that students can step through. Users simply add a step with a button click, and then drag and drop widgets to place any asset (text, videos, playgrounds, etc.) on the step.
If a quiz question is added, the user can specify the type of question, multiple choice or short answer, and then provide the prompts and selections for the question. Videos, web pages and PDFs can also be embedded into a step. Thus, resources from the web can be adapted onto the system, and no programming knowledge is needed to put together an augmented reality based lesson. Once created, the lesson can be accessed by students through an application we call the lesson player, which is described next.

Figure 3-7: The lesson planner: The widgets at the bottom of the screen can be dragged onto the slide to add new media assets to the page. More pages can be added by entering new steps on the left of the page.

3.2.3 Lesson Player

Designed as a Lens application that is supported by Meteor, the lesson player loads lessons and updates them in real time as the teacher modifies a lesson. Meteor is a web based framework that is designed to support dynamic web pages [41]. The framework handles the synchronization of all views with the database and provides real time feedback when changes are made to the database, making it an excellent choice for our application. Thus, whenever a lesson is modified on the lesson planner, Meteor serves up the new template that reflects the current state of the lesson, allowing a teacher to edit a lesson in real time. As a student steps through a lesson, the time spent on each step and answers to quiz questions are recorded. Additional data that stems from the contour tracking module or the marker tracking module can be recorded as well, providing analytics on physical explorations that would otherwise have had to be manually observed and recorded. Video conferencing capabilities were also added to the lesson player, allowing conferences over WebRTC (Web Real-Time Communication) [59]. A button touch triggers a video call request on the teacher's view of the application, which we call the teacher dashboard.

Figure 3-8: The lesson player: (a) Text and images added on the lesson planner get updated in real time on the player. (b) Quiz questions, like a multiple choice question, can be a step on the player. (c) A video conference can be launched over WebRTC from the player. A request will be sent to the teacher dashboard to start the two-way video conference.

3.2.4 Teacher Dashboard

The dashboard provides a live view of the classroom where a teacher can track the progress of each student. The system supports tracking of how much time was spent on a particular step, the answers provided to quiz questions, and the inputs given by students on a playground. We again leverage the capabilities of the Meteor web framework to provide live updates in this application. Built on the Meteor framework, the analytics and visualizations are updated in real time so the teacher can tailor the lesson to the class's progress. In a model where students each have their own LuminAR system or share multiple systems in groups, the application takes periodic snapshots and displays them on the dashboard so a teacher can easily detect and intervene when a student is in need of assistance. As described earlier, students can initiate video conferences from the lesson player.
When a video conference request is sent by a student, the teacher receives a notification on the dashboard and can participate in the two-way conversation through the interface.

Figure 3-9: The teacher dashboard: (a), (b) Graphs of the amount of time spent by each student per step and answers to quiz questions can be monitored from the dashboard. Toggling different comparison options changes the query to the database. (c) Snapshots from each system are taken and displayed on the teacher dashboard. (d) A teacher can view all configurations that a student has tried on the magnetic field playground through time.

Chapter 4

User Study

This chapter describes a formal user study that was conducted to evaluate the Enlight system and study the potential of AR to improve conceptual understanding of abstract scientific concepts. Students were brought in to complete a short activity on magnetic fields and to provide feedback on the system. The primary goals of the study were to analyze the effects of the system on student learning outcomes, as well as the usability of the system. To that end, the research questions are:

1. To what extent do students perceive the use of the system as engaging (i.e. motivating their interest) for demonstrating magnetic fields?

2. Which features of the demonstration did they find the most helpful for learning the concepts in the lesson?

3. How did students' conceptual understanding change with the use of the system?

4. What did the students find easy or difficult to use on the system? Was there a learning curve?

5. Can the system be deployed and put to use in a real classroom scenario?

4.1 Evaluation Methodology

4.1.1 Research Population

The study was spread out over two days, with 30 student volunteers participating. All participants were students from 8.02, MIT's introductory physics course on electricity and magnetism. The class is taught in the TEAL format, where students spend a larger portion of class time on group activities compared to the traditional lecture format. Visualizations of electricity and magnetism on Java applets and YouTube videos are also a part of the class. Students were recruited through email announcements to the class by Professor John Belcher, one of the lead instructors for 8.02.

4.1.2 Research Instruments and Procedure

The research instruments in the study included conceptual tests (pretest, posttest), an online survey and a think aloud session. The think aloud session involves participants thinking aloud as they complete a learning activity on magnetic fields. During the session, subjects were asked to verbalize their thoughts, feelings, and opinions while interacting with the system. The think aloud protocol was originally described by psychologist Karl Duncker as "productive thinking" and a way to understand his subjects' development of thought [13]. Researchers probe subjects as infrequently as possible during the session. When silences continue for several seconds, neutral cues are used to encourage subjects to think aloud without biasing the data by adding external ideas to the subjects' internal processes [15, 26]. In the usability field, the think aloud method was introduced by Clayton Lewis. The method allows researchers to gain first-hand observations of the procedures employed by participants in the process of completing the task at hand [35].
In our study, the task was described by a worksheet that was designed to accompany the magnetic fields playground app. The worksheet consisted of four parts, where each part introduces a new concept on dipoles. Consistent with our design for the system, which encourages exploration with less emphasis on the quantitative formulations of the topic, the worksheet was designed to be open ended, with minimal descriptions and no specific instructions on what participants could or could not do with the objects they were given. An example from one of the parts on the worksheet is presented in Figure 4-1.

Figure 4-1: Part 2 on the worksheet: "The magnetic dipole moment vector m is a vector which points out of the north pole of a magnet. The magnetic dipole moment vector for the large bar magnet shown below is upward. Feel free to move the magnet and the compass around. The vector m is shown to you in yellow on the system."

The pretest and posttest consisted of four multiple-choice questions, and in the posttest only, two free-response questions were also added. The pretest and posttest consisted of different conceptual questions. To illustrate the type of conceptual questions included in the tests, we include a sample question from the pretest in Figure 4-2.

Figure 4-2: One of the pretest conceptual questions: "A magnetic dipole with dipole moment m sits in a constant field B as shown above. The dipole will: (1) rotate clockwise and not move; (2) rotate counterclockwise and not move; (3) not rotate and move to the left; (4) not rotate and move to the right; (5) both rotate and move; (6) neither rotate nor move; (7) I don't know."

Both conceptual tests and the worksheet provided during the think aloud session were designed by Professor John Belcher.

4.1.3 Study Protocol

Participants arrived in groups of three and were given a prepared introduction before they were led to individual classrooms by three different researchers. Three LuminAR devices were brought out of our lab space at the MIT Media Lab and placed in three different classrooms. Participants were also given two Lego blocks containing magnets, a compass and the worksheet for the study.

Figure 4-3: Study Protocol

There were five parts to the study, completed individually by each participant:

Pretest
The pretest questions were completed on the system itself. The time spent by each participant on each question and their answers to the questions were recorded on the system.

User Training
In this part, users were prompted to press buttons, place objects in projected boxes and move the magnetic blocks around. Participants were only given prompts such as "Click the button" or "Place 3 items in the box below" for the tasks involved, and not instructions on how to complete the tasks. This part of the study was given as an introduction for participants who had not encountered a projected augmented reality interface before using the system. Figure 4-4 shows example steps in the user training part of the study.

Figure 4-4: Example steps in user training. Button touches (left). Placing magnets under the projection (right).

Think aloud session / Activity
Participants were handed the worksheet and the session was video recorded as they worked through the materials given. There were no time restrictions, and students were given the freedom to spend as much time as they wished on the activity.
The prompts that guided the students during the think aloud session were:

- What are you noticing during this demo?
- What do you think the lines / arrows / images mean?
- What is bugging you / confusing to you in this demo?
- What do you like in this demo?

These were written on a poster so the student could refer to them during the demo. Researchers were present during the think aloud session to answer any questions from the student and to refer to the prompts if silence continued for more than a few seconds. The video recordings from the session were subsequently transcribed. All direct quotes from students were then coded into generalized common themes that represented actual events [57]. Independent coders were used to ensure that the information derived from this phase was reliable. The two annotators are graduate students at George Mason University and were introduced to the project at the analysis phase. They were employed by Dr. Katherine McKnight to conduct an independent analysis that was then compared to our results. They provided an objective review of the transcripts and content analysis.

Posttest
The posttest questions were also completed on the system itself, where we recorded the time spent on each question and the answers to the questions.

Online Survey
After the study, an online survey was sent out to the students. The survey consisted of questions about the usability of the system, as well as questions on how much the students perceived the system as helpful to their learning. Survey responses consisted of both answer ranges and free responses. Participants were given an Amazon gift card upon completion of the study and the online survey.

Interviews
Follow-up questions were asked to supplement data from the think aloud sessions. This procedure was described as part of a two-step process in Johnstone et al.'s review of the think aloud method, and provides clarification of data derived from think aloud techniques [26]. Some of these questions were also asked before the think aloud session, to allow before and after comparisons of student responses. The questions asked at the beginning of the study were:

1. Have you studied magnetic fields before?

2. What do you know about magnetic fields?

3. What about magnetic fields did you find confusing? What is tricky to understand about magnetic fields? What is hard for you about understanding magnetic fields?

The questions that were asked after the participants had completed the posttest were:

1. Overall, what did you learn about magnetic fields through this demo?

2. What stood out?

3. What still confuses you? What questions do you still have?

4. What did you learn about the forces on the magnets? The factors that affect the forces on the magnets? The field lines?

5. Have there been interactive demos you have used or seen before on a similar (or the same) physics topic? If so, how did this compare?

6. If someone created a similar app for a tablet (show photo of the app running on the iPad), how do you think that would compare?

4.2 Data Analysis and Results

We discuss our findings in three different domains. The first relates to the cognitive domain, where we discuss how students performed on conceptual tests as compared to how students perceived the system as being helpful to their learning. The second
Finally, we highlight student preferences based on survey responses and common themes that emerged from the think aloud sessions.

4.2.1 Cognitive Domain

To help us measure the effect of augmented reality on students' conceptual understanding of magnetic fields, the questions we proposed were:
1. How did student performance change from pretest to posttest?
2. Which students improved? Which did not?

We first looked at comparisons between pretest and posttest scores across the group of students (n = 30). To help us further understand differences in learning gains, the class ranks of the students from their course were also used in the analysis. The class ranks provide additional data on the students' ability and level of conceptual understanding in electricity and magnetism, on top of the pretest and posttest results. Class rank is defined as the rank of the student in the Spring 2014 semester of the MIT 8.02 introductory physics course, ranging from 1 (top) to 800 (bottom) for all students in the course. The class ranks were calculated based only on exam scores, without including participation or homework grades. Refer to Figure 4-5(a) for the distribution of participants in the study from the overall 8.02 course of 800 students.

Figure 4-5: (a) Distribution of participants from 8.02, by class rank (1-800) and top/bottom half of the class. (b) Posttest scores vs time spent on activity: time spent on activity refers to the amount of time spent on the activity portion of the study where students were exposed to the magnetic fields playground.

Figure 4-6: Pretest and posttest scores: each column in the graph represents the pretest and posttest score of one student. The columns are sorted by the class rank of the student in 8.02.

Based only on test scores (Figure 4-6), there appears to be little or no correlation between the pretest and posttest scores or class rank across the board. Similarly, when we looked at the amount of time spent on the activity portion of the study (as a measure of the effects of augmented reality and of interest in using an augmented reality system), there appears to be little or no correlation between test scores and time spent learning on the system either (Figure 4-5(b)). However, when students were separated into two groups by class rank (the top 50% of the class for ranks 1-400 and the bottom 50% for ranks 401-800), there appears to be a significantly stronger correlation between time spent on activity and posttest scores for students with lower class ranks.

Figure 4-7: Posttest scores vs time spent on activity, plotted separately for the top 50% and the bottom 50% of the class by rank.
Based on Figure 4-7, for students from the bottom 50% of the class, spending more time with the system appears to improve overall posttest scores, but the same cannot be said about students from the top 50% of the class. This suggests that the learning effects from the activity may vary between different students. A student's prior knowledge or level of understanding of the topic may also have an effect on the learning outcomes.

We continued with a comparison of pretest and posttest scores for this group by calculating the relative improvement measure g [19], defined as follows:

g = (Posttest(%) - Pretest(%)) / (100 - Pretest(%))

The learning gain g measures relative improvement as a percentage of how much a student could have improved from his or her pretest score. We calculate both the Pearson correlation coefficient [45] and the Spearman correlation coefficient [50] to compare the learning gain g for each participant with the amount of time each participant spent on the activity. Table 4.1 shows the results for both groups: students from the top 50% of their class and students from the bottom 50% of their class.

Table 4.1: Correlation values between time spent on activity and learning gain g

  Measure                                 Bottom 50% (n = 12)   Top 50% (n = 18)
  Pearson correlation coefficient, r      0.8465                0.0195
  p-value from Pearson r                  0.002009              0.934966
  Coefficient of determination, R^2       0.7166                0.0004
  Spearman correlation coefficient, r     0.69138               0.01868
  p-value from Spearman r                 0.01276               0.94135

Figure 4-8: Time spent on activity vs learning gain g, plotted separately for the top 50% and the bottom 50% of the class by rank.

Preliminary comparisons from this study show a much stronger positive correlation between time spent on activity and learning gains for students from the bottom 50% of the class than for the top 50% (r = 0.8465 vs r = 0.0195). There appears to be a high (r = 0.8465) and significant (p < 0.005) correlation between time spent learning with the system and learning gains for the lower performing group. Given our smaller sample size, we include both the Pearson and Spearman correlation coefficients as a sanity check; both coefficients appear to show a high and significant correlation. It is interesting to note how vastly different the correlations are for the two groups even though both underwent the same set of activities and tests. We attribute this difference to a possible ceiling effect for the students who are already higher performers in their course. With little or no correlation between posttest scores and class rank, students with lower ranks in the course were performing at the same levels as students with higher ranks on our conceptual test. While we should note that the pretest and posttest questions were different, we deferred to Professor John Belcher's experience in the field for confidence in the measure of improvements in conceptual understanding of the topic.
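To make the calculation concrete, the following is a minimal sketch of how the learning gain g and the two correlation coefficients reported in Table 4.1 could be computed with SciPy. The arrays hold placeholder values, not the study data, and the snippet is an illustration of the measure rather than the analysis code used in the study.

```python
# Sketch: learning gain g and its correlation with time spent on the activity.
# The arrays below are illustrative placeholders, not the actual study data.
import numpy as np
from scipy import stats

pretest = np.array([25.0, 50.0, 75.0, 0.0])            # pretest scores in percent
posttest = np.array([75.0, 75.0, 100.0, 50.0])          # posttest scores in percent
time_spent = np.array([300.0, 620.0, 910.0, 1250.0])    # seconds on the activity

# Relative improvement: fraction of the possible gain actually achieved.
g = (posttest - pretest) / (100.0 - pretest)

# Pearson and Spearman correlations between time on activity and learning gain.
pearson_r, pearson_p = stats.pearsonr(time_spent, g)
spearman_r, spearman_p = stats.spearmanr(time_spent, g)

print(f"Pearson r = {pearson_r:.4f} (p = {pearson_p:.4f}), R^2 = {pearson_r**2:.4f}")
print(f"Spearman r = {spearman_r:.4f} (p = {spearman_p:.4f})")
```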
Figure 4-9: Self-reported helpfulness of object augmentation on learning, rated from 1 (not so helpful) to 5 (very helpful), for (a) the bottom 50% of the class, (b) the top 50% of the class, and (c) all students.

From the online survey completed by students after the study, we also obtained findings on the self-reported helpfulness of the system. The question we asked on the survey was: "Was the object augmentation helpful to understand the concept of magnetic fields?" and students were given an answer range between '1' (Not so helpful) and '5' (Very helpful). Out of the 30 students who participated in the study, 26 completed the online survey, and the results were consistent with our findings from the test scores. A larger percentage (20%) of lower performing students rated the helpfulness of object augmentation as very helpful (5), compared to only 6% of the higher performing students. The same argument extends when we compare the percentage of students who rated the helpfulness of object augmentation as 4 or higher. These findings indicate that object augmentation could potentially be helpful to students for whom current learning methods and interfaces are lacking.

The proposition that students learn in different ways is not new. Numerous learning style models, from the Kolb learning style indicator [31, 30] to the Gregorc style delineator [17] and the Felder-Silverman index of learning styles [16], have been developed over the past 25 years. In different models, students could cite a preference for a more "concrete, hands-on" approach or a more "abstract, written" approach [21]. More recently, a "digital natives" debate has also emerged, raising the idea that a new generation of students that has been immersed in technology could have different learning preferences which current education methods are not equipped to support. Bennett et al. have compiled a comprehensive review of the idea [3].

It is important to note that the participants in this study were all students of the TEAL format at MIT, where classrooms are technology enabled. Students are provided visualizations of concepts in Java applets or YouTube videos and complete part of their assignments online on the MITx site, where the open source version of the 8.02 class is offered. While all participants completed the same course on electricity and magnetism at MIT, it appears that the learning effects from the augmented reality system differed for different sets of students. Their posttest results showed no correlation with class rank, which indicates that lower performing students from the class were showing similar levels of conceptual understanding as higher performing students on the posttest. Moreover, for the group of lower performing students from the course, spending more time on the augmented reality activity seems to have a positive correlation with their learning outcomes. This suggests that the augmented reality approach has positive effects on a subset of students for whom current learning methods have not produced effective learning outcomes. This provides a strong reason to further pursue a larger and more rigorous study of the interface as a tool for science education.

4.2.2 Usability

To help us measure the ease of use of the projected augmented reality system as an interface, we look at two factors of usability: the perceived ease of use of the system and the learnability factor for users encountering an interface they have not been exposed to before.
We asked participants two questions on the online survey that relate to the usability of the system:
1. Did you find the buttons easy to activate? Yes or No
2. Did you find getting the system to recognize the magnets easy? Yes or No

On the LuminAR interface, over 63% of the participants found it easy to get the system to recognize the magnetic blocks. The buttons, however, were perceived as difficult to activate, with half of the students reporting as such. The percentages reported here do not account for the participants who did not complete the survey, as shown in Figure 4-10.

Figure 4-10: Self-reported ease of use of the interface. (a) Object recognition: 20 out of 26 students found it easy to get the system to recognize the magnets. (b) Projected buttons: 10 out of 26 students found the buttons easy to activate.

Another finding from the study relates to the usability of projected interfaces as a whole rather than a LuminAR-specific factor. During the user training phase of the study, we included a step that asked participants to place physical objects into a projected box. The prompt was "Place 3 items in the box". An image of what was projected is shown in Figure 4-11. We did not anticipate or prepare for the students' subsequent reactions to the prompt. To our surprise, a large portion of the students attempted to drag on-screen virtual elements into the box rather than place physical objects, like the magnets and compass they were given, into the box. Out of 30 students, 60% attempted to drag on-screen objects rather than physical objects. For 10% of the students the outcome is unknown, due to factors such as researchers handing objects to participants before they reacted to the step or video footage being unavailable.

Following that, we focused our attention on the 60% of students who attempted to drag on-screen items. In this group, more than 50% of the participants realized on their own that they had misinterpreted the instruction and reached for a physical object without further prompting from the researcher. This indicates that overall, more than 60% of participants were able to interpret the intended use of the system without assistance, excluding the unknown or unrecorded behaviors of 10% of the participants. This lends insight into the intuitiveness of interactions with physical objects, even when faced with an unfamiliar use case, for users whose first experience with a projected AR system was during the study.

Figure 4-11: User training: the step that asks the participant to place 3 items in the projected white box (left), and the behavior demonstrated by some participants attempting to drag a virtual object into the box (right).

Figure 4-12: User behavior with virtual and physical objects (left), and whether participants who attempted to drag on-screen objects had to be actively prompted to place 'real objects' in the box (right).

Overall, given open ended directions, the participants adapted to the interface without explicit guidance from the researchers in this study. The figures reported here, however, should be read with discretion, as the perceived usability of the system is relative to users' experiences with interfaces commonly available today.

4.2.3 Student Preferences

The analysis of the transcripts that we present here was provided by the annotators from George Mason University.
As described earlier, their role was to provide an objective view of the qualitative data. We highlight some of the common themes that emerged from the think alouds and their related quotes. The annotators had five overlapping transcripts, and the results were checked for consensus between the two annotators. In this analysis, we report only the presence or absence of a comment and not the actual total count of comments made by students, as some students made multiple positive or negative comments on certain aspects.

Physical, real objects
Students markedly enjoyed the physical aspect of the activity, with 18 students out of 30 independently making positive statements about being able to 'move around physical objects':

* "There is something so much more fun about playing with physical legos than with using a mouse to control little points on a screen."
* "I think this is really similar, but it is never quite the same moving around on the screen. I found it easier to just move things around than doing something on a screen. At least with a tablet, versus a computer screen, you are moving things around, but it is not the same as picking it up and moving it."

It was not only the physical aspect of moving things around that appealed to the students; many also stated that the inclusion of real objects lends an additional sense of reality and believability to the simulation. 9 students commented positively on how the magnets provided haptic feedback during the activity, while 15 students commented positively on the use of real compasses.

* "I guess this is more engaging because it allows you to do more hands on type activities. Also because these are actual magnets, i.e. if you bring them close together, you can actually feel force, you can feel them repelling or attracting. That is good because you see the arrows that tell you they are going to attract and as you bring them closer, you feel them going to attract. That is something you cannot get on a computer, i.e. that these two things are definitely pushing each other apart."
* "There is just a sense that it is a real thing and that you are seeing, where the projection is showing what is underlying the nature of it."
* "Because it is real and you are actually doing a lot of the stuff we do in physics they kind of just tell you about, and you believe them. When you get to do it with your hands, it very much reinforces the fact that these are real forces, that dipoles are not some abstract thing, vectors and everything, the cross product - that they are things in nature that we have discovered, not just made up."
* "Using the compass with the line is a good idea. It kind of helps you get a more accurate depiction of what is going on... because you can see it falling along the contour lines and actually orienting itself, which kind of adds a real thing to the concept."

Students also independently reasoned that the projected field lines helped reinforce their understanding of an abstract concept or helped them see the invisible aspect of a real phenomenon. 14 students commented on how they liked the projected field lines, and 11 students liked the force vectors that were also projected:

* "It was just cool to see real field lines from magnets. It is kind of an abstract concept, and seeing it right in front of you makes it more clear."
* "I think that some of the problems with E&M versus classical mechanics is that you see classical mechanics as it happens.
E&M you cannot see it, so, being able to put a magnet on the table and then all the lines - that was useful. Also, a lot of problems that other digital interfaces have is that it kind of all goes in one direction so you are still manipulating digital objects - there is no real physical analog. So the idea of having physical systems is good, especially having a physical compass, instead of a virtual compass, that points... a physical compass seems more real. I can make a digital compass point whatever with a few lines and a GL, but the physical compass kind of tied the concepts better."
* "This kind of system would be more useful where there is actual physical phenomena that you are modeling... where it is almost entirely conceptual, because (unclear from audio) you have an actual object that you are working with the digital interface so you kind of need a class where you have actual constructs."
* "It was cool to see exactly the force where it is pointing. The magnet still does the same thing along the lines, but to see that the green line is showing that the force... It was nice, later, having the force vectors also on top, so you could see how they all related to each other..."

Errors, technical difficulties
The magnetic field playground application, however, still failed to capture some aspects of the concepts presented that students wished were present. Errors that stem from failures of the object recognition system, interface bugs and other technical difficulties of a real-world deployment scenario were also raised during the think alouds. Some were related to the application design, while others are areas of improvement that will have to be addressed in the next iterations of development.

Responsiveness - Several students wished that the system was more responsive, with 9 students commenting on the speed of the screen responding to moving objects and 6 commenting on the sensitivity of the buttons:

* "Something that is bugging me is that whenever I move it, it disappears for several moments until I move my hand."
* "I wish it was a little more responsive, but I guess that is because I am spoiled: I am used to using the touch screen on my iPhone or on computers."
* "The sensitivity is a little bit. Sometimes I have to twice - I don't know if that is going to get in the way."

Wish for more elements in the demonstration - Some students suggested improvements for the visualizations that were projected. 9 students wished there were additional arrows that showed torque or other vectors related to the concept.

* "It would be useful if there was another arrow showing the torque on the dipole at the same time, but, they are kind of separate ideas so maybe that would be something you could add on at the end, as a Part 5, where you could show the torque and force together."
* "One thing that I am noticing is that sometimes the field lines are broken up so they are not connected for me. I guess it would help to have them connected so I can try to move the compass around field lines to be able to trace it exactly on the dipole."
* "I guess I was sort of wondering why some of the lines seem to like break off. And I really want to know what happens right here. This is connected, like, these (three lines are? 0:00:14) reconnected. Why not the other ones? Because they can't cross.
So yeah I kind of wish I would know what's going on here with the dark spots."
* "I think it could be helpful if you could move them and see how it (force) is changing as you move them."

Technical difficulties (calibration and object recognition) - As a real deployment of a system in an uncontrolled environment, we ran into occasional technical difficulties that were raised by the students. 10 students commented on errors in object recognition:

* "Something that may help is being able to recognize whether it is a hand or an object because sometimes when I put my hand it would think it was a magnet but it was just my hand."
* "It looks to me that the calibration of the fields is just slightly askew. It still gets the point across, it's just not perfect - it's hard to get it perfect. It makes sense though."

Overall findings
Broadly, the findings from the transcripts appear to be consistent with the findings from the online survey. The common themes that emerged in the transcripts from the think aloud sessions were amplified in the survey responses. In the online survey, we asked two free response questions that relate to student preferences:
1. Would you prefer this system over visualizations on tablets/laptops/desktops?
2. What would make the system more enjoyable?

For the first question, annotators classified a student response as 'Yes' only if it was explicitly stated in the response. Student responses were classified as 'I don't know' if the student claimed they were 'not sure' or that 'the experience was novel and I need more time to tell'. All other responses were classified as 'No'. Figure 4-13 shows the results from the survey. Overall, more students report a preference for using a tangible simulation on the LuminAR system when compared to visualizations on tablets/laptops/desktops. Students who report a preference for virtual simulations preferred the mobility and user friendliness of current interfaces.

Figure 4-13: Student preferences based on survey responses: interface comparison (response categories: Yes; Yes, it improves; Yes, but only marginally; No; I don't know; N/A).

For the second question, annotators identified and categorized common themes based on the students' responses, where a common theme is a statement made by three or more students. Figure 4-14 shows the results from the survey.

* 'Use of hands with magnets' (3 students) refers to responses that wished for the ability to keep their hands on top of the magnets while the visualization of the field lines is updated. With the current implementation, users have to remove their hands from the magnets for approximately 1 second before the field lines are calculated and shown.
* 'Buttons improve' (9 students) refers to responses that wish for projected buttons that are more responsive and predictable.
* 'Quicker simulation' (11 students) refers to responses that wish for a more responsive simulation. This is easily rectifiable with the use of a more powerful computer than the current single board computer on the LuminAR system.
* 'Application design related' (5 students) refers to responses that suggest improvements related to the application design, like 'the color of the arrows projected should be red' or 'I'd like to be able to go back to a previous part'.
Figure 4-14: Student preferences based on survey responses: areas for improvement.

4.2.4 Discussion and Conclusion

Even though doubts were raised about the capabilities of the system, a larger percentage of students professed a preference for the projected system when compared to screen-based interfaces. While the results presented are preliminary, they demonstrate potential for a different type of interface that may complement learning in ways that current interfaces and learning methods are lacking. This appears to be particularly true for students who are performing at a lower level compared to their peers from the same course. We made several accomplishments that demonstrate the possibility for such a use case within the Enlight project as an extension of the LuminAR project:

* We brought the system out of a controlled lab space and tested it at multiple locations, classrooms and conference rooms. The system was not only deployed during the user study but also at locations outside of MIT. Initially, two devices were deployed in the white:scape project at Steelcase headquarters in Grand Rapids, Michigan. With Enlight, two copies of the system are also deployed at Steelcase as demo systems. Several more have been tested outside of the Media Lab in other extensions of LuminAR that explored the application of augmented reality in manufacturing.
* A non-trivial number of users completed a full-length study without instructions on how to use the system. Participants started completing tasks on the system within minutes of exposure to the system.
* We provided a user interface for educators and users who are not software developers to incorporate augmented reality with commonly used media for lesson planning.

While the vision of running a projected system in a real classroom scenario has not yet been realized, the setups are in place to carry out a more advanced study of the effects of augmented reality on learning. Further explorations on a larger scale should be carried out to fully discover the potential of projected augmented reality as an approach to science education.

Chapter 5

Future Work

The next steps in the project will be to design additional augmented simulations that utilize 3D space and interactive projections as an interface. Apart from being able to show the invisible (magnetic fields) and the change of visualizations through time (pendulum), a projected interface allows interactions that are off the surface of the table, a space we have not fully explored in this iteration of the project. Examples of such interactions have been developed in our short capability demo applications:

* Boids - This application takes the Boids simulation to the next level. Instead of clicking or touching a point on the screen to steer the set of flocking points to a spot, a user can place an object in the space to attract the flock of points. A user can also steer the flock of points with a mid-air gesture, or trap the projected points in his hands as the set of points flocks around in his palms.
* Dynamic projection - This application plays an optical illusion on the user. In a top-down projection, the size of a projected image becomes smaller the closer the surface is to the projector. By detecting the distance of the surface from the projector, we can enlarge the projected image so that its size remains the same even as the surface gets closer to or further away from the projector. This gives the user the illusion of picking up a virtual object in 3D space. Minimal sketches of both interactions are given below.
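The first sketch illustrates the kind of update rule behind the Boids demo: a standard boids step with an extra attraction term toward the position of a detected physical object. The array shapes, weights and the hard-coded attractor are illustrative assumptions, not the code of the deployed application, where the attractor position would come from the object or hand tracker.

```python
# Sketch: one update step of a boids flock attracted toward a tracked object.
# Weights and the attractor value are illustrative; in the real system the
# attractor position would come from the object/hand tracker each frame.
import numpy as np

def boids_step(pos, vel, attractor, dt=1.0 / 30.0,
               w_cohesion=0.01, w_alignment=0.05, w_separation=0.05,
               w_attract=0.08, min_dist=20.0, max_speed=150.0):
    """pos, vel: (N, 2) float arrays in projector pixel coordinates."""
    center = pos.mean(axis=0)
    avg_vel = vel.mean(axis=0)
    new_vel = vel.copy()
    for i in range(len(pos)):
        cohesion = (center - pos[i]) * w_cohesion        # steer toward flock center
        alignment = (avg_vel - vel[i]) * w_alignment     # match the flock's velocity
        separation = np.zeros(2)
        for j in range(len(pos)):                        # push away from close neighbours
            offset = pos[i] - pos[j]
            d = np.linalg.norm(offset)
            if 0 < d < min_dist:
                separation += offset / d
        separation *= w_separation
        attraction = (attractor - pos[i]) * w_attract    # pull toward the physical object
        new_vel[i] += cohesion + alignment + separation + attraction
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:                            # clamp speed for stability
            new_vel[i] *= max_speed / speed
    return pos + new_vel * dt, new_vel

# Example use with a placeholder attractor at pixel (320, 240):
pos = np.random.rand(50, 2) * 640.0
vel = np.zeros((50, 2))
pos, vel = boids_step(pos, vel, attractor=np.array([320.0, 240.0]))
```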
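The second sketch illustrates the distance compensation behind the dynamic projection demo, assuming a simple pinhole-style model in which the size of the projection on the surface grows linearly with the surface's distance from the projector. The calibration distance and the depth reading are placeholders for whatever the depth sensor reports, not values from the actual LuminAR calibration.

```python
# Sketch: keep a projected image at a constant physical size as the surface moves.
# Assumes a pinhole-style projector model: on-surface size is proportional to the
# surface distance, so the source image is scaled inversely with that distance.

def compensation_scale(surface_distance_mm: float, reference_distance_mm: float) -> float:
    """Scale factor for the source image so that its projected physical size
    matches what it would be at the reference (calibration) distance."""
    return reference_distance_mm / surface_distance_mm

# Example: the image was calibrated for a table 800 mm below the projector.
# If a hand-held surface is raised to 400 mm, the projection would appear half
# as large, so the source image is enlarged by a factor of 2.
print(compensation_scale(400.0, 800.0))  # -> 2.0
```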
Figure 5-1: Boids: an image burst displaying how the flock of points is 'trapped' in the palm of a user.

With this exploration, we are only scratching the surface of the types of abstract concepts that can be taught on a projected system. With this form of interaction-based learning, which encourages a learner to explore a concept qualitatively before diving into the quantitative formulations, perhaps the cognitive barriers to learning complex concepts can be lowered, and learners from younger age groups can gain intuitions about concepts that are now reserved for college courses. Thus, closer collaborations with content experts and educators will be needed to develop meaningful learning materials that fully exploit the capabilities of the system. Studies at a larger scale, involving control groups and real classroom scenarios, should also be carried out to determine the applicability of such an interface in education.

Appendix A

User Study Materials

A.1 Rundown for User Study

Rundown for study

1. Introduction + de-biasing the student
The lab is developing this projected augmented reality platform and the platform is now being applied in education. For now, the topic of focus is Physics and we've been asked to carry out this research to help them determine if this is helpful to you. The topic of focus for today is magnetic fields. You do not have to have background in the topic. Basically, you'll start with a pre-test. There's no need to get nervous about the pre-test, it's just for us to get a sense of how much you know about the topic before completing the activity. Next, you'll go through a user training, where you'll be given tasks that will help you get used to interacting with the system, like touches, object detection... getting a sense of how the color detection works. After that, you'll be given a handout and there's an activity with several parts that you'll go through on the system. During this activity, there's a couple of questions we'd like you to think about and provide feedback to us about. These are mostly on the user experience instead of the material... like 'What did you notice in the visualization?'... We would just like to know what you think, so share any thoughts that you have with the researcher. The last part of the study is the post-test, which is similar to the pre-test. You'll be video recorded during the study, but only the interaction space is within view, so only your voice and hands will be captured.
2. Have student sign consent form
3. Turn on video & audio recorder
4. PRE-TEST
5. USER-TRAINING
   * Button presses
   * Contour counting
   * Magnet orientation
   * Magnet color detection
6. Start with interviews about what the students already know about magnetic fields:
   A. Have you studied magnetic fields before?
   B. What do you know about magnetic fields?
   C. What about magnetic fields did you find confusing? What is tricky to understand about magnetic fields? What is hard for you about understanding magnetic fields?
7. Introduce the demonstration, e.g., "What if you learned about magnetic fields in this way..." then launch the demonstration.
8. ACTIVITY: Start think aloud, following prompts. Prompts that will guide the student during the think aloud session:
   a. What are you noticing during this demo?
   b. What do you think the lines / arrows / images mean?
   c. What is bugging you / confusing to you in this demo?
   d. What do you like in this demo?
   These will be written on a poster so the student can refer to them during the demo.
9. POST-TEST
10. Follow up the demo & think aloud session with interview questions about the experience, especially as it relates to their learning about magnetic fields, e.g.:
   a. Overall, what did you learn about magnetic fields through this demo?
      i. What stood out?
      ii. What still confuses you? What questions do you still have?
   b. What did you learn about Z (these should be specific concepts within topic X that are important to understand)?
      i. The forces on the magnets?
      ii. The factors that affect the forces on the magnets?
      iii. The field lines?
   c. Have there been interactive demos you've used or seen before on a similar (or the same) physics topic? If so, how did this compare?
   d. If someone created a similar app for a tablet (show photo of app when app is running on the iPad), how do you think that would compare?
11. Wrap up with the student, thank them for their time
12. Email out user feedback survey on Google Forms. Upon completion of survey, provide incentive.

A.2 Pretest Questions

The pretest questions, shown with their accompanying figures in Figure A-1, were:

Question 1: If red denotes the north magnetic pole, then in the figure above, which of the two figures shows the correct sense of the magnetic field? (a) Left figure (b) Right figure

Question 2: The earth looks like it contains a giant bar magnet. Which one of the following statements is correct? (a) The geographic north pole of the earth is a north magnetic pole. (b) The geographic north pole of the earth is a south magnetic pole.

Question 3: Two vectors A and B are shown above. The direction of A x B is: (a) Up (b) Down (c) Left (d) Right (e) Into the page (f) Out of the page

Question 4: A magnetic dipole with dipole moment m sits in a constant field B as shown above. The dipole will: 1. Rotate clockwise and not move 2. Rotate counterclockwise and not move 3. Not rotate and move to the left 4. Not rotate and move to the right 5. Both rotate and move 6. Neither rotate nor move 7. I don't know

Figure A-1: Pretest questions.

A.3 Posttest Questions

Question 1: There were two green arrows in the visualization you were shown. Explain in your own words what you think the physical meaning of those arrows was, and why.

Question 2: If red denotes the north magnetic pole, then in the figure above, the magnetic dipole moment vector m of the top bar magnet (the black rectangle) points: (a) Up (b) Down (c) Left (d) Right (e) Impossible to tell from the information given.

Question 3: If red denotes the north magnetic pole, then in the figure above, the magnetic dipole moment vector m of the bottom bar magnet (the black rectangle) points: (f) Up (g) Down (h) Left (i) Right (j) Impossible to tell from the information given.

Question 4: In the figure above, the magnetic dipole moment vector m of the left bar magnet (the black rectangle) points: (a) Up (b) Down (c) Left (d) Right (e) Impossible to tell from the information given.

Question 5: In the figure above, the left bar magnet, right at the instant shown, will try to: (a) Rotate clockwise (b) Rotate counterclockwise (c) Not rotate at all (d) Impossible to tell from the information given.

Figure A-2: Posttest questions.

Question 6: Consider the following statement: Two magnetic dipoles free to rotate and move will almost always end up attracting one another. This statement is: true / false. Below, give the reasoning process you used to come to your answer.
A.4 Worksheet for User Study

Enlight - Augmented Physics Education User Study
Activity Description

Part 1
A compass needle in the magnetic field of the earth behaves such that the north-seeking pole of the compass needle, colored red, points towards geomagnetic north when the needle comes to rest. A bar magnet consists of two poles, designated the north-seeking pole (N) and the south-seeking pole (S). The north-seeking pole, or north pole, is always colored red. Magnetic field lines of the bar magnet leave from the north pole and enter the south pole. When placed in the magnetic field of a bar magnet, a compass needle will always align itself with the local magnetic field, so that a distribution of small compass needles will orient themselves in the presence of a bar magnet as shown below.
Try replicating the image above with a magnet on the system. Feel free to move the magnet and the compass around.

Part 2
The magnetic dipole moment vector m is a vector which points out of the north pole of a magnet. The magnetic dipole moment vector for the large bar magnet shown below is upward. Feel free to move the magnet and the compass around. The vector m is shown to you in yellow on the system.

Part 3
A magnetic dipole with magnetic dipole moment vector m sitting in an external magnetic field B will feel a torque T = m x B, which tends to align the direction of the dipole with the direction of the external field, like how the compasses are aligned in the figure above. Try this out with two magnets. Feel free to move the magnets and the compass around.

Part 4
When holding two bar magnets close to each other, the like poles will repel each other while the opposite poles attract, as shown below. Try this out with two magnets. Feel free to move the magnets and the compass around. The force on each of the magnets is shown to you in green.

A.5 Online Survey

Augmented Reality in Physics Education Survey
Thank you for participating in the Enlight study: Effects of Augmented Reality in the Learning of Concepts in Physics.

* Name
* How was your experience of pressing "augmented" buttons compared to using a touch screen? (1: I like the touch screen more ... 5: I like the buttons more)
* Did you find the buttons easy to activate? Yes / No
* How would you improve the buttons?
* Did you find getting the system to recognize the magnets easy? Yes / No
* Was the object augmentation helpful to understand the concept of magnetic fields? (1: Not so helpful ... 5: Very helpful)
* Would you prefer this system over visualizations on tablets/laptops/desktops?
* What would make the system more enjoyable to use?
* Do you have any other comments or suggestions?

Bibliography

[1] John Belcher. Studio Physics at MIT, 2001. http://web.mit.edu/jbelcher/www/PhysicsNewsLetter.pdf.
[2] Professor John Belcher, MIT. http://web.mit.edu/jbelcher/www/.
[3] Sue Bennett, Karl Maton, and Lisa Kervin.
The 'digital natives' debate: A critical review of the evidence. British Journal of Educational Technology, 39(5):775-786, 2008.
[4] Charles C Bonwell and James A Eison. Active Learning: Creating Excitement in the Classroom. 1991 ASHE-ERIC Higher Education Reports. ERIC, 1991.
[5] Yu-Chien Chen. A study of comparing the use of augmented reality and physical models in chemistry education, 2006.
[6] Coursera. https://www.coursera.org/.
[7] Catherine H. Crouch and Eric Mazur. Peer instruction: Ten years of experience and results. American Journal of Physics, 69(9):970-977, 2001.
[8] Sebastien Cuendet, Quentin Bonnard, Son Do-Lenh, and Pierre Dillenbourg. Designing augmented reality for the classroom. Computers & Education, 68:557-569, October 2013.
[9] T. De Jong, E. Martin, J.-M. Zamarro, F. Esquembre, J. Swaak, and W.R. Van Joolingen. The integration of computer simulation and learning support: An example from the physics domain of collisions. Journal of Research in Science Teaching, 36(5):597-615, 1999.
[10] C. Dede, M. C. Salzman, and R. Bowen Loftin. ScienceSpace: Virtual realities for learning complex and abstract scientific concepts. In Proceedings of the 1996 Virtual Reality Annual International Symposium (VRAIS '96), pages 246-, Washington, DC, USA, 1996. IEEE Computer Society.
[11] Son Do-Lenh, Patrick Jermann, Amanda Legge, Guillaume Zufferey, and Pierre Dillenbourg. TinkerLamp 2.0: Designing and evaluating orchestration technologies for the classroom. In Proceedings of the 7th European Conference on Technology Enhanced Learning, EC-TEL'12, pages 65-78, Berlin, Heidelberg, 2012. Springer-Verlag.
[12] Son Do-Lenh, Frederic Kaplan, and Pierre Dillenbourg. Paper-based concept map: The effects of tabletop on an expressive collaborative learning task. In Proceedings of the 23rd British HCI Group Annual Conference on People and Computers: Celebrating People and Technology, BCS-HCI '09, pages 149-158, Swinton, UK, 2009. British Computer Society.
[13] K. Duncker and L.S. Lees. On problem-solving. Number v. 58 in Psychological Monographs. The American Psychological Association, 1945.
[14] edX. https://www.edx.org/.
[15] K Anders Ericsson and Herbert A Simon. Verbal reports as data. Psychological Review, 87(3):215, 1980.
[16] Richard M Felder and Linda K Silverman. Learning and teaching styles in engineering education. Engineering Education, 78(7):674-681, 1988.
[17] Anthony F Gregorc and Helen B Ward. A new definition for individual. NASSP Bulletin, 61(406):20-26, 1977.
[18] GRID: the iOS app. http://www.binarythumb.com/.
[19] Richard R Hake. Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66(1):64-74, 1998.
[20] Marcy Hamby Towns and Edward R. Grant. "I believe I will go out of this class actually knowing something": Cooperative learning activities in physical chemistry. Journal of Research in Science Teaching, 34(8):819-835, 1997.
[21] Thomas F. Hawk and Amit J. Shah. Using learning style instruments to enhance student learning. Decision Sciences Journal of Innovative Education, 5(1):1-19, 2007.
[22] Maria Blanca Ibáñez, Angela Di Serio, Diego Villaran, and Carlos Delgado Kloos. Experimenting with electromagnetism using augmented reality: Impact on flow student experience and educational effectiveness, 2014.
[23] B. Inhelder and J. Piaget.
The growth of logical thinking from childhood to adolescence: An essay on the construction of formal operational structures. Basic Books, 1958.
[24] G. Jacucci, A. Oulasvirta, T. Psik, A. Salovaara, and Wagner. Augmented reality painting and collage: Evaluating tangible interaction in a field study. In Proceedings of the Tenth IFIP TC13 International Conference on Human-Computer Interaction (INTERACT '05), pages 43-56, 2005.
[25] David W Johnson, Roger T Johnson, and Karl A Smith. Active learning: Cooperation in the college classroom. 1991.
[26] Christopher J Johnstone, Nicole A Bottsford-Miller, and Sandra J Thompson. Using the think aloud method (cognitive labs) to evaluate test design for students with disabilities and English language learners. Technical Report 44, National Center on Educational Outcomes, University of Minnesota, 2006.
[27] Hannes Kaufmann. Collaborative augmented reality in education. Institute of Software Technology and Interactive Systems, Vienna University of Technology, 2003.
[28] Marcia W Keyser. Active learning and cooperative learning: Understanding the difference and using both styles effectively. Research Strategies, 17(1):35-44, 2000.
[29] The Khan Academy. https://www.khanacademy.org/.
[30] Alice Y Kolb and David A Kolb. Learning styles and learning spaces: Enhancing experiential learning in higher education. Academy of Management Learning & Education, 4(2):193-212, 2005.
[31] David A Kolb et al. Experiential learning: Experience as the source of learning and development, volume 1. Prentice-Hall, Englewood Cliffs, NJ, 1984.
[32] Layar. https://layar.com/.
[33] Learning Catalytics. http://learningcatalytics.com/.
[34] Lego Duplo. http://www.lego.com/en-us/duplo.
[35] C. Lewis. Using the "Thinking Aloud" Method in Cognitive Interface Design. Research report, IBM T.J. Watson Research Center.
[36] Natan Linder and Pattie Maes. LuminAR: Portable robotic augmented reality interface design and prototype. In Proceedings of UIST, 2010.
[37] Natan Linder and Pattie Maes. The design evolution of LuminAR: A compact and kinetic projected augmented reality interface. In CHI '12 Extended Abstracts on Human Factors in Computing Systems, CHI EA '12, pages 1435-1436, New York, NY, USA, 2012. ACM.
[38] E. Mazur. Peer Instruction: A User's Manual. Series in Educational Innovation. Prentice Hall, 1997.
[39] E. Mazur. Peer instruction: Getting students to think in class. In Edward F. Redish and John S. Rigden, editors, The Changing Role of Physics Departments in Modern Universities, Part Two: Sample Classes, AIP Conference Proceedings, pages 981-988. American Institute of Physics, 1997.
[40] Professor Eric Mazur, Harvard. http://mazur.harvard.edu/ericmazur.php.
[41] Meteor (web framework). https://www.meteor.com/.
[42] Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino. Augmented reality: A class of displays on the reality-virtuality continuum. In Proceedings of Telemanipulator and Telepresence Technologies, volume 2351, 1994.
[43] Oculus Rift. http://www.oculusvr.com/.
[44] Seymour Papert. Mindstorms. Basic Books, 1980.
[45] Karl Pearson. Note on regression and inheritance in the case of two parents. Proceedings of the Royal Society of London, 58(347-352):240-242, 1895.
[46] Caroline Perry. In AP 50, students own their education. http://www.seas.harvard.edu/news/2013/09/in-ap-50-students-own-their-education. Accessed: 2014-07-18.
[47] MIT Institute Archives & Special Collections: Physical Science Study Committee, 1956.
http://libraries.mit.edu/archives/exhibits/pssc/.
[48] Ramesh Raskar, Greg Welch, Kok-Lim Low, and Deepak Bandyopadhyay. Shader lamps: Animating real objects with image-based illumination. In Proceedings of the 12th Eurographics Workshop on Rendering Techniques, pages 89-102, 2001.
[49] Brett E Shelton and Nicholas R Hedley. Using augmented reality for teaching earth-sun relationships to undergraduate geography students. In Augmented Reality Toolkit, The First IEEE International Workshop, 8 pp. IEEE, 2002.
[50] C. Spearman. The proof and measurement of association between two things. The American Journal of Psychology, 15(1):72-101, 1904.
[51] Steelcase - Bivi Table for Two by Turnstone. http://myturnstone.com/products/bivi-table-for-two/.
[52] Steelcase - Node: seating for the active learning classroom. http://www.steelcase.com/en/products/category/educational/seating/node/pages/node.aspx/.
[53] Steelcase - Verb. http://www.steelcase.com/en/products/category/educational/tables/overview/pages/verb.aspx.
[54] Lana Trey and Samia Khan. How science students can learn about unobservable phenomena using computer-based analogies. Computers & Education, 51(2):519-529, 2008.
[55] John S. Underkoffler. The I/O Bulb and the Luminous Room. PhD thesis, Massachusetts Institute of Technology, Dept. of Architecture, Program in Media Arts and Sciences, 1999.
[56] RCV Lab, Queen's University. ARPool. In Augmented World Expo (AWE) 2013, Santa Clara, CA, 2013.
[57] Maarten W Van Someren, Yvonne F Barnard, Jacobijn AC Sandberg, et al. The think aloud method: A practical guide to modelling cognitive processes, volume 2.
[58] E. von Glasersfeld. The construction of knowledge: Contributions to conceptual semantics. Systems Inquiry Series. Intersystems Publications, 1987.
[59] WebRTC (Web Real-Time Communication). http://www.webrtc.org/.
[60] Ben Weissmann, Natan Linder, Rony Kubat, and Pattie Maes. Lens: A JavaScript SDK for building web-based augmented reality applications on the LuminAR platform. In EECScon, 2013. Presented as a poster at EECScon 2013, Cambridge, MA, USA.
[61] Pierre Wellner. Interacting with paper on the DigitalDesk. Communications of the ACM - Special issue on computer augmented environments: back to the real world, pages 87-96, 1993.
[62] Y. Dori and J. Belcher. How does technology-enabled active learning affect undergraduate students' understanding of electromagnetism concepts? The Journal of the Learning Sciences, pages 243-279, 2005.
[63] Ryder Ziola, Melanie Kellar, and Kori Inkpen. DeskJockey: Exploiting passive surfaces to display peripheral information. In 11th IFIP TC13 Conference on Human-Computer Interaction (INTERACT 2007). Springer Verlag, September 2007.