Animating the Inanimate using Aurasma: Applications for Deaf students
Becky Sue Parton and Robert Hancock
Southeastern Louisiana University
USA
becky.parton@selu.edu
robert.hancock@selu.edu
In the past two decades, researchers and developers have attempted to make a link between the
physical world and the digital world through a variety of techniques (Chipman et al., 2006; Price, 2008).
An exhaustive history of such approaches is beyond the scope of this session, but two examples will
serve to illustrate.
Barcodes, both traditional ones and 2D (QR codes), have been implemented in educational settings. For
example, Porter (2001) taped barcode strips into stories so that elementary students could link to a
video presentation on laserdisc. The major drawback to this approach was the need for a barcode scanner. An early study by the authors duplicated this concept using an inexpensive CueCat barcode
scanner to trigger video presentations, but the process was still cumbersome due to the specialized
equipment needed (Parton, Hancock, & Mihir, 2010). A project focused on university students, called 'PaperLinks', involved embedding barcodes within printed documents that were read by a Bluetooth-enabled barcode scanner attached to a personal digital assistant (PDA), but again the researchers reported that the scanner was a barrier (Hitz & Plattner, 2004). More recently, 2D barcodes have gained popularity in pop culture, showing up in magazines, on billboards, on cups, and even on grave headstones. These codes are most often scanned with a cell phone camera. With free programs, a person can take a photo of the code and then be directed to a variety of information, including web-based multimedia. In academia, these codes are being integrated into field trips, textbooks, and more.
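To illustrate how lightweight this kind of physical-to-digital link has become, the short Python sketch below decodes a QR code from a photograph and opens the encoded web address. The image file name is a placeholder, and the sketch only illustrates the general technique; it is not a description of any specific classroom system mentioned above.

    # Minimal sketch: decode a QR code from a photo and open the URL it encodes.
    # Requires OpenCV (cv2); the image file name is a hypothetical example.
    import webbrowser
    import cv2

    image = cv2.imread("qr_photo.jpg")           # photo containing the 2D barcode
    detector = cv2.QRCodeDetector()
    url, points, _ = detector.detectAndDecode(image)

    if url:                                      # empty string means no code was found
        print("Decoded:", url)
        webbrowser.open(url)                     # jump to the linked multimedia content
    else:
        print("No QR code detected in the image.")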
Second, Radio Frequency Identification (RFID) technologies were historically used for inventory
purposes, but have also been integrated into classrooms. RFID tags are easier to read than traditional
barcodes since the scanner and the tag do not have to physically touch in order to activate. For
example, Sung, Levisohn, Song, Tomassetti, & Mazalek (2007) created a system called the ‘Shadow Box’
which consisted of a stationary RFID reader with an output monitor. When three- to four-year-old children placed animal blocks with embedded RFID tags on the reader, the output monitor would show
feedback such as whether it matched the written word for the animal that was requested. The authors
received a federal grant to develop an RFID-based approach, called 'LAMBERT', for teaching young Deaf children vocabulary words in American Sign Language (ASL). Over a two-year period, 500 concrete nouns represented by real toys were tagged with RFID cards. When a toy was scanned, the children saw a short video with pictures of the item along with a video clip of the word in ASL (Parton & Hancock, 2008; Parton, Hancock, Crain-Dorough, & Oescher, 2009). The scanning process, even for very young children (1-2 years old), is easy to accomplish. This approach has been very successful in terms of
student engagement, and led to a second project to encourage bilingual literacy development through
the use of interactive storybooks. Pages of the book, written in English, carry RFID tags that link to videos of the ASL translation (Parton & Hancock, 2011). Although effective, the expense and necessity of the specialized RFID reader and tags are still limiting factors.
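To make the LAMBERT-style workflow concrete, the sketch below pairs RFID tag identifiers with local ASL video files and opens the matching clip when a tag is read. The serial port, tag identifiers, and file names are assumptions made for illustration; they are not details of the actual grant-funded system.

    # Illustrative sketch of an RFID-to-ASL-video lookup in the spirit of LAMBERT.
    # Assumes a serial RFID reader (pyserial) and local video files; the port,
    # tag IDs, and file names are hypothetical.
    import webbrowser
    from pathlib import Path
    import serial

    ASL_VIDEOS = {
        "04A3B2C1": "videos/ball_asl.mp4",   # toy ball -> ASL sign for "ball"
        "04D4E5F6": "videos/dog_asl.mp4",    # toy dog  -> ASL sign for "dog"
    }

    reader = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)

    while True:
        tag_id = reader.readline().decode("ascii", errors="ignore").strip()
        if not tag_id:
            continue                          # nothing scanned in this interval
        video = ASL_VIDEOS.get(tag_id)
        if video:
            # Open the matching ASL clip with the default player/browser.
            webbrowser.open(Path(video).resolve().as_uri())
        else:
            print(f"Unknown tag: {tag_id}")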
It is with this knowledge of the benefits of connecting physical and digital data, along with the constraints of either a barcode or an RFID bridge, that the research team sought alternative
approaches. Aurasma is one such tool. It is an application available on mobile platforms such as the iPhone, iPad 2, and Android devices that allows for augmented reality without the need for barcodes or RFID tags (Aurasma, n.d.). For developers, the setup process is straightforward. A photo is taken of
any item that will serve as a trigger – for example, a statue at a museum, the page of a book, or a toy. Then that trigger is attached to the related video and uploaded to the server. For users, the process is very simple as well. The user points the mobile device's camera at the real-world object, and when a match is detected, the linked digital content (i.e., a video) starts automatically.
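The matching step can be pictured as ordinary image recognition. The sketch below uses OpenCV feature matching to decide whether a stored trigger photo appears in a camera frame; the file names and thresholds are illustrative assumptions, and this is a conceptual stand-in rather than Aurasma's actual recognition pipeline or API.

    # Conceptual sketch: decide whether a camera frame contains a stored trigger
    # image by counting ORB feature matches. File names and thresholds are
    # illustrative assumptions, not Aurasma internals.
    import cv2

    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    trigger = cv2.imread("trigger_statue.jpg", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)

    _, trigger_desc = orb.detectAndCompute(trigger, None)
    _, frame_desc = orb.detectAndCompute(frame, None)
    if trigger_desc is None or frame_desc is None:
        raise SystemExit("Could not extract features from one of the images.")

    matches = matcher.match(trigger_desc, frame_desc)
    good = [m for m in matches if m.distance < 40]   # keep only close matches

    if len(good) > 25:                               # enough agreement -> recognized
        print("Trigger recognized: play the linked ASL video.")
    else:
        print("No trigger in view.")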
Combining tangible interfaces (real world objects that control computer-based activities), augmented
reality (real world environments that are enhanced by virtual imagery or other sensory input), and
mobile devices in this manner continues the team's previous line of research; thus, two initial studies are being conducted to examine the feasibility of the approach, specifically in regard to its use with Deaf
students. For the first study, the researchers chose 25 of the 500 toys from the original LAMBERT kit
along with the book, Lambert’s Colorful World, and retrofitted them to work with the Aurasma app
rather than with the RFID system. Each toy or book page was photographed and linked to the videos that had already been created during previous grant work. Currently, the systems are being field-tested in two schools for the Deaf (in preschool and kindergarten rooms, respectively), and the evaluation
responses will be compared to the original study to analyze the differences, if any, in implementation
and user feedback. The researchers are especially interested in determining the age level that is
comfortable and capable of handling a mobile device in the manner needed to trigger the interactive
content. Therefore, a second study was designed to focus on older Deaf students (middle school). The
research team is currently working with a local statue museum to photograph all of the pieces of art
(approximately 30) and create enrichment videos that discuss the artist and background in ASL. At the
time of this proposal submission, the content is still under development, but by the presentation date, a
pilot study will have been conducted and the results shared.
This poster will demonstrate the Aurasma tool, allow participants to launch the LAMBERT toy and book content using this process, and share the results of the study at the Deaf schools. Video
clips from the anticipated field trip to the statue museum will be shown as well. The poster will also
generate discussion on the feasibility of integrating augmented tools in classrooms.
Aurasma. (n.d.). Retrieved September 12, 2011, from http://www.aurasma.com
Chipman, G., Druin, A., Beer, D., Fails, J., Guha, M., & Simms, S. (2006). A case study of tangible flags: A collaborative technology to enhance field trips. Paper presented at the 5th International Conference on Interaction Design and Children (IDC), Tampere, Finland.
Hitz, M., & Plattner, S. (2004). PaperLinks – linking printouts to mobile devices. Paper presented at Mlearn 2004, Rome, Italy.
Parton, B., & Hancock, R. (2008). When physical and digital worlds collide: A tool for early childhood learners. TechTrends, 52(5), 22-25.
Parton, B., Hancock, R., Crain-Dorough, M., & Oescher, J. (2009). Interactive media to support language acquisition for Deaf students. Journal on School Educational Technology, 5(1).
Parton, B., Hancock, R., & Mihir (2010). Physical world hyperlinking: Can computer-based instruction in a K-6 educational setting be easily accessed through tangible tagged objects? Journal of Interactive Learning Research, 21(1).
Parton, B., & Hancock, R. (2011). Interactive storybooks for Deaf children. Journal of Technology Integration in the Classroom, 3(1).
Porter, S. (2001). Education technology can add value to printed books. Curriculum Review, 40(9), 14-16.
Price, S. (2008). A representation approach to conceptualizing tangible learning environments. Paper presented at the 2nd International Conference on Tangible and Embedded Interaction (TEI), Bonn, Germany.
Sung, J., Levisohn, A., Song, J., Tomassetti, B., & Mazalek, A. (2007). Shadow Box: An interactive learning toy for children. Paper presented at the First IEEE International Workshop on Digital Game and Intelligent Toy Enhanced Learning (DIGITEL), Jhongli City, Taiwan.