IRIS: Augmented Reality in Rapid Prototyping
D ZAMPELIS, G LOUDON, S GILL and D WALKER
Cardiff Metropolitan University, Cardiff, UK
dizampelis@cardiffmet.ac.uk, gloudon@cardiffmet.ac.uk, sjgill@cardiffmet.ac.uk,
dwalker@uwic.ac.uk
ABSTRACT
This paper tests a new method for producing rapid, tangible prototypes for information
appliances using a system called IRIS. The system allows designers to effectively integrate
both the Graphical User Interface (GUI) and other input and output functionality into early
stage prototypes. IRIS combines augmented reality techniques with a new viewing solution so
that early stage prototypes appear to have screens embedded within the device without the
complexity of embedding an actual screen. The initial study tested the performance of a real
BT Equinox phone against the performance of a BT Equinox prototype that used the IRIS 1.0
system. The study highlighted certain challenges which were addressed and tested in a second
study measuring performance against the same indicators using the improved system. IRIS
2.0 outperformed IRIS 1.0 and almost matched the performance indicators of the BT Equinox
prototype. However, further opportunities were identified to improve the user-experience.
KEYWORDS: Augmented Reality, Rapid Prototyping, Physicality, Information Appliances
1. INTRODUCTION
1.1 Rapid prototyping
Using rapid prototyping solutions, product designers can generate physical objects in
prototype form with sufficient accuracy. Such techniques enable designers to test, evaluate
and verify the physicality and functionality of a product at an early stage of production and
manage risk. Through testing prototypes with user focus groups, inefficiencies can be
identified and rectified prior to bringing the product to market. Prototyping can form a critical
stage in the development of a product, as each prototype usually marks a completion of a
particular development phase and therefore has a significant impact on the overall
development schedule [1]. Prototyping techniques like Wizard of Oz simulation [2],
Experience Prototyping [3], Phidgets [4], the Buck Method [5], Switcheroos [6], iStuff [7], Paper
Prototyping [8], the Calder Toolkit [9], DTools [10] and Exemplar [11] have successfully
improved product development by verifying product designs at an early stage.
However, there can be problems associated with computer-embedded products. Most
significantly, the screen can be separate from the prototype [12]. In industry a laptop is often
used and the interface is tested as software only, from the laptop screen. In parallel, a block
model, i.e. a physical model that offers nothing but key input, allows users to interact with
the software interface on the computer screen. Gill et al. [12] demonstrated that this method
introduced major delays and usability problems. The rationale for this study is based on the
premise that there is no user-friendly procedure for integrating the software into the hardware
at an early stage of the design process. Studies from Culverhouse et al. and Woolley et al. [13]
have demonstrated that it is crucial for companies to be able to easily alter the components of
a prototype (especially the screen) during the rapid prototyping procedure. Furthermore,
embedding real screens of varying sizes into products at the prototyping stage is expensive and
time consuming. Another major drawback is that prototypes created with rapid prototyping
techniques do not fully meet the requirements of functional prototypes, as neither the serial
material nor the serial production processes are used [14]. In other words, accommodating
prototyping-stage electronics such as screens and buttons requires internal modifications to the
device that differ from those needed in the final product.
This paper examines a rapid prototyping technique that employs augmented reality to
eliminate the need for internal modification for data output purposes by providing a virtual
interactive screen layered on the prototype device.
1.2 Augmented reality in information appliance prototyping
The use of augmented reality as a prototyping tool is gaining momentum, as it provides a
means of blending a prototype model in an early stage of implementation with virtual
functionalities, creating an integrated prototype. A prominent augmented reality technique for
rapid prototyping is Spatial Augmented Reality (SAR), in which a virtual element is projected
onto a real object using a projector. An early example of SAR is the CAVE [15]. Studies of
SAR as a rapid prototyping technique by Itzstein [16] and Verlinden [17] have shown that
although prototyping is possible with SAR, significant shortcomings remain for large-scale
application as a commercial tool.
Crucially, several key problems render these methods suboptimal for commercial prototype
testing. These limitations include the complexity of the setup and the extra assistive devices
it may require (e.g. head-tracking devices), constraints on the model's form and features
(such as the location of the buttons on the prototype device) and limited support for tracking
layout modifications [17].
2. THE IRIS RAPID PROTOTYPING SYSTEM
Our approach makes use of a display that appears transparent by showing screen-based video
of the scene behind it, providing a window to the world [18] solution. There is no need for
head tracking as the screen is placed at a fixed position and angle. Figure 1 demonstrates the
concept: when a prototype device carrying a marker is placed behind the screen, an interactive
interface appears on the device. This research seeks to advance the debate on the use of
augmented reality for rapid prototyping by addressing the technological shortcomings from
which these devices and approaches have previously suffered, and so explore the application
of augmented reality rapid prototyping (ARRP) for commercial purposes.
Figure 1: Left: a prototype device with a marker. Right: the device and marker behind the screen, demonstrating the
interactive interface.
2.1 IRIS system implementation
IRIS is based on a modified version of ARToolKit, an open source augmented reality
framework written in C++ that uses visual markers in conjunction with USB cameras to place
virtual elements on real devices [19]. ARToolKit was responsible for the augmented reality
geometry calculations and FantastiqUI for the rendering of, and interaction with, a Flash file
on top of the AR marker. As depicted in Figure 2, the system was combined with the
FantastiqUI framework, a C++ implementation providing low-level access to Flash files in an
OpenGL environment. The integration of the system with a Flash interface allows designers to
implement their software interface design themselves (as Flash is a commonly used GUI design
tool in industry) rather than needing software engineers to implement the design in a lower
level computer language.
2.1.1 System Core
Figure 2: The IRIS core
Each time ARToolKit detects a marker in a video frame, it superimposes the Flash interface.
The calculations for the interface placement are made by the ARToolKit core. The interaction
and rendering process is handled by the FantastiqUI plugin. FantastiqUI also accesses methods
inside the Flash file and is able to interact with the elements displayed in the Flash movie.
The system also supports sound output from the Flash movie.
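For illustration, a minimal sketch of this per-frame behaviour is given below. The ARToolKit calls (arVideoGetImage, arDetectMarker, arGetTransMat) follow the standard ARToolKit C API; renderFlashOverlay() is a hypothetical stand-in for the FantastiqUI rendering call, and the marker size and detection threshold are assumed values rather than those used in IRIS.

#include <AR/ar.h>
#include <AR/video.h>
#include <cstdio>

// Hypothetical stand-in for the FantastiqUI call that draws the Flash GUI at the
// marker's 3x4 pose; here it only logs the marker's translation.
static void renderFlashOverlay(const double t[3][4])
{
    std::printf("marker at (%.1f, %.1f, %.1f)\n", t[0][3], t[1][3], t[2][3]);
}

static int    patt_id;                      // marker pattern loaded at start-up via arLoadPatt()
static double patt_width     = 80.0;        // marker size in mm (assumed value)
static double patt_center[2] = {0.0, 0.0};

void processFrame(int threshold)
{
    ARUint8      *frame = arVideoGetImage();          // grab the latest camera frame
    ARMarkerInfo *markers;
    int           marker_num;

    if (frame == NULL) return;

    // Detect all candidate markers in the frame.
    if (arDetectMarker(frame, threshold, &markers, &marker_num) < 0) return;

    // Keep the highest-confidence match for the prototype's marker.
    int best = -1;
    for (int i = 0; i < marker_num; i++) {
        if (markers[i].id == patt_id && (best < 0 || markers[i].cf > markers[best].cf))
            best = i;
    }
    if (best < 0) return;                              // marker hidden: draw nothing this frame

    // Compute the marker's pose and hand it to the Flash rendering layer, so the
    // GUI is drawn at the marker's position, angle and size.
    double trans[3][4];
    arGetTransMat(&markers[best], patt_center, patt_width, trans);
    renderFlashOverlay(trans);
}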
2.1.2 Prototype - System interaction layer
Figure 3: The IRIS system interaction layer
As the marker is a physical element and the interface layered over it is a virtual one, there is a
need to combine the two. This is handled by the pose calculations performed in the combined
ARToolKit-FantastiqUI core. The core implementation of the system is able to interact with
the Flash movie through a constant analysis of the shape and angle of the marker.
An instance of the interface is layered on top of the marker at the same angle and size as the
marker itself. The software part of the system interacts with the marker representation and
passes the data to the system core. It is able to emulate accelerometer functionality by
detecting changes in the movement of the markers. The general keyboard/rendering I/O
handling and the accelerometer emulation are handled by the system's Prototype
Communication Interface (PCI). As illustrated in Figure 3, the combination of the PCI with
the IRIS core forms the system interaction layer.
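A minimal sketch of this accelerometer emulation is given below: the marker's position is differenced twice between frames to approximate an acceleration vector. The structure and names (MarkerPose, AccelEmulator, onEmulatedAccel) are illustrative assumptions and are not part of the actual PCI implementation.

#include <cstdio>

struct MarkerPose { double x, y, z; };   // marker translation, e.g. taken from arGetTransMat()

class AccelEmulator {
public:
    // Called once per frame with the marker's current position and the frame
    // interval dt (seconds); emits an approximate acceleration vector.
    void update(const MarkerPose &p, double dt)
    {
        if (dt <= 0.0) { prev = p; return; }
        MarkerPose v = { (p.x - prev.x) / dt,          // finite-difference velocity
                         (p.y - prev.y) / dt,
                         (p.z - prev.z) / dt };
        MarkerPose a = { (v.x - prevVel.x) / dt,       // finite-difference acceleration
                         (v.y - prevVel.y) / dt,
                         (v.z - prevVel.z) / dt };
        onEmulatedAccel(a);
        prev    = p;
        prevVel = v;
    }

private:
    MarkerPose prev{0, 0, 0}, prevVel{0, 0, 0};

    // Hypothetical hook: in IRIS this would feed the system core; here it just logs.
    void onEmulatedAccel(const MarkerPose &a)
    {
        std::printf("emulated accel: %.2f %.2f %.2f\n", a.x, a.y, a.z);
    }
};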
2.1.3 Human - Prototype interaction layer
Figure 4: The IRIS prototype interaction layer
The human-prototype interaction layer is responsible for the high level communication
between the user and the prototype. Figure 4 shows the parts of which the system consists: the
hardware elements of the system interact directly with the user, accepting input and providing
output functionality. The accelerometer emulation of the system interaction layer receives
data from the camera connected to the computer and constantly tracks the marker on the
prototype. During the study, a regular web camera was used. For the I/O handling of the
system, an IE3 Unit was used to translate key presses on the physical model into ASCII
codes. An output device connected to the system displayed the layered interface on top of the
marker.
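The sketch below illustrates this I/O path: each key press on the physical model arrives as an ASCII character from the IE3 Unit and is mapped to an event for the Flash interface. The particular key assignments and the sendToFlash() function are assumptions for illustration; they are not the actual IE3 Unit mapping or the FantastiqUI call.

#include <cstdio>
#include <string>

// Hypothetical stand-in for the FantastiqUI call that invokes a method inside
// the Flash movie; here it simply logs the event name.
static void sendToFlash(const std::string &event)
{
    std::printf("flash event: %s\n", event.c_str());
}

// Map an ASCII code from the IE3 Unit to a GUI event (assumed key assignments).
void handleKeyPress(char ascii)
{
    if (ascii >= '0' && ascii <= '9') {
        sendToFlash(std::string("digit_") + ascii);    // numeric keypad
    } else if (ascii == 'p') {
        sendToFlash("power");                          // power (On/Off) button
    } else if (ascii == 'g') {
        sendToFlash("call");                           // green (call) key
    } else if (ascii == 'r') {
        sendToFlash("end");                            // red (end) key
    }
    // Unmapped keys are ignored.
}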
3. EMPIRICAL TESTING
In a previous study, Gill et al. [20] conducted a series of tests comparing the performance of a
real BT Equinox phone, an Equinox / IE Unit prototype and a screen based prototype using a
methodology developed by Molich and Dumas [21].
The main tasks chosen for our experiments were linked with common functions (turning the
phone on and off) and more complicated ones (calling a number, changing the background
wallpaper). The sequence in which the users were asked to accomplish these tasks was based
on task difficulty, allowing the users to gradually become accustomed to the functions of the
system. Each task was linked to a number of key performance indicators.
16 members of administrative staff from Cardiff Metropolitan University took part in each
study (a total of 32). They ranged in age from 22 to 55 years. Experience of mobile phone
interfaces was broadly similar to that described by Gill et al. [20] in their experiments.
The specific tasks were intended to identify the effectiveness of the IRIS implementation on
tasks that users are accustomed to conducting in their everyday use of mobile phones. The
results were compared with the previous implementation of the system, identifying usability
improvements. The same tasks were used in previous studies by Gill et al. to evaluate the
effectiveness of a prototyping tool called the IE Unit [13]. Conducting the same tasks facilitated
the comparison and evaluation of the usability effectiveness of the IE implementation with IRIS.
3.1 Procedure description
In both studies, each participant was provided with a questionnaire and instructions. The
questionnaire data, such as the user's experience with mobile phones, age, etc., formed the
qualitative part of the analysis. The users were provided with a basic description of the
interface for the study and were permitted to ask questions to familiarize themselves with it.
In each study, the users were asked to accomplish four tasks using four different devices.
Devices:
• Equinox: The real BT Equinox phone
• IE Unit: The prototype phone using the IE Unit and the GUI displayed on a separate PC
monitor
• Software: The screen based prototype
• 1st study: IRIS 1.0: The physical model using the augmented reality technology of the IRIS
system
• 2nd study: IRIS 2.0: The physical model using the upgraded version of the IRIS system.
Tasks:
• Turn the phone on
• Call a given number
• Change the background photo
• Turn the phone off
After experimentation, an optimal distance was found between the system camera and the
hand, such that the individual's hand appeared the same size on-screen as it would if the user
were looking at it in real life. The performance of participants was converted to a four-point
scale per task: 0 = success, 1 = minor, 2 = serious, 3 = catastrophe.
3.2 Study 1
The first study used the IRIS 1.0 system and participants were asked to hold the prototype
device in front of a white background, behind the screen. The system used a regular web
camera to track the device. Analysis of performance outcome and performance time used a 4
(device type) x 4 (phone task) mixed analysis of variance (ANOVA).
Figure 5 shows the mean time taken to complete each of the four phone tasks as a function of
device type. There was a significant main effect of device, F (3, 91) = 28.86, p < 0.001, a
significant main effect of task, F (3, 273) = 113.11 and an interaction between device and
task, F (9, 273) = 6.07, p < 0.001. To explore the main effect of device, a series of pairwise
post hoc tests (REGWQ) were performed. These showed that there were reliable (p < 0.05)
differences between Software and both the IE Unit and Equinox, and also between IRIS 1.0
and both the IE Unit and Equinox. None of the other pairwise comparisons were significant (p > 0.05).
Figure 6 shows the success outcome (rating) in completing each of the four phone tasks as a
function of device type. There was a significant main effect of device, F (3, 91) = 12.07, p <
0.001, a significant main effect of task, F (3, 273) = 34.29 and an interaction between device
and task, F (9, 273) = 4.75, p < 0.001. Again, post hoc tests (REGWQ) revealed that there
were reliable (p < 0.05) differences between Software and both the IE Unit and Equinox, and
also between IRIS 1.0 and both the IE Unit and Equinox. However, no reliable difference
(p > 0.05) was found between the screen based prototype (Software) and the prototype using
the IRIS system. None of the other pairwise comparisons were significant (p > 0.05).
Figure 5: Mean time taken to complete each of the four
phone tasks as a function of device type.
Figure 6: Success outcome (rating) in completing each of
the four phone tasks as a function of device type.
The analyses demonstrate that, on both the time taken to complete a task and how successfully
it was performed, the IE Unit was more similar to the real phone than the IRIS 1.0 based
prototype or the software simulation. They also show that people were more successful using
the IE Unit than IRIS 1.0. Judging from the comments of users when they first held the device
in their hands (that they felt a little uncomfortable holding it behind the screen), we can
conclude that although there is a learning curve, users are able to achieve optimum
performance fairly quickly.
3.3 Study 2
The main challenges addressed in the IRIS 2.0 system were the low resolution of the image
displayed on screen and the users' lack of connection to the on-screen prototype, caused by
depth-perception problems introduced by the monitor on which the prototype was presented.
Modifying the system's core layer achieved a smooth frame rate averaging 25 fps at a screen
resolution of 800x600, in contrast to 15 fps at 640x480 in the previous study. Higher quality
camera equipment was also used. We also placed a blurry abstract background behind the
hands of the users to reduce the aforementioned depth-perception problems by creating an
illusion of depth between prototype and background.
Analysis of performance outcome and performance time used a 5 (device type) x 4 (phone
task) mixed analysis of variance (ANOVA).
Figure 7 illustrates the mean time taken to complete each of the four phone tasks as a function
of device type. There was a significant main effect of device, F (4, 106) = 23.6, p < 0.001, a
significant main effect of task, F (3, 318) = 159.75 and an interaction between device and
task, F (12, 318) = 7.31, p < 0.001. To explore the main effect of device, a series of pairwise
post hoc tests (REGWQ) were performed. These showed that there were reliable (p < 0.05)
differences between Software/IRIS 1.0 and IE Unit/Equinox/IRIS 2.0, and also a reliable
(p < 0.05) difference between IRIS 2.0 and all the other devices. None of the other pairwise
comparisons were significant (p > 0.05).
Figure 7: Mean time taken to complete each of the
four phone tasks as a function of device type
Figure 8: Success outcome (rating) with four phone
tasks as a function of device type.
Figure 8 shows the success outcome (rating) in completing each of the four phone tasks as a
function of device type. There was a significant main effect of device, F (4, 106) = 10.24, p <
0.001, a significant main effect of task, F (3, 318) = 32.48 and an interaction between device
and task, F (12, 318) = 5.81, p < 0.001. To explore the main effect of device, a series of
pairwise post hoc tests (REGWQ) were performed. These showed that there were reliable
(p < 0.05) differences between Software and IE Unit/Equinox/IRIS 2.0, and also between
IRIS 1.0 and IE Unit/Equinox/IRIS 2.0. None of the other pairwise comparisons were
significant (p > 0.05).
From the results it is evident that IRIS 2.0 outperformed IRIS 1.0 on the first, second and
fourth tasks. The reason it did not outperform IRIS 1.0 on the third task is linked to changes
in user interface trends and is discussed below. It should be noted that even though in the
first study the performance of the system was lower than that of the IE Unit and Equinox,
the improved version in the second study placed them at similar usability levels. Furthermore,
IRIS 2.0 performed remarkably well on the first task, with both performance time and rating
being high; it was outperformed only by the real Equinox device on the mean rating.
4. OBSERVATION
4.1 Task 1
In this task users were asked to turn the phone on. There was no guidance on where the button
was located, and thus by observing the time needed for the users to find the power button, an
evaluation of the effectiveness of the new high resolution camera was conducted. In the first
study the limited camera resolution prevented the users from identifying the button, meaning
they spent significant time trying to switch on the device by pressing buttons on the keypad.
With the use of a high definition camera in the second study, the clarity of the image displayed
on the screen was greatly improved, and users were able to spot the On/Off button almost
instantly, as the improved contrast and resolution made the shape stand out from the
background.
4.2 Task 2
In this task users were asked to call a specific telephone number. When the users started the
task, the phone was displaying the main UI screen. The users needed at that point to start
typing the number, and when they finished they were asked to press the green button to
perform the call. In our first study, the majority of participants managed to enter the number
correctly. With the introduction of better camera resolution in the second study, participants
were able to distinguish the numbers on the keypad more easily. This was reflected in the
better timings recorded during this task. The number of mistakes the users made while typing
was also minimal: the 16 participants in the second study made a total of 4 mistakes, compared
with 9 mistakes in the first study.
4.3 Task 3
In the third task, users were asked to change the background photo of the phone. The results
of this task are quite interesting and highlight UI usability problems beyond the scope of our
study, which is mainly concerned with the IRIS system. The users were asked to navigate
through the phone's menu until they found the option corresponding to "background
customization" and then change the photo. Even though the users were able to quickly
navigate through the menu, the menu icons confused the majority of the users (this was
outside the control of the research), forcing a large number of the participants to abandon the
task prior to completion.
The main problem was that the customization section of the phone was represented by a music
note icon. The device the prototype represented was quite old, dating from a time when mobile
phones were not intended to be used as music players. People are becoming more familiar with
multimedia devices and with the idea that a music icon represents a music player rather than a
customization section for wallpapers and ringtones.
Evaluating the results from the first study shows that users did better at finding the background
customization section during the same task; in the second study almost half of the users
completely failed to find it. This reinforces the conjecture that people are more familiar with a
different representation of these functions in newer technology.
4.4 Task 4
In the fourth task the users were asked to turn the phone off by pressing the same button they
used to turn the phone on during the first task. During the first study people managed to
quickly turn the phone off. In the second study it was expected that the timings would
improve as the users had better visibility of the power button. This supposition was correct as
the timings were indeed better. The users were able to almost instantly turn the phone off.
5. DISCUSSION
One problem with the IRIS system was the covering of the augmented reality sticker (bar
code) by the user's hand when switching the device on and off, which resulted in the GUI not
being transposed onto the physical model. An alternative to placing the interface directly over
the sticker would be to put one or two smaller stickers elsewhere on the device (preferably on
the top) and draw the interface at the location of the display, positioned proportionately
relative to the placement of those stickers. This is currently being implemented, together with
chroma key technology to project the hand on top of the screen, and will be tested in the future.
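As an illustration of this proportional-placement idea, the sketch below derives the display's pose from a marker mounted elsewhere on the device by applying a fixed offset expressed in the marker's coordinate frame. The offset value and function name are assumptions for illustration, not the implementation under development.

// Given the marker's 3x4 pose matrix [R | t] (e.g. from arGetTransMat), compute
// the pose at which to draw the interface: same orientation, translated by a
// fixed offset from the marker to the centre of the phone's display.
void markerToDisplayPose(const double marker[3][4], double display[3][4])
{
    // Offset from the top-mounted sticker to the display centre, expressed in
    // marker coordinates (millimetres; assumed value).
    const double offset[3] = { 0.0, -45.0, 0.0 };

    for (int r = 0; r < 3; r++) {
        for (int c = 0; c < 3; c++)
            display[r][c] = marker[r][c];              // copy the rotation
        // New translation: t + R * offset.
        display[r][3] = marker[r][3]
                      + marker[r][0] * offset[0]
                      + marker[r][1] * offset[1]
                      + marker[r][2] * offset[2];
    }
}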
6. CONCLUSION
In summary, the feedback from participants and the in-house design team indicates that, with
some further modifications, IRIS 3.0 could provide an optimum prototyping experience and
form part of a commercial toolkit.
7. REFERENCES
[1] Chua, C.K., Leong, K.F. and Lim, C.S. (2010) Rapid Prototyping: Principles and Applications. World Scientific.
[2] Maulsby, D., Greenberg, S. and Mander, R. (1993) Prototyping an intelligent agent through Wizard of Oz, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Amsterdam, The Netherlands.
[3] Buchenau, M. and Suri, J.F. (2000) Experience Prototyping, in Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, pp. 424-433, New York City, New York, USA.
[4] Greenberg, S. and Fitchett, C. (2001) Phidgets: easy development of physical interfaces through physical widgets, in Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology, Orlando, Florida, USA, pp. 209-218.
[5] Pering, C. (2002) Interaction design prototyping of communicator devices: towards meeting the hardware-software challenge. Interactions 9(6), pp. 36-46.
[6] Avrahami, D. and Hudson, S. (2002) Forming interactivity: a tool for rapid prototyping of physical interactive products, in Proceedings of DIS '02, pp. 141-146.
[7] Ballagas, R., Ringel, M., Stone, M. and Borchers, J. (2003) iStuff: a physical user interface toolkit for ubiquitous computing environments, in Proceedings of CHI 2003, pp. 537-544.
[8] Snyder, C. (2003) Paper Prototyping. Morgan Kaufmann, San Francisco, CA.
[9] Lee, J.C., Avrahami, D., Hudson, S.E., Forlizzi, J., Dietz, P.H. and Leigh, D. (2004) The Calder Toolkit: wired and wireless components for rapidly prototyping interactive devices, in Proceedings of the Designing Interactive Systems Conference, Cambridge, Massachusetts.
[10] Hartmann, B., Klemmer, S.R., Bernstein, M., Abdulla, L., Burr, B., Robinson-Mosher, A. and Gee, J. (2006) Reflective physical prototyping through integrated design, test, and analysis, in Proceedings of UIST 2006.
[11] Hartmann, B., Abdulla, L., Klemmer, S. and Mittal, M. (2007) Authoring sensor-based interactions by demonstration with direct manipulation and pattern recognition, in Proceedings of the ACM CHI 2007 Conference on Human Factors in Computing Systems.
[12] Gill, S. (2003) Developing information appliance design tools for designers. Personal and Ubiquitous Computing, Volume 7, Springer Verlag, London.
[13] Gill, S., Loudon, G., Dix, A., Woolley, A., Devina, R.E. and Hare, J. (2008) Rapid development of tangible interactive appliances: achieving the fidelity/time balance.
[14] Breitinger, F. Rapid tooling: a new approach to integrated product and process development, in Proceedings Vol. 3102, Rapid Prototyping and Flexible Manufacturing, Rolf-Juergen Ahlers and Gunther Reinhart (Eds.), pp. 43-54.
[15] Cruz-Neira, C., Sandin, D. and DeFanti, T. (1993) Surround-screen projection-based virtual reality: the design and implementation of the CAVE, in Proceedings of SIGGRAPH 93, Computer Graphics Proceedings, Annual Conference Series, James T. Kajiya (Ed.), pp. 135-142. New York: ACM Press.
[16] Itzstein, S.V., Thomas, B.H., Smith, R.T. and Walker, S. (2011) Using spatial augmented reality for appliance design, pp. 665-667.
[17] Verlinden, J., Horváth, I. and Edelenbos, E. (2006) Treatise of technologies for interactive augmented prototyping, in I. Horváth and P. Xirouchakis (Eds.), Proceedings of Tools and Methods of Competitive Engineering, pp. 1-14.
[18] Feiner, S. (1993) Windows on the World: 2D windows for 3D augmented reality, pp. 145-155.
[19] Chua, C.K., Leong, K.F. and Lim, C.S. (2010) Rapid Prototyping: Principles and Applications. World Scientific.
[20] Gill, S., Loudon, G. and Walker, D. (2008) Designing a design tool: working with industry to create an information appliance design methodology. Journal of Design Research, Vol. 7, No. 2, pp. 97-119.
[21] Molich, R. and Dumas, J.S. (2006) Comparative usability evaluation (CUE-4). Behaviour & Information Technology, pp. 1-19, December 2006, Taylor & Francis.