Putting Tangible User Interfaces in Context: A Unifying
Framework for Next Generation HCI
Michael S. Horn, Orit Shaer, Audrey Girouard, Leanne M. Hirshfield
Erin Treacy Solovey, Jamie Zigelbaum*, Robert J.K. Jacob
Tufts University Computer Science
161 College Ave.
Medford, Mass. 02155 USA
{audrey.girouard, leanne.miller, erin.treacy}@tufts.edu
{mhorn01, oshaer, jacob}@cs.tufts.edu
*MIT Media Lab
Tangible Media Group
20 Ames St.
Cambridge, Mass. 02139 USA
zig@media.mit.edu
ABSTRACT
Here, we propose the notion of Reality-Based Interaction
(RBI) as a basis for understanding the role of tangible user
interfaces (TUIs) within the broader context of emerging
human-computer interaction styles. We are developing a
Reality-Based Interaction framework to understand,
compare, and relate current paths of HCI research. Viewing
tangible interaction through the lens of RBI may provide
insights for designers and may allow us to find gaps or
opportunities for future development. Furthermore, we are
using RBI to develop new evaluation techniques for
features of emerging interfaces that are currently
unquantifiable.
INTRODUCTION
Over the last two decades, we have witnessed a proliferation of
interaction techniques that have redefined our
understanding of both computers and interaction. Motivated
by both advances in computer technology and an improved
understanding of human psychology, HCI researchers have
developed a broad range of new interfaces that diverge
from the "window, icon, menu, pointing device" (WIMP) or
Direct Manipulation (DM) interaction style. TUIs, which allow users to collaboratively interact with physical objects in order to access and manipulate digital information, are an important example of these emerging interaction techniques. Other emerging post-WIMP interaction
styles include ubiquitous computing, perceptual and
affective interaction, mixed reality, augmented reality, and
virtual reality.
To date, work that attempts to explain or organize emerging
styles of interaction has focused more on individual classes
of interfaces than on ideas that unify several classes [1-5,
10, 12, 15, 16]. Our RBI framework is inspired by the work
that helped to define the GUI generation [6,13] and
considers a wider range of emerging interaction styles.
REALITY-BASED INTERACTION
The post-WIMP generation of human-computer interaction
is unified by an increased use of real world interactions
over previous generations. By “real world”, we mean the
non-digital world, including physical, social, and cultural
reality outside of any form of computer interaction. We
introduced the term Reality-Based Interaction for emerging
interaction styles that share this common feature. We have
identified two overlapping classes of reality-based
interactions: those that are embedded in the real world, and
those that mimic or are like the real world. Both types of
interactions leverage knowledge of the world that users
already possess.
Although some may see these interaction styles as disparate innovations proceeding on unrelated fronts, we propose that they share salient and important commonalities, which can help us understand, connect, and analyze them. First, they are all designed to take advantage of users' well-entrenched skills and expectations about the real world. That is, interfaces are behaving more like the real world. Second, these interaction styles are transforming interaction from a segregated activity taking place at a desk into a fluid, free-form activity that takes place in our everyday environment. That is, interaction takes place in the real world. In both cases, new interaction styles draw strength by building on users' pre-existing knowledge of the everyday, non-computer world to a much greater extent than before. We propose that these emerging interaction styles can be understood together as a new generation of HCI through the notion of Reality-Based Interaction (RBI). Viewing interaction through the lens of RBI can provide insights for designers, can uncover gaps or opportunities for future research, and may lead to the development of improved evaluation techniques.
Interactions in the Real World
With ubiquitous, mobile and sensor technology,
computation has moved out of the lab or office and into the
greater world. While portability is a major part of this shift, we believe that both the integration of devices within the physical environment and the acquisition of input from the environment contribute to it as well. As interaction takes place within the real, physical world, users can engage their full bodies in the interaction and use their physical environment to organize information (Figure 1). Indeed, research has found evidence of
distributed cognition involved with reality-based interaction
[17]. Furthermore, interaction in the real world often
involves multiple users interacting in parallel with multiple
devices.
Figure 1. As interaction moves into the real world, users are
allowed to engage their full bodies and use their environments
to organize information.
Interactions like the Real World
As technology moves into the real world, we also observe
that interactions are becoming more like the real world in
that they leverage prior knowledge and abilities that users
bring from their experiences in the real world. For example,
virtual reality interfaces gain their strength by exploiting the
user's perceptual and navigational abilities while tangible
user interfaces leverage users’ spatial skills. The idea of
transfer of knowledge, that it is easier to apply already learned skills to a new task than to learn completely new skills, is well known in the psychology literature [11]. Although the user may already know more arcane facts, such as that pressing Alt-F4 closes a window on a desktop computer, it seems intuitively better to exploit the more basic knowledge that the user acquired in childhood than to rely on this less innate knowledge. Knowledge that is deeply ingrained in the user, like navigational and spatial abilities, is more robust and more highly practiced, and should take less effort to use than knowledge learned recently.
IMPLICATIONS FOR DESIGN
We believe the trend toward more reality-based interaction is a positive one. Basing interaction on the real world can reduce the mental effort required to operate the system because the user is already skilled in those aspects of the system. For casual use, this reduction might speed learning. For use in situations involving information overload, time pressure, or stress, this reduction of overhead effort could conceivably improve performance.
However, simply making an interface as reality-based as
possible is not sufficient. A useful interface will rarely
entirely mimic the real world, but will necessarily include
some “unrealistic” or artificial features and commands. In
fact, much of the power of using computers comes from
this “multiplier” effect, the ability to go beyond a precise
imitation of the real world. We therefore propose a view that identifies one fraction of a user interface as based on realistic knowledge or abilities and another fraction that provides computer-only functionality that is not realistic.
As a design approach or metric, the goal would be to make
the first category as large as possible and use the second
only as necessary.
For example, consider the character Superman. He walks
around and behaves in many ways like a real man. He has
some additional functions for which there is no analogy in
real humans, such as flying and X-ray vision. When doing
realistic things, he uses his real-world commands: walking, moving his head, looking around. But he still needs some additional non-real-world commands for flying and X-ray vision, which allow him to perform tasks more efficiently, just as a computer provides extra power. In the design of a reality-based interface, we can go a step further and ask that these non-real-world commands be analogous to some realistic counterpart. For example, in a virtual reality interface, a system might track users' eye movements and use intense focus on an object as the command for X-ray vision [14]; in a tangible user interface, crumpling a piece of paper might correspond to deleting a file from a digital storage system.
We can thus divide the non-realistic part of the interface
into degrees of realism (x-ray by focus vs. by menu pick).
The goal for designers of new interactions should be to allow the user to perform realistic tasks realistically, to provide additional non-real-world functionality, and to use analogies for these commands whenever possible.
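As a rough illustration of this design approach, consider the following sketch, which maps reality-based input events to system commands, keeps most bindings realistic, and gives the remaining computer-only commands real-world analogues. The event names, handlers, and dispatch function are hypothetical and are not taken from any particular toolkit; the sketch only shows the structure such a mapping might have.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class InputEvent:
        kind: str    # e.g. "grasp", "crumple", "gaze_dwell"
        target: str  # the physical or virtual object involved

    def move_object(event: InputEvent) -> str:
        # Realistic command: picking up and placing a token moves the data it represents.
        return f"moved {event.target}"

    def delete_file(event: InputEvent) -> str:
        # Non-real-world command reached through a realistic analogy:
        # crumpling a paper token deletes the file it represents.
        return f"deleted {event.target}"

    def xray_view(event: InputEvent) -> str:
        # Non-real-world command reached through a realistic analogy:
        # sustained gaze on an object reveals what is inside it.
        return f"revealed interior of {event.target}"

    # Design goal: keep most bindings in the realistic category, and give the
    # remaining computer-only commands real-world analogues rather than arbitrary syntax.
    BINDINGS: Dict[str, Callable[[InputEvent], str]] = {
        "grasp": move_object,
        "crumple": delete_file,
        "gaze_dwell": xray_view,
    }

    def dispatch(event: InputEvent) -> str:
        handler = BINDINGS.get(event.kind)
        return handler(event) if handler else f"ignored {event.kind}"

    print(dispatch(InputEvent("crumple", "draft_report")))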
There is a tradeoff between power and reality. Here we
refer to “power” as a generalization of functionality and
efficiency. The goal is to give up reality only explicitly and
only in return for increasing power. Consider an interface
that is mapped to point A in Figure 2. If the interface is
redesigned and moves to the upper left quadrant, its power
would increase, but its reality would decrease, as often
occurs in practice. According to RBI this is not necessarily
bad, but it is a tradeoff that must be made thoughtfully and
explicitly. The opposite tradeoff (more reality, less power)
is made if the interface moves to the lower right quadrant.
However, if the interface is redesigned and moves
anywhere in the grey area, RBI theory claims that this
interface would be worse, since both power and reality have
been decreased. Conversely, moving anywhere in the top right quadrant is desirable, as it would make the interface better on both counts.
Figure 2. Power vs. Reality Tradeoff: each data point represents a hypothetical interface. Consider the point marked A. The dashed horizontal line represents interfaces with equivalent power. The dashed vertical line represents interfaces with equivalent levels of reality. RBI suggests that adding reality to these interfaces without loss of power will make them better, and that giving up reality to gain power should be done thoughtfully and explicitly.
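To make the quadrant reasoning concrete, the sketch below treats an interface design as a point with a reality score and a power score and classifies a redesign relative to the original point A. The numeric scores and the scoring scale are hypothetical; the framework does not prescribe how reality or power should be quantified.

    from dataclasses import dataclass

    @dataclass
    class InterfaceDesign:
        name: str
        reality: float  # extent to which the design builds on real-world knowledge and skills
        power: float    # functionality and efficiency beyond what the real world offers

    def classify_redesign(before: InterfaceDesign, after: InterfaceDesign) -> str:
        # Compare a redesign against the original design (point A in Figure 2).
        d_reality = after.reality - before.reality
        d_power = after.power - before.power
        if d_reality >= 0 and d_power >= 0:
            return "upper right quadrant: no worse on either count, a desirable move"
        if d_reality <= 0 and d_power <= 0:
            return "grey area: both reality and power decrease, a worse design"
        if d_power > 0:
            return "upper left quadrant: power gained at the cost of reality"
        return "lower right quadrant: reality gained at the cost of power"

    a = InterfaceDesign("A", reality=0.5, power=0.5)
    print(classify_redesign(a, InterfaceDesign("A-prime", reality=0.3, power=0.8)))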
FUTURE WORK
To perform experimental evaluations of the RBI framework, we are developing interfaces designed in different interaction styles and intended to differ primarily in their level of reality. We will conduct a study to determine the effects of each interaction style on users' time, accuracy, and attitudes while completing a given task. This can provide some quantitative measure of the effect of reality on the interaction.
Another important consideration for the new generation of HCI is how new interfaces themselves should be evaluated. At the CHI workshop "What is the next generation of Human-Computer Interaction?" [8, 9] we brought together researchers from a range of emerging areas in HCI. A prevalent concern among the participants was that evaluation techniques for direct manipulation interfaces may be insufficient for the newly emerging generation. Many new interfaces claim to be "intuitive," which is often difficult to quantify, but listing and measuring the extent to which they use pieces of knowledge and skills that the user has acquired from the real world may help.
Furthermore, in addition to commonly used user interface measurements (e.g. speed and accuracy), other measurements such as workload, engagement, frustration, and fatigue may also be valuable for RBI. However, these factors are generally assessed only subjectively; more quantitative tools are needed. We use a relatively new, non-invasive, lightweight brain imaging tool called functional near-infrared spectroscopy (fNIRS) to objectively measure workload and emotional state while users complete a given task. This tool has been shown to quantitatively
measure attention, working memory, target categorization,
and problem solving [7]. We hypothesize that an objective
measure of cognitive workload may prove useful for
evaluating the intuitiveness of an interface. We further
conjecture that reality-based interfaces will be associated
with lower objective user frustration and workload than non-reality-based systems.
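As a sketch of the kind of comparison we have in mind, the code below contrasts per-participant workload scores for a reality-based interface against those for a conventional WIMP interface. It assumes the scores have already been derived from the fNIRS signal; that signal-processing step, the function name, and the variable names are all hypothetical.

    from statistics import mean
    from typing import Sequence
    from scipy import stats  # provides Welch's t-test for two independent samples

    def compare_workload(rbi_scores: Sequence[float], wimp_scores: Sequence[float]) -> None:
        # Each list holds one workload score per participant, derived elsewhere from fNIRS data.
        res = stats.ttest_ind(rbi_scores, wimp_scores, equal_var=False)
        print(f"mean workload, reality-based interface: {mean(rbi_scores):.2f}")
        print(f"mean workload, WIMP interface:          {mean(wimp_scores):.2f}")
        print(f"Welch's t-test: t = {res.statistic:.2f}, p = {res.pvalue:.4f}")

    # Example call with made-up scores, for illustration only:
    # compare_workload([2.1, 1.8, 2.4, 1.9], [2.9, 3.1, 2.6, 3.4])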
CONCLUSION
We seek to advance the area of emerging interaction styles by providing a unifying framework that can be used to understand, compare, and relate them. Viewing tangible user interfaces through the lens of reality-based interaction allows us to focus on creating designs that
leverage users’ pre-existing skills and knowledge.
REFERENCES
1. Beaudouin-Lafon, M. Instrumental Interaction: An
Interaction Model for Designing Post-WIMP User
Interfaces. Proc. ACM CHI 2000 Human Factors in
Computing Systems, Addison-Wesley/ACM Press, 2000,
446-453.
2. Dourish, P. Where The Action Is: The Foundations of Embodied Interaction, MIT Press, Cambridge, Mass., 2001.
3. Fishkin, K.P. A Taxonomy for and Analysis of Tangible Interfaces, Seattle, Wash., 2003.
4. Fishkin, K.P., Moran, T.P. and Harrison, B.L. Embodied
User Interfaces: Toward Invisible User Interfaces. Proc.
of EHCI'98 European Human Computer Interaction
Conference, Heraklion, Crete, 1998.
5. Hornecker, E. and Buur, J., Getting a Grip on Tangible
Interaction: A Framework on Physical Space and Social
Interaction. Proc. of CHI 2006, (Montreal, Canada,
2006), ACM, 437-446.
6. Hutchins, E.L., Hollan, J.D. and Norman, D.A. Direct Manipulation Interfaces. In Norman, D.A. and Draper, S.W. eds. User Centered System Design: New Perspectives on Human-Computer Interaction, Lawrence Erlbaum, Hillsdale, N.J., 1986, 87-124.
7. Izzetoglu, M., Izzetoglu, K., Bunce, S., Onaral, B. and Pourrezaei, K. Functional Near-Infrared Neuroimaging. IEEE Trans. on Neural Systems and Rehabilitation Engineering, 13 (2), 153-159.
8. Jacob, R.J.K. What is the Next Generation of Human-Computer Interaction? Technical Report 2006-3, Department of Computer Science, Tufts University. http://www.cs.tufts.edu/tr/techreps/TR-2006-3.
9. Klemmer, S.R., Hartmann, B. and Takayama, L. How
bodies matter: five themes for interaction design. Proc.
of the 6th ACM conference on Designing Interactive
Systems, ACM Press, University Park, PA, USA, 2006,
140-149.
10. Nielsen, J. Noncommand User Interfaces. Comm. ACM,
1993, 83-99.
11. Reed, S.K. Transfer on trial: Intelligence, Cognition and
Instruction. in Singley, K. and Anderson, J.R. eds. The
Transfer of Cognitive Skill, Harvard University Press,
Cambridge, MA, 1989, 39.
12. Rohrer, T. Metaphors We Compute By: Bringing Magic
into Interface Design, Center for the Cognitive Science
of Metaphor, Philosophy Department, University of
Oregon, 1995.
13. Shneiderman, B. Direct Manipulation: A Step Beyond Programming Languages. IEEE Computer, 16 (8), 1983, 57-69.
14. Tanriverdi, V. and Jacob, R.J.K. Interacting with Eye
Movements in Virtual Environments. Proc. ACM CHI
2000 Human Factors in Computing Systems
Conference, Addison-Wesley/ACM Press, 2000, 265-272.
15. Ullmer, B. and Ishii, H. Emerging Frameworks for
Tangible User Interfaces. In Carroll, J.M. ed. Human-Computer Interaction in the New Millennium, Addison-Wesley/ACM Press, Reading, Mass., 2001.
16. Weiser, M. The Computer for the Twenty-first Century. Scientific American, 1991, 94-104.
17. Zigelbaum, J., Horn, M., Shaer, O., and Jacob, R.J.K.,
The Tangible Video Editor: Collaborative Video Editing
with Active Tokens. Proc. TEI'07 First International
Conference on Tangible and Embedded Interaction,
(2007).