Workshop Summary
HRI ’20 Companion, March 23–26, 2020, Cambridge, United Kingdom
Half Day Workshop on Mental Models of Robots
Matthew Rueben
University of Southern California
mrueben@usc.edu
Stefanos Nikolaidis
University of Southern California
nikolaid@usc.edu
Elizabeth Phillips
United States Air Force Academy
elizabeth.phillips@usafa.edu
Lionel Robert
University of Michigan
lprobert@umich.edu
Minae Kwon
Stanford University
mnkwon@stanford.edu
Maartje de Graaf
Utrecht University
m.m.a.degraaf@uu.nl
David Sirkin
Stanford University
sirkin@stanford.edu
Sam Thellman
Linköping University
sam.thellman@liu.se
ABSTRACT
Robotic systems are becoming increasingly complex, hindering people from understanding the robot's inner workings [24]. Simply providing the robot's source code may be useful for software and hardware engineers who need to test the system for traceability and verification [3], but not for the non-technical user. Moreover, looks can be deceiving: robots that merely resemble humans or animals are perceived differently by users [25]. This workshop aims to provide a forum for researchers from both industry and academia to discuss the user's understanding, or mental model, of a robot: what the robot is, what it does, and how it works. In many cases it will be useful for robots to estimate each user's mental model and use this information when deciding how to behave during an interaction. Designing more transparent robot actions will also be important, giving users a window into what the robot is "thinking", "feeling", and "intending". We envision a future in which robots can automatically detect and correct inaccurate mental models held by users. This workshop will develop a multidisciplinary vision for the next few years of research in pursuit of that future.

ACM Reference Format:
Matthew Rueben, Stefanos Nikolaidis, Maartje de Graaf, Elizabeth Phillips, Lionel Robert, David Sirkin, Minae Kwon, and Sam Thellman. 2020. Half Day Workshop on Mental Models of Robots. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20 Companion), March 23–26, 2020, Cambridge, United Kingdom. ACM, New York, NY, USA, 2 pages. https://doi.org/10.1145/3371382.3374856

Background
Mental models refer to an individual's internal representation of a system, such as a robot [2, 4]. Specifically, they are knowledge frameworks used to describe, explain, and predict system purpose, function, form, and state [19]. Mental models provide utility because they allow individuals to more efficiently store and access information in order to perform tasks, understand phenomena, and make decisions. These models are similar in structure to, though often simpler than, the thing or concept that they represent; they facilitate predicting, explaining, and understanding each interaction. Mental models are developed through interaction and determine how people focus their attention and behave [14].

People often hold inaccurate or overly presumptuous mental models about robots [11, 15], likely due to a lack of experience and the strong influence of robot design, e.g., appearance [21, 23], stated task/role [8], and social cues [10]. Thus, there is often a mismatch between users' mental models or expectations of robots and actual robot capabilities [9]. This mismatch can lead to overtrust, inappropriate reliance [16], or even discontinued use (especially of social robots) [1, 5].

As robotic systems become increasingly complex and their presence more commonplace, HRI researchers must find ways to help users understand them more accurately. Some researchers have already identified different factors influencing user mental models and how they can be exploited to increase user acceptance or improve human-robot interactions [12, 18]. Others have focused on anthropomorphism and humanlikeness [17, 25], mental state attribution to robots [13, 22], increasing the legibility [7] and transparency [24] of robot behavior and capabilities, and on estimating users' mental models to inform robot behavior [6, 20]. The overarching aim of this workshop is to promote common ground and shared understanding within the HRI community of the role of user mental models in human-robot interactions, and to foster interdisciplinary collaboration in this broad research area.
Topics and Target Audience
This workshop brings together all researchers and practitioners
interested in helping users understand robotic systems. We welcome contributions from those working on estimating aspects of
users’ mental models using perception and learning algorithms,
designing communicative robot actions or other interventions, and
developing decision-making systems that connect all these elements to create autonomous robot behaviors. Applicants with a
background in human-computer interaction, natural language processing, design, human factors, psychology, neuroscience, cognitive
science, and any other related disciplines are welcome to apply. We
especially encourage submissions from researchers and practitioners contributing theories, methods, and applications that are only sparsely used in the HRI community, such as human factors, gaming, and employee education for working with industrial robots.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
HRI '20 Companion, March 23–26, 2020, Cambridge, United Kingdom
© 2020 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-7057-8/20/03.
https://doi.org/10.1145/3371382.3374856
Prospective participants are welcome to submit papers (max. 4
pages) and poster presentations (max. 2 pages) covering topics
within the broad scope of this workshop. Example areas of relevance
include, but are not limited to: user modeling, explainability, anthropomorphism, and personalization. All papers should be submitted in
PDF format using the ACM template for late-breaking reports and
will be peer reviewed based on their originality, relevance, technical soundness, and clear presentation. Paper acceptance requires
that at least one author registers for and attends the workshop.
We will provide online access to the workshop proceedings on the
workshop website after the conference.
REFERENCES
[1] Anol Bhattacherjee. 2001. Understanding information systems continuance: an
expectation-confirmation model. MIS quarterly (2001), 351–370.
[2] John M Carroll and Judith Reitman Olson. 1988. Mental models in human-computer interaction. In Handbook of Human-Computer Interaction. Elsevier, 45–65.
[3] Jane Cleland-Huang, Orlena Gotel, Andrea Zisman, et al. 2012. Software and
systems traceability. Vol. 2. Springer.
[4] Kenneth Craik. 1943. The Nature of Explanation. Cambridge University Press, Cambridge, UK.
[5] Maartje De Graaf, Somaya Ben Allouch, and Jan Van Dijk. 2017. Why do they
refuse to use my robot?: Reasons for non-use derived from a long-term home
study. In Proc. 2017 ACM/IEEE Intl. Conf. on Human-Robot Interaction. ACM,
224–233.
[6] Sandra Devin and Rachid Alami. 2016. An implemented theory of mind to
improve human-robot shared plans execution. In 2016 11th ACM/IEEE Intl. Conf.
on Human-Robot Interaction (HRI). IEEE, 319–326.
[7] Anca D Dragan, Kenton CT Lee, and Siddhartha S Srinivasa. 2013. Legibility and
predictability of robot motion. In Proc. 8th ACM/IEEE Intl. Conf. on Human-robot
interaction. IEEE Press, 301–308.
[8] Jennifer Goetz, Sara Kiesler, and Aaron Powers. 2003. Matching robot appearance
and behavior to tasks to improve human-robot cooperation. In Proc. 12th IEEE
Intl. Workshop on Robot and Human Interactive Communication. IEEE, 55–60.
[9] Kerstin S Haring, Katsumi Watanabe, Mari Velonaki, Chad C Tossell, and Victor
Finomore. 2018. FFAB—The Form Function Attribution Bias in Human–Robot
Interaction. IEEE Transactions on Cognitive and Developmental Systems 10, 4
(2018), 843–851.
[10] Sara Kiesler. 2005. Fostering common ground in human-robot interaction. In
ROMAN 2005. IEEE Intl. Workshop on Robot and Human Interactive Communication,
2005. IEEE, 729–734.
[11] Takanori Komatsu, Rie Kurosawa, and Seiji Yamada. 2012. How does the difference between users’ expectations and perceptions about a robotic agent affect
their behavior? Intl. Journal of Social Robotics 4, 2 (2012), 109–116.
[12] Sau-lai Lee, Ivy Yee-man Lau, Sara Kiesler, and Chi-Yue Chiu. 2005. Human
mental models of humanoid robots. In Proc. 2005 IEEE International Conference
on Robotics and Automation. 2767–2772.
[13] Serena Marchesi, Davide Ghiglino, Francesca Ciardo, Jairo Perez-Osorio, Ebru
Baykara, and Agnieszka Wykowska. 2019. Do We Adopt the Intentional Stance
Toward Humanoid Robots? Frontiers in Psychology 10 (2019).
[14] Donald A Norman. 2014. Some observations on mental models. In Mental models.
Psychology Press, 15–22.
[15] Steffi Paepcke and Leila Takayama. 2010. Judging a bot by its cover: an experiment
on expectation setting for personal robots. In 2010 5th ACM/IEEE Intl. Conf. on
Human-Robot Interaction (HRI). IEEE, 45–52.
[16] Raja Parasuraman and Victor Riley. 1997. Humans and automation: Use, misuse,
disuse, abuse. Human factors 39, 2 (1997), 230–253.
[17] Elizabeth Phillips, Daniel Ullman, Maartje MA de Graaf, and Bertram F Malle. 2017.
What Does A Robot Look Like?: A Multi-Site Examination of User Expectations
About Robot Appearance. In Proc. Human Factors and Ergonomics Society Annual
Meeting, Vol. 61. SAGE Publications Sage CA: Los Angeles, CA, 1215–1219.
[18] Aaron Powers and Sara Kiesler. 2006. The advisor robot: Tracing people’s mental
model from a robot’s physical attributes. In Proc. 1st ACM SIGCHI/SIGART Conf.
on Human-Robot Interaction. ACM, 218–225.
[19] William B Rouse and Nancy M Morris. 1986. On looking into the black box:
Prospects and limits in the search for mental models. Psychological bulletin 100,
3 (1986), 349.
[20] Dorsa Sadigh, Anca D Dragan, Shankar Sastry, and Sanjit A Seshia. 2017. Active
Preference-Based Learning of Reward Functions. In Robotics: Science and Systems.
[21] Valerie K Sims, Matthew G Chin, David J Sushil, Daniel J Barber, Tatiana Ballion,
Bryan R Clark, Keith A Garfield, Michael J Dolezal, Randall Shumaker, and
Neal Finkelstein. 2005. Anthropomorphism of robotic forms: A response to
affordances?. In Proc. Human Factors and Ergonomics Society Annual Meeting,
Vol. 49. SAGE Publications Sage CA: Los Angeles, CA, 602–605.
[22] Sam Thellman and Tom Ziemke. 2019. The Intentional Stance Toward Robots:
Conceptual and Methodological Considerations. In The 41st Annual Conf. of the
Cognitive Science Society, July 24-26, Montreal, Canada. 1097–1103.
[23] Sarah Woods, Kerstin Dautenhahn, and Joerg Schulz. 2004. The design space of
robots: Investigating children’s views. In 13th IEEE Intl. Workshop on Robot and
Human Interactive Communication. IEEE, 47–52.
[24] Robert H Wortham and Andreas Theodorou. 2017. Robot transparency, trust and
utility. Connection Science 29, 3 (2017), 242–248.
[25] Jakub Złotowski, Diane Proudfoot, Kumar Yogeeswaran, and Christoph Bartneck. 2015. Anthropomorphism: opportunities and challenges in human–robot
interaction. Intl. Journal of Social Robotics 7, 3 (2015), 347–360.
Workshop Objectives and Activities
The vision for the workshop schedule is to first develop a complete, high-level picture of the problem space by surveying user mental model research across all the related disciplines. This holistic overview then serves as common ground for the break-out session. The half-day workshop will
feature oral presentations of accepted papers, keynote speakers,
and over an hour of interactive activities. Activities will be done
in small groups and will include summarizing the problem space,
brainstorming use cases and collaboration ideas, and making consensus recommendations on difficult questions. After the workshop,
products from these efforts will be made available on the workshop
website.
Time    Activity                                Duration (min)
9:00    Welcoming speech by organizers          15
9:15    Invited speaker talks                   60
10:15   Contributed talks                       45
11:00   Coffee break                            30
11:30   Break-out activities and discussions    75
12:45   Close for lunch
Organizing Team
The team of eight organizers will recruit participants from their
networks as well as through major mailing lists for all the relevant
disciplines. The diversity of disciplinary backgrounds on the organizing team will ensure the recruitment of a multidisciplinary
audience and presenters. In addition, we aim to represent audience
members and presenters from different geographic backgrounds,
genders, and levels of seniority.
Matthew Rueben        Privacy                        Website
Stefanos Nikolaidis   Computational modeling         Website
Maartje de Graaf      Behavioral science             Website
Elizabeth Phillips    Human factors                  Website
Lionel Robert         Teamwork, autonomous vehicles  Website
David Sirkin          Design                         Website
Minae Kwon            Algorithms                     Website
Sam Thellman          Cognitive science              Website