A New, HRI Inspired, View of Intention and Intention Communication
Yasser F. O. Mohammad and Toyoaki Nishida
Department of Intelligence Science and Technology
Graduate School of Informatics
Kyoto University, Japan
Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Intention is one of the most important concepts in understanding human subjective existence as well as in creating naturally interacting artifacts and robots. Currently, intention is modeled as a fixed, though possibly unknown, value that affects the behavior of humans and is communicated in natural interactions. This folk psychology inspired vision underlies the understanding of intention in AI and HRI. Despite this wide utilization, this view of intention is challenged by recent results in experimental psychology and neuroscience. The contribution of this paper is twofold: 1) A new theory for understanding both intention (Intention as a Reactive Plan) and intention communication (the Intention through Interaction hypothesis) is proposed and compared with the traditional view. 2) Three concrete systems based on the proposed view of intention in the areas of HRI, Intelligent User Interfaces, and Robotic Control Architectures are introduced with comparisons to systems based on the traditional view. This novel approach to intention can inform research in HRI toward more naturally interactive robots, and research in psychology toward a better understanding of our own intentions.

Introduction

In their final report for the DARPA/NSF interdisciplinary study on human-robot interaction, Burke and her colleagues identified intention recognition as one of the most important technological challenges facing HRI research (Burke & Scholtz 2004). Despite the increase in the research resources devoted to this area, this challenge is still far from being met by any available robot. We propose that one of the main reasons for this is the reliance on the traditional view of intention based on the dictionary definition of the English word and on folk psychology, which has led most research to employ symbolic processing and high-level planning techniques that are not robust in real-world environments; this situation in turn has led to, at most, modest success.

The Traditional View of Intention

The simplest approach to modeling intention is as a point in what can be called the intention domain, together with a mapping function between the position of this point in this domain and the actual behavior appearing in the real world through actions. This is the view of intention employed by folk psychology and is the underlying framework for most research in HRI and AI.

It is worthwhile to note that intention in this view (as well as beliefs, desires, and goals) has all the qualifications of a symbol in the traditional AI sense. This explains the fact that most of the research done in AI and HRI related to intention modeling to date uses symbolic manipulation. Evolution of the internal state is coded in this framework as continuous adoption and revocation of different intentions. In the following subsections, some reflections of this view on AI and HRI are given to show how much this folk psychology based understanding guides research in these areas.

Intention Modeling in AI and HRI

(Bratman 1987) considers intention as the main attitude that directs future planning. His Planning Theory of Intention emphasizes the goal side of intention. Intention in this view is a goal that is closely coupled to beliefs. He holds that the whole concept of intention is used to guide practical reasoning toward the areas relevant to the current intentions, in order to increase its efficiency (Bratman 1999).

Most approaches to modeling intention in AI are based on the Planning Theory of Intention. A very good example is the BDI framework, a symbolic framework used extensively by researchers in the agent community (L. Braubach & Lamersdorf 2004). Intentions in this framework are modeled as commitments to specific plans to focus the reasoning power of the agent (Morreale et al. 2006). Intentions in HRI are usually manipulated using BDI-like deliberation as well (Parsons et al. 2000), (Stoytchev & Arkin 2004).

Intention communication in AI and HRI

The traditional characterization of intention leads to a specific understanding of intention communication that separates this process into two different stages:

1. Intention Expression: Behaving in a way from which the intention can be inferred.

2. Intention Recognition: Finding the intention behind a specific behavior.
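Under this two-stage view, recognition reduces to selecting one symbol from a countable intention set given the observed behavior. A minimal sketch of that idea follows; the intention set, observed actions, and likelihood table are all hypothetical illustrations, not taken from any system discussed in this paper.

```python
# A minimal sketch of the traditional two-stage view: intention recognition
# as picking the single most likely symbol from a discrete intention set.
# The intention symbols, actions, and likelihood values are invented.

LIKELIHOOD = {          # P(observed action | intention symbol)
    "approach": {"greet": 0.7, "pass_by": 0.2, "avoid": 0.1},
    "wave":     {"greet": 0.8, "pass_by": 0.1, "avoid": 0.1},
    "turn":     {"greet": 0.1, "pass_by": 0.3, "avoid": 0.6},
}

def recognize(actions):
    """Stage 2 (recognition): return the most likely intention symbol."""
    intentions = {"greet", "pass_by", "avoid"}
    score = {i: 1.0 for i in intentions}
    for a in actions:
        for i in intentions:
            score[i] *= LIKELIHOOD[a][i]
    return max(score, key=score.get)

print(recognize(["approach", "wave"]))  # greet
```

Note how the output is a single discrete symbol: the behavior stream is collapsed onto one point of the intention domain, which is exactly the property the rest of this paper argues against.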
(Rachid Alami & Chatila 2005) proposed using human-robot interaction as an integral part of the robot's architecture, but the way this is proposed to be achieved still relies on the traditional view of intention as a goal/plan and uses the JIT (Joint Intention Theory) to form the coherence between the robot's and the human's goals (intentions).

The JIT itself builds on the view of intention as a goal and bases the success of team operation on the commitment of the team members toward a Joint Persistent Goal, which is considered a way to synchronize the intentions of the members (Cohen & Levesque 1990). The JIT uses symbolic manipulation and logic to manage the intentions and beliefs of team members, and while this can be very successful in dialogue management or simple collaborative scenarios, it often fails in complex noisy environments where the reliance on symbolic beliefs can lead to ungrounded behavior.

Problems of the Traditional View

As simple and ubiquitous as the folk psychology based traditional view of intention is, it faces many difficulties when investigated from experimental psychology and neuroscience perspectives, and systems based on this view of intention in HRI face some practical difficulties as well. The following subsections show some of those difficulties.

Experimental Psychology

(Gollwitzer 1999) distinguishes between two types of intention:

Goal Intentions These intentions specify what should be achieved (i.e. I intend to achieve X).

Implementation Intentions These intentions define a kind of coupling between the perceived situation and the execution of a goal-directed behavior (i.e. If situation Y arises, I will start executing the goal-directed behavior Z).

Studies in experimental psychology showed that only 20% to 30% of human behavior related to a specific domain of action can be accounted for by goal intentions (Sheeran 2002); moreover, the main reason for this weakness in the correlation between intention and behavior is the failure to construct a plan from the goal (Sheeran 2002). Implementation intentions were shown to increase the attainability of goals (Sheeran 2002) and to reduce the effect of environmental distracters (Gollwitzer & Sheeran 2006), (Paschal Sheeran & Gollwitzer 2005). It is believed that the existence of an implementation intention increases the probability of detecting the associated situation (Gollwitzer & Sheeran 2006), and converts the effortful plan initiation needed to attain goal intentions into effortless unconscious behavior launching based on environmental cues (Gollwitzer 1999).

Although traditional if-then constructs can be used to encode implementation intentions in the symbolic framework, those constructs are not available to traditional intention processing systems like the BDI framework except through complicated logic-based manipulations that reduce the effectiveness of those representations. A main reason for this situation is that the traditional framework treats intention as a single point in the intention domain, and as a result implementation intentions cannot be implemented as simple atomic entities but only as rules for revoking and adopting intentions based on environmental cues.

Neuroscience

A main feature of the traditional view of intention is its characterization of intention as a cause of action. This can also be seen in the Planning Theory of Intention and all the work done under the umbrella of the BDI framework in AI. It is beneficial here to consider the two types of intentions defined by Searle, namely prior intentions, which specify actions or goals in the future, and intention-in-action, which happens just before the action, causing it to feel intentional to the human and, supposedly, causing the action (Searle 1983).

Many recent studies in neuroscience challenge this view and show that the conscious feeling of the intention-in-action actually happens after the initiation of action preparation, not before it. As surprising as it may sound, many neuroscientists tell us that intentions are results of action preparation in the brain rather than being causes of it (Haggard 2005).

It is interesting to note that the traditional view of intention has no problem modeling goal intentions or prior intentions, as the domain of effect of these types of intention is completely within the mind, but once the role of intention in relation to action enters the picture (implementation intentions and intention-in-action), problems start to appear. This should not be surprising, given the symbolic nature of the traditional view of intention.

Human Robot Interaction

Research in mobile robotics showed the importance of embodiment for achieving intelligent behavior in complex real-world situations, which caused a shift in robotic control architectures toward reactive or hybrid reactive-deliberative architectures. HRI research was not an exception, and many researchers in HRI use reactive architectures in order to embody the robot in the environment, as long as intention communication is not considered. Once intention communication becomes the focus of research, a shift back to symbolic and deliberative architectures appears again (Stoytchev & Arkin 2004). The reason for this lies in the traditional understanding of intention as a product of a transcendental mind that is connected to the actuators of the robot only indirectly through a planning process.

The practical need to embody the robot in the interaction context was the primary reason the authors tried to find a new theory of intention that can capture the dynamics of intention communication in an embodied way (Mohammad & Nishida 2007b).

Challenges for the Theory of Intention

Any theory of intention must meet two challenges:

The first challenge is modeling intention in a way that can represent all aspects of intention mentioned in the discussion above, including being a goal and a plan, while being able to represent implementation intentions and being in accordance with the neuroscience evidence that intention-in-action becomes conscious only as a result of action preparation rather than being the cause of action.
The second challenge is to model intention communication in a way that respects its continuously synchronizing nature, which is not captured by the recognition/expression duality of the traditional view. This means recasting the problem of intention communication from two separate problems into a single problem we call Mutual Intention formation and maintenance.
Mutual Intention is defined in this work as a dynamically
coherent first and second order view toward the interaction
focus.
The first order view of the interaction focus is the agent's own cognitive state toward the interaction focus.

The second order view of the interaction focus is the agent's view of the other agent's first order view.
Two cognitive states are said to be in dynamical coherence if and only if the two states co-evolve according to a fixed dynamical law.
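The definition of dynamical coherence can be sketched numerically: two cognitive states co-evolving under one fixed law until they agree. The vector representation, the particular drift law, and the rate constant below are hypothetical illustrations; the paper does not prescribe a specific law.

```python
import numpy as np

def coupling_law(own, other, rate=0.2):
    """A fixed dynamical law (invented for illustration): each state
    drifts a fixed fraction of the way toward the other's state."""
    return own + rate * (other - own)

# First-order views of two agents toward the interaction focus,
# represented here as small state vectors.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# Co-evolution under the shared law: because both states follow the same
# fixed law, they converge, i.e. they become dynamically coherent.
for _ in range(50):
    a, b = coupling_law(a, b), coupling_law(b, a)

print(np.allclose(a, b, atol=1e-3))  # True: the two views now agree
```

The point of the sketch is only that coherence is a property of the joint dynamics, not of either state alone; any fixed law with a contracting difference would do.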
The Proposed Model of Intention

To solve the problems with the traditional view, we propose a new vision of intention that tries to meet the two challenges presented in the last section based on two main ideas:

1. Intention as a Reactive Plan, which addresses the representational issue (challenge 1).

2. Intention through Interaction, which addresses the communication modeling issue (challenge 2).

Intention as a Reactive Plan

This hypothesis can be stated as: Every atomic intention is best modeled as a Reactive Plan.

A Reactive Plan in this context is defined as an executing process, with a specific set of final states (goals) and/or a specific behavior, that reacts to sensor data directly and generates a continuous stream of actions to attain the desired final states (goals) or realize the required behavior.

This definition of atomic intentions has a representational capability that exceeds the traditional view of the intention point as a goal or a passive plan because it can account for implementation intentions, which were proven very effective in affecting human behavior.

This definition of atomic intentions is also in accordance with the aforementioned neuroscience results, as the conscious recognition of the intention-in-action can happen any time during the reactive plan execution (it may even be a specific step in this plan), which explains the finding that preparation for the action (the very early steps of the reactive plan) can precede the conscious intention.

A single intention cannot then be modeled by a single value at any time, simply because the plan is still in execution and its future generated actions will depend on the inputs from the environment (and other agents). This leads to the next part of the new view of intention, explained in the following section.

Intention through Interaction

This hypothesis can be stated as:

1. Intention can be best modeled not as a fixed unknown value, but as a dynamically evolving function over all possible outcomes.

2. Interaction between two agents couples their intention functions, creating a single system that co-evolves as the interaction goes on. This co-evolution can converge to a mutual intention state.

The intention of the agent at any point in time, in this view, is not a single point in the intention domain but a function over all possible outcomes of currently active plans. From this point of view, every possible behavior (modeled by a Reactive Plan) has a specific probability of being achieved through the execution of the reactive plan (intention), which is called its intentionality.

The view presented in this hypothesis is best described by a simple example. Fig. 1 shows the progress of the intention function over time while a human is drawing a "B" character. In the beginning, when the reactive drawing plan is initiated, its internal processing algorithm limits the possible output behaviors to a specific set of possible drawings (which can be called the intention at this point). When the human starts to draw, with each stroke point the possible final characters are reduced and the probability of each drawing changes. If the reactive plan is executing correctly, the intention function will tend to sharpen with time. By the time of the disengagement of the plan, the human will have a clear view of his/her own intention, which corresponds to a very sharp intention function. It should be clear that the mode of the final form of the intention function need not correspond to the final drawing unless the coupling law was carefully designed (Mohammad & Nishida 2007c).

Figure 1: The evolution of the Intention Function while drawing a "B" character

The relation between this view of intention and the traditional view presented before has an analogy in physics. Intention in the traditional view is semantically similar to the Newtonian view of physical variables as single-valued unknowns (e.g. speed, position, momentum, etc.), while in the new view, intention has a semantic analogous to that of the wave function in quantum physics.
Application Oriented Comparison Between the Two Views

To prove the applicability of the proposed view, and to show how HRI can be used to study its implications for understanding human intention, the following subsections briefly present three concrete implementations of systems designed in accordance with it, and compare them to folk psychology inspired systems.

Robot Intention Modeling in HRI

Many researchers proposed robotic control architectures that try to go beyond pure reactivity using hybrid reactive-deliberative architectures where intention expression and recognition are implemented in the deliberative part using symbolic manipulation (mostly as a BDI agent (Stoytchev & Arkin 2004)). This is in accordance with the traditional view of intention.

Based on the new model of intention as a reactive plan, the authors proposed EICA (the Embodied Interactive Control Architecture), a hybrid reactive-deliberative architecture for real-world agents. In this system, intention-in-action is implemented in the reactive part while prior/goal intentions can be modeled by the deliberative part.

Fig. 2 shows the organizational view of the EICA architecture. The system consists of a dynamic set of concurrently active processes that continuously generate either simple actions or reactive plans (intentions). Those reactive plans and simple actions are registered in the intention function, which realizes the model of intention proposed here. The action integrator operates on the intention function, handling the conflicts between intentions and generating the final actuating commands to the motor system of the robot or agent (Mohammad & Nishida 2007b), (Mohammad & Nishida 2007a).

Figure 2: EICA: Organizational View

The Intention Function module is the heart of EICA. This module is the computational realization of the view of intention presented in this paper. It is realized as an active memory module (a dynamical system) in which the required behavior plans (intentions) of all the other active modules in the architecture are registered; the module then automatically updates the intentionality (a measure of the probability of selection as the next action) of all those intentions based on the outputs of the emotional module (which summarizes the information from the short past history) and the innate drives module.

The TalkBack robot is the first realization of EICA in a miniature robot, built to test the effectiveness of using simple motion cues in expressing the intention of the robot to a human who deals with it for the first time. In this experiment, the user should guide the robot using hand gestures to follow a specific path projected onto the ground. Along this path there exist some obstacles that need to be avoided, and gates that require precise timing or a key to pass through. The obstacles and gates are only seen by the robot when it is within a few centimeters of them and are not seen by the human. The task can be accomplished if and only if the robot and the human succeed in exchanging their knowledge and intentions. For details refer to (Mohammad & Nishida 2007d).

Using the traditional formulation, the intention expression functionality could have been implemented using a BDI agent that connects the robot's beliefs about the environment and its desire to inform the user about the objects it recognizes to a discrete set of intentions. Those intentions then guide a planner to produce the required responses. This formulation would suffer from the known problems of intention revocation, coping with unexpected situations, and other problems typical of this formulation.

In (Mohammad & Nishida 2007d), the authors used EICA to overcome those limitations by realizing the intention modeling formulation presented in this paper, although intention communication using dynamical coupling was not needed. Consider the situation in which the robot is moving according to the human's commands and then faces an obstacle that needs to be avoided. Fig. 3 shows the evolution of the intentionality of the reactive plan planShowSuggestion, which generates the actual feedback, and of the actions generated by the innate drive driveObey, responsible for forcing the robot to follow the human's commands.

Figure 3: Intentionality Evolution of the Suggestion and Obedience Plans. (1) Approaching an obstacle, (2) User Insists to Collide, (3) Very near to a collision, (4) User Gives Avoidance Command

In the beginning, the intentionality of the feedback intention was not high enough to generate the feedback until the robot came very near to the obstacle ({1} in the figure); at a specific point the feedback started, but unfortunately the human did not understand it immediately, so (s)he repeatedly gave a command to go in the direction of the obstacle ({2} in the figure). This caused the intentionality of the obedience-generated actions to rise, stopping the feedback. This in turn caused a very-near-to-a-collision situation ({3} in the figure), raising the feedback intentionality again and generating the feedback until the human finally gave a direction command that caused the robot to avoid the obstacle. As seen in this example, the evolution of the intentionality of the different intentions (reactive plans) is smooth, which makes the robot's actions more natural and less jerky than in traditional systems where intention switches suddenly.

Figure 4: Interactive Perception Function Illustration

Figure 5: The Interactive Perception System

Unfortunately, no comparative study is available between EICA based and BDI based implementations.
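The smooth competition between planShowSuggestion and driveObey described for TalkBack can be sketched as two intentionality values tracking opposing drives, with the action integrator executing whichever is currently higher. The distance signal, update rule, and all constants are invented for illustration; only the module names mirror the example above.

```python
def update_intentionality(value, drive, rate=0.5):
    """Smoothly move an intentionality value toward its driving signal."""
    return value + rate * (drive - value)

suggestion, obedience = 0.0, 1.0   # planShowSuggestion, driveObey
decisions = []
for distance in [1.0, 0.5, 0.1, 0.05, 0.5, 1.0]:  # approach, then avoid
    # The closer the obstacle, the stronger the drive to show the
    # suggestion; obedience dominates while the path is clear.
    suggestion = update_intentionality(suggestion, 1.0 - distance)
    obedience = update_intentionality(obedience, distance)
    # The action integrator executes the currently most intentional plan.
    decisions.append("feedback" if suggestion > obedience else "obey")

print(decisions)  # a smooth hand-over: obey -> feedback -> obey
```

Because the intentionalities evolve continuously rather than switching, the hand-over between plans is gradual, which is the source of the less jerky behavior noted above.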
Filter | Passive Perception | Interactive Perception online | Interactive Perception offline
FIR    | 0.340394           | 0.24669                       | 0.30225
IIR    | 0.203921           | 0.14026                       | 0.19822

Table 1: Error in recognizing the human's intention behind hand motion (cm) using interactive and passive perception.
Intention Communication in Human Hand Motion

Let's consider the problem of inferring the intended drawing behind free hand movement. This problem is faced in intelligent interfaces as well as in robot learning by demonstration. This section compares the traditional intention based approach with a concrete system designed by the authors based on the new model of intention communication.

A solution based on a BDI agent would assume that the user has a specific drawing in mind, and because the system must reason about the user's intention, the possibilities have to be limited. There are several methods to limit the possible drawings to be considered:

1. Limiting them to a countable set of possible objects.

2. Limiting the drawings to a specific mathematical model and inferring the parameters of the model from the input of the user, as in (Li et al. 2005).

3. Limiting the drawings by some mathematical restrictions on their form. Splines are very common in this category; most commercial drawing applications use some form of this variation.

All the solutions available to the traditional system have limitations when dealing with free drawing because of the inherent need to limit the possible user intentions to a countable set about which reasoning can be done. The new view of intention communication, on the other hand, needs no such restrictions because it does not need to explicitly represent or reason about the human's intention. All that is needed is a system whose internal intention function evolves in alignment with the user's intention function. Based on this idea, the authors proposed the general framework of Interactive Perception in (Mohammad & Nishida 2006).

The Interactive Perception system was designed to attenuate the effects of unintended behavior on the artifact's perception of human body movement in noisy complex environments (Mohammad & Nishida 2006). This system achieves its goal by allowing the perception module to align its perception with the intended behavior of the human using implicit clues in the human response to its feedback. The working of the system is illustrated in Fig. 4.

The input to the system consists of three signals:

1. The Intended Behavior Signal (IBS), which represents the part of the human behavior that results from the execution of his/her intention reactive plans.

2. The Unintended Behavior Signal (UBS), which represents the result of imperfections in the mind-body control system as well as small errors in the reactive plan.

3. Environmental and sensor noise.

The system tries to recover the intended behavior signal employing implicit human feedback, utilizing three features of human body motion:

1. Intended and unintended signals occupy the same range in the frequency spectrum, and the unintended signal has much smaller power.

2. Intended and unintended signals are highly correlated.

3. Intention is not unique (not a single point, as the intention as reactive plan model states).
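The spirit of this separation can be sketched with a toy adaptive filter whose smoothing strength is adapted from the user's implicit feedback (here, simply how far the new input departs from the displayed estimate). This is an invented illustration of the idea, not the nonlinear filter actually used in the Interactive Perception system.

```python
def interactive_filter(samples, base_gain=0.2):
    """Toy adaptive one-pole filter: large deviations are treated as
    intended corrections (follow the input); small deviations are
    treated as unintended jitter (smooth them out)."""
    estimate, out = samples[0], []
    for x in samples:
        error = abs(x - estimate)
        gain = min(1.0, base_gain + error)       # adapt trust in the input
        estimate = gain * x + (1 - gain) * estimate
        out.append(estimate)
    return out

# Small jitter around 0.0 is attenuated; the deliberate jump to ~2.0
# (an intended correction) is tracked immediately.
smoothed = interactive_filter([0.0, 0.1, -0.1, 0.05, 2.0, 2.1, 1.95])
print(smoothed)
```

The real system closes the loop through the human: what counts as "feedback" is the user's reaction to the system's displayed perception, which is what makes the perception interactive rather than merely adaptive.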
The system utilizes an adaptive nonlinear filter followed by another nonlinear operator that is configured using the implicit feedback from the human, to create a primitive form of dynamical coherence between the evolution of the human's intention and the evolution of the system's perception, as suggested by the intention through interaction hypothesis stated earlier. Fig. 5 shows the main blocks of the system. A proof of applicability of the system can be found in (Mohammad & Nishida 2006). Table 1 shows the deviation of the system's perception from the human's intention; as shown, the interactive perception system gave better results than traditional adaptive systems. Table 1 also shows that interactive perception worked better in the online setting, which supports the claim that the system actually utilizes implicit feedback from the user to achieve better mutual intention formation.

Based on the same ideas, the authors built a concrete application for general drawing called NaturalDraw, which gave significantly better results than a system based on the third variation above in a comparison experiment. Some of the comparative results achieved by NaturalDraw and CorelDraw 10 are depicted in Fig. 6 (Mohammad & Nishida 2007c).

Figure 6: The results of NaturalDraw (left) and CorelDraw (right) achieved by the same user. For every drawing the user's subjective score, as well as the time needed to finish it, are also depicted

Joint Intention in HRI

In the traditional framework, joint intention is assumed to be a matter of synchronizing the intention symbols between the human and the robot or agent based on intention recognition and expression. This approach usually fails to work at the micro-interaction level needed to achieve natural interaction using nonverbal behaviors.

In (T. Tajima 2004), entrainment using dynamical systems is proposed as a technical mechanism to achieve joint intention in HRI based on synchronizing the rhythm of nonverbal behavior between the human and the robot in two steps. The first step is called the synchronization phase, in which a low-dimensionality dynamical system is created in the agent that converges to an orbit encoding the rhythm of the nonverbal behavior of the human. The second step is called the modulation phase, during which the dynamical system created in step one is linearly transformed to follow the changes in the rhythm of the human's nonverbal behavior. Fig. 7 summarizes this operation (Toyoaki Nishida & Hiramatsu 2005).

Figure 7: Outline of entrainment based interaction (T. Tajima 2004)

This system can be considered an implementation of the intention through interaction hypothesis proposed in this paper. Fig. 8 shows a performance comparison between a vacuum cleaning robot controlled using the aforementioned system and one controlled using a remote controller (which can be considered an ideal intention communication mechanism according to the traditional view). The results show that although the time needed to finish the job using the proposed system was higher, the operating time during which the human had to pay attention to the cleaning operation is significantly less. This reduction in the cognitive load needed to operate the robot is a result of the ability of the entrainment based system to evolve its intention function in synchronization with the human's intention function.

Figure 8: Comparison between entrainment based algorithm and other algorithms (Toyoaki Nishida & Hiramatsu 2005)
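The two-phase idea (synchronize onto the human's rhythm, then modulate to follow its changes) can be sketched with a single coupled phase oscillator. The human's rhythm, the coupling gain, and the frequency-adaptation rule are all invented for illustration; this is not the dynamical system of (T. Tajima 2004).

```python
import math

dt, coupling, adapt = 0.01, 2.0, 0.5
human_freq = 1.0          # rad/s rhythm of the human's nonverbal behavior
agent_freq = 1.5          # the agent starts with a different rhythm
human_phase = agent_phase = 0.0

for step in range(20000):
    if step == 10000:
        human_freq = 0.7  # the human's rhythm changes mid-interaction
    human_phase += human_freq * dt
    err = math.sin(human_phase - agent_phase)
    # Synchronization: phase coupling pulls the agent onto the human's orbit.
    agent_phase += (agent_freq + coupling * err) * dt
    # Modulation: slow frequency adaptation follows rhythm changes.
    agent_freq += adapt * err * dt

print(abs(agent_freq - human_freq) < 0.05)  # True: rhythm entrained
```

The agent's state here is driven only by the coupling, never by an explicitly recognized intention symbol, which is why such entrainment counts as an instance of the intention through interaction hypothesis.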
Advantages of the New View

The main advantages of the new view of intention over the traditional view can be summarized as:

1. The new view can model implementation intentions as atomic intentions much more simply than the traditional view.

2. The new view is compatible with recent neuroscience findings that indicate that conscious intention actually follows action preparation and so cannot cause it.

3. The new view gives a straightforward way to implement mutual intention by coupling the intention functions of the interacting agents (Mohammad & Nishida 2007c).

4. The reactivity of the intention in the new view guards it against noise and environmental disturbances (Mohammad & Nishida 2007b).

5. The new view gives a way to separate intended and unintended behaviors of the human based on interaction (Mohammad & Nishida 2006).

6. The new view is simpler to implement in modern reactive or hybrid robotic architectures as it needs no complex symbolic planners (Mohammad & Nishida 2007d).

Conclusion

In this paper we analyzed the traditional folk psychology based view of intention underlying most current research in intention modeling and intention communication in AI and HRI, and showed that it faces many difficulties based on evidence from experimental psychology, neuroscience, and HRI research. We also presented a new view of intention that can overcome those limitations, briefly introduced three real-world systems designed using the new view of intention, and compared them to systems based on the traditional view of intention.

Directions for future research include implementing the EICA architecture in a complete robotic application to assess the effectiveness of the new view of intention.

References

Bratman, M. E. 1987. Intentions, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.

Bratman, M. 1999. Faces of Intention: Selected Essays on Intention and Agency. Cambridge University Press, United Kingdom.

Burke, J., M. B. R. E. L. V., and Scholtz, J. 2004. Final report for the DARPA/NSF interdisciplinary study on human-robot interaction. IEEE Transactions on Systems, Man and Cybernetics, Part C 34(2):103–112.

Cohen, P., and Levesque, H. 1990. Intention is choice with commitment. Artificial Intelligence 42:213–261.

Gollwitzer, P. M., and Sheeran, P. 2006. Implementation intentions and goal achievement: A meta-analysis of effects and processes. Advances in Experimental Social Psychology 38:249–268.

Gollwitzer, P. M. 1999. Implementation intentions: Strong effects of simple plans. American Psychologist 54:493–503.

Haggard, P. 2005. Conscious intention and motor cognition. TRENDS in Cognitive Sciences 9(6):290–295.

L. Braubach, A. Pokahr, D. M., and Lamersdorf, W. 2004. Goal representation for BDI agent systems. In the AAMAS Workshop on Programming in Multi-Agent Systems (PROMAS'04), 44–65.

Li, J.; Zhang, X.; Ao, X.; and Dai, G. 2005. Sketch recognition with continuous feedback based on incremental intention extraction. In IUI '05: Proceedings of the 10th international conference on Intelligent user interfaces, 145–150. New York, NY, USA: ACM Press.

Mohammad, Y. F. O., and Nishida, T. 2006. Interactive perception for amplification of intended behavior in complex noisy environments. In International Workshop of Social Intelligence Design 2006 (SID2006), 173–187.

Mohammad, Y. F. O., and Nishida, T. 2007a. EICA: Combining interactivity with autonomy for social robots. In International Workshop of Social Intelligence Design 2007 (SID2007). To appear.

Mohammad, Y. F. O., and Nishida, T. 2007b. Intention through interaction: Towards mutual intention in human-robot interactions. In IEA/AIE 2007 conference. To appear.

Mohammad, Y. F. O., and Nishida, T. 2007c. NaturalDraw: interactive perception based drawing for everyone. In IUI '07: Proceedings of the 12th international conference on Intelligent user interfaces, 251–260. New York, NY, USA: ACM Press.

Mohammad, Y. F. O., and Nishida, T. 2007d. TalkBack: Feedback from a miniature robot. In IEEE International Conference on Intelligent Robotics and Systems. Submitted.

Morreale, V.; Bonura, S.; Francaviglia, G.; Centineo, F.; Cossentino, M.; and Gaglio, S. 2006. Reasoning about goals in BDI agents: the PRACTIONIST framework. In Joint Workshop From Objects to Agents.

Parsons, S.; Pettersson, O.; Saffiotti, A.; and Wooldridge, M. 2000. Intention reconsideration in theory and practice. In Horn, W., ed., the Fourteenth European Conference on Artificial Intelligence (ECAI-2000). John Wiley & Sons.

Paschal Sheeran, T. L. W., and Gollwitzer, P. M. 2005. The interplay between goal intentions and implementation intentions. Personality and Social Psychology Bulletin 31:87–98.

Rachid Alami, Aurelie Clodic, V. M. E. A. S., and Chatila, R. 2005. Task planning for human-robot interaction. In Joint sOc-EUSAI conference, 81–85.

Searle, J. 1983. Intentionality. Cambridge University Press.

Sheeran, P. 2002. Intention-behavior relations: A conceptual and empirical review. In Stroebe, W., and Hewstone, M., eds., European Review of Social Psychology. Chichester, UK: Wiley. 1–36.

Stoytchev, A., and Arkin, R. C. 2004. Incorporating motivation in a hybrid robot architecture. Journal of Advanced Computational Intelligence and Intelligent Informatics 8(3):269–274.

Tajima, T.; Xu, Y.; and Nishida, T. 2004. Entrainment based human-agent interaction. In IEEE Conference on Robotics, Automation, and Mechatronics.

Toyoaki Nishida, Kazunori Terada, T. T. M. H. Y. O. Y. S. Y. X. Y. M. K. T. T. O., and Hiramatsu, T. 2005. Toward robots as embodied knowledge media. IEICE Transactions on Information and Systems E89-D(6):1768–1780.