The Semantics of Potential Intentions
Xiaocong Fan and John Yen
School of Information Sciences and Technology
The Pennsylvania State University
University Park, PA 16802
{zfan, jyen}@ist.psu.edu
Abstract
The SharedPlans theory provides an axiomatic framework of collaborative plans based on four types of intentional attitudes. However, an adequate semantics for the ‘potential intention’ operators is still lacking.
In this paper, we give a formal semantics to potential
intentions, and examine models that can validate various relations between beliefs, intentions, and potential
intentions.
Introduction
Philosophers have long struggled over how best to characterize the concept of intention (Searle 1983; Bratman
1987), which is intimately connected with means-ends reasoning. Normal modal logics have been employed to define possible-worlds semantics for intentions (Cohen &
Levesque 1990; Rao & Georgeff 1995). Konolige & Pollack (1993) provide a representationalist theory of intention,
using cognitive structures to directly represent intentions
and the means-ends relationship among intentions. Singh
& Asher (1993) give a theory of intentions based on Discourse Representation Theory. Existing solutions have been
extended to investigate intentions involving groups and cooperation (Grosz & Sidner 1990; Singh 1993; Herzig &
Longin 2002).
The SharedPlans theory (Grosz & Kraus 1996) provides
an axiomatic framework of collaborative plans based on
four types of intentional attitudes. Operators Int.To and
Int.Th represent intentions that have been adopted by an
agent, while Pot.Int.To and Pot.Int.Th represent potential
intentions—intentions that an agent would like to adopt, but
to which it is not yet committed. Int.To and Pot.Int.To apply
to actions while Int.Th and Pot.Int.Th apply to propositions.
However, Grosz & Kraus only informally characterized
what it means for an agent to have a potential intention.
An adequate semantics for the ‘potential intention’ operators is still lacking. Our aim in this paper is to present a
formal semantics of potential intentions and investigate the
relationships of potential intentions with agent beliefs and
normal intentions.
Desire is a potential influencer of conduct while intention is a conduct-controlling pro-attitude (Bratman 1990).
In this sense, potential intentions are agent desires.
Copyright © 2005, American Association for Artificial Intelligence (www.aaai.org). All rights reserved.
Desires play an important role in determining goals and
intentions that stand for the consistent worlds an agent is
striving for (Cohen & Levesque 1990). The notion of ‘potential intentions’ differs from the concepts of ‘goal’ and
‘want’ as they appear in the literature. Goals are chosen desires (Cohen & Levesque 1990). Thus, one major difference between potential intentions and goals is that an agent
cannot hold incompatible goals but can hold incompatible potential intentions. Potential intentions as used in the
SharedPlans theory are also different from the ‘want’ attitude proposed by Sadek (1992). want(A, p) abbreviates
Bel(A, p) ∨ Int(A, Bel(A, p)). Thus, an agent wants what
it believes. This is quite different from potential intentions.
The paper is organized as follows. We first discuss our motivations, then give the formal language, the formal model, and the semantics, and finally examine models that can validate various relations between beliefs, intentions, and potential intentions.
Motivations
Possible worlds have been treated as collections of propositions (Halpern & Moses 1992), time lines representing a sequence of events (Cohen & Levesque 1990), and
time trees with branching futures (Singh & Asher 1993;
Rao & Georgeff 1995). A common ground of these approaches is that the accessibility relations are maps from
a single world to a set of possible worlds. One exception is Rao & Georgeff’s work (1995), where they examined the relationship between the belief-, desire-, and intention-accessible worlds with respect to the structure of possible worlds; there, a world is a sub-world of another if the former contains fewer paths but is otherwise identical.
From a representationalist’s perspective, Konolige & Pollack (1993) used the notion of an embedding graph to represent the structure of intentions by means of relations among relevant scenarios (i.e., sets of possible worlds)¹. Singh & Asher (1993) employed an embedding function to assign strategies (i.e., abstract specifications of the behavior of an agent, similar to recipes (Grosz & Sidner 1990) and action expressions (Cohen & Levesque 1990)) to agents at different worlds and times.
¹Rao & Georgeff (1995) examined the set relationship among the belief-, desire-, and intention-accessible worlds. This differs from the set relationship considered by Konolige & Pollack, who focused on the relationship between relevant intentions.
AAAI-05 / 71
Due to the unique nature of potential intentions and their
relations with normal intentions, we choose to give a non-normal possible-worlds semantics to intentions and potential intentions, drawing upon techniques used by representationalists. In particular, our model will involve three kinds of relationship among possible worlds: (a) pointwise relationship:
from a world to a sub-world; (b) accessibility relationship:
from a world to a set of worlds; and (c) set relationship: from
a set of worlds to another set. Our model will also include
an embedding function for assigning recipes to agents.
The dynamics of intentions considered here is based on
the following observations on the SharedPlans theory:
(a) An agent may hold conflicting potential intentions but
cannot hold conflicting intentions. Conflicts can arise because an agent may only have partial plans, or may only have an incomplete picture of the plans being used by the other group members.
(b) Potential intentions can be upgraded to intentions, as long as doing so does not cause intention conflicts. However, not all intentions originate from potential intentions.
(c) Potential intentions typically arise from means-ends reasoning (cf. Axioms A5 and A6 (Grosz & Kraus 1996)).
Potential intentions are thus connected with recipes. We
consider three kinds of originated-from relations between
potential intentions and intentions:
• An agent with an intention that p hold will adopt a potential intention to do action α if the agent has a recipe leading to p whose course of action is α;
• An agent has to follow some recipe to pursue an intention. A recipe may contain choice points, each of which is associated with several alternative courses of action. These
alternatives typically serve the same ends (e.g., to make
p hold) but may have different requirements (both physical and epistemic) on the agent. However, due to the uncertainty of the situation, the agent may not know which
one works better when reaching a choice point. In such a
case, the agent can adopt potential intentions rather than
full-fledged intentions so that it can make a better choice
in a later deliberation process;
• A group of agents with an intention to do a course of
action may reach an action whose doer has not been resolved yet. To be helpful, each agent in the group who
has the capability and capacity can adopt a potential intention to do the action.
Formal language
The formal language L is a multi-modal predicate calculus,
augmented with temporal operators from the branching-time
logic CTL* (Emerson 1990).
Definition 1 The alphabet of L has the following symbols: a denumerable set Pred of predicate symbols; a denumerable set of constant symbols Const ⊇ ConstAg ∪ ConstAc ∪ ConstGr ∪ ConstRe ∪ {‘true’}, where the mutually disjoint sets ConstAg, ConstGr, ConstAc and ConstRe are constants for agents, agent groups, primitive act-types (including the empty act nil ∈ ConstAc) and recipes, respectively; a denumerable set of variable symbols Var ⊇ VarAg ∪ VarGr ∪ VarAc ∪ VarRe, where the mutually disjoint sets VarAg = {x, y, x1, · · · }, VarGr = {G, G′, · · · }, VarAc = {a, b, a′, · · · } and VarRe = {γ, γ′, · · · } are variables for agents, agent groups, primitive act-types and recipes, respectively; the recipe refinement operator ⊒; the membership relation operator ∈; the classical connectives ∨ (‘or’) and ¬ (‘not’), and the universal quantifier ∀; the modal operators Bel, Pot.Int.To, Pot.Int.Th, Int.To, Int.Th; the temporal operators X (next), U (until), and the path quantifier E (some path in the future); the action constructor symbols ‘;’ (sequential), ‘|’ (choice), ‘?’ (test), ‘!’ (achieve), and ‘*’ (iterative); and the punctuation symbols ‘)’, ‘(’, ‘,’ and ‘·’.
A term is either a constant or a variable. The syntax of well-formed state formulas and path formulas is defined in Figure 1. Let AE be the set of well-formed action expressions and Fstate the set of well-formed state formulas; let α, β, e, e′ range over AE, and φ, ψ, φ′ range over Fstate.
⟨pred⟩ ::= any element of Pred
⟨ag-term⟩ ::= any element of TermAg
⟨ac-term⟩ ::= any element of TermAc
⟨gr-term⟩ ::= any element of TermGr
⟨re-term⟩ ::= any element of TermRe
⟨var⟩ ::= any element of Var
⟨term⟩ ::= any element of ∪s∈{Ag,Ac,Gr,Re} Terms
⟨ac-exp⟩ ::= ⟨ac-term⟩ | ⟨re-term⟩
    | ⟨ac-exp⟩; ⟨ac-exp⟩ | ⟨ac-exp⟩‘|’⟨ac-exp⟩
    | ⟨s-fmla⟩? | ⟨s-fmla⟩! | ⟨ac-exp⟩*
⟨s-fmla⟩ ::= (⟨pred⟩ ⟨term⟩, · · · , ⟨term⟩)
    | ¬⟨s-fmla⟩ | ⟨s-fmla⟩ ∨ ⟨s-fmla⟩
    | ∀⟨var⟩ · ⟨s-fmla⟩ | E⟨p-fmla⟩
    | ⟨ag-term⟩ ∈ ⟨gr-term⟩ | ⟨ac-term⟩ ∈ ⟨re-term⟩
    | ⟨re-term⟩ ⊒ ⟨re-term⟩ | Bel(⟨ag-term⟩, ⟨s-fmla⟩)
    | Int.To(⟨ag-term⟩ | ⟨gr-term⟩, ⟨ac-exp⟩, ⟨s-fmla⟩)
    | Pot.Int.To(⟨ag-term⟩ | ⟨gr-term⟩, ⟨ac-exp⟩, ⟨s-fmla⟩)
    | Int.Th(⟨ag-term⟩ | ⟨gr-term⟩, ⟨s-fmla⟩, ⟨s-fmla⟩)
    | Pot.Int.Th(⟨ag-term⟩ | ⟨gr-term⟩, ⟨s-fmla⟩, ⟨s-fmla⟩)
⟨p-fmla⟩ ::= ⟨s-fmla⟩ | ¬⟨p-fmla⟩ | ⟨p-fmla⟩ ∨ ⟨p-fmla⟩
    | ∀⟨var⟩ · ⟨p-fmla⟩ | X⟨p-fmla⟩ | ⟨p-fmla⟩U⟨p-fmla⟩
Figure 1: The syntax
To simplify the presentation, we omit the time arguments of the modal operators used in the SharedPlans theory. Bel(A, φ) represents that agent A believes φ; Int.To(A, α, C) and Int.Th(A, φ, C) represent that agent A, under context C, intends to do α and intends that φ hold, respectively; Pot.Int.To(A, α, C) and Pot.Int.Th(A, φ, C) represent that agent A, under context C, potentially intends to do α and potentially intends that φ hold, respectively. We use Pot.Int.Tx(A, α, C) to refer to either Pot.Int.To(A, α, C) or Pot.Int.Th(A, α, C). So used, α refers to either an action (expression) or a proposition. The same applies to Int.Tx (but keep the ‘x’ fixed within a specific definition).
Other connectives and operators, such as ∧, →, F (sometime in the future), A (all paths in the future), and G (all times in the future), can be defined as abbreviations.
A recipe, γ = ⟨eγ, ργ⟩, is composed of an action expression eγ ∈ AE and a constraint ργ ∈ Fstate. Doing the course of action in eγ under the constraint ργ constitutes the performance of γ.
Definition 2 The set of roles contained in an action expression e, Role(e), is defined recursively:
0. Role(nil) = ∅; Role(φ?) = ∅; Role(φ!) = ∅;
1. Role(a) = {a} if a ∈ TermAc;
2. Role(γ) = Role(eγ) if γ ∈ TermRe;
3. Role(e1; e2) = Role(e1) ∪ Role(e2);
4. Role(e1|e2) = Role(e1) ∪ Role(e2);
5. Role(e*) = Role(e).
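Definition 2 is directly computable. As an illustration (the tagged-tuple encoding of action expressions below is our assumption, not part of the formal language), Role can be sketched as:

```python
# Sketch of Definition 2 over a hypothetical AST encoding of action
# expressions: primitive act-types are strings, recipes are
# ("recipe", body), and the constructors are tagged tuples.

def role(e):
    """Recursively collect the roles (primitive act-types) occurring in e."""
    if e == "nil" or (isinstance(e, tuple) and e[0] in ("test", "achieve")):
        return set()                 # Role(nil) = Role(phi?) = Role(phi!) = {}
    if isinstance(e, str):
        return {e}                   # Role(a) = {a} for a primitive act-type a
    tag = e[0]
    if tag == "recipe":              # Role(gamma) = Role(e_gamma)
        return role(e[1])
    if tag in ("seq", "choice"):     # Role(e1;e2) = Role(e1|e2) = union
        return role(e[1]) | role(e[2])
    if tag == "iter":                # Role(e*) = Role(e)
        return role(e[1])
    raise ValueError(f"unknown action expression: {e!r}")

# (a ; (b | c))* contains the roles a, b and c
print(role(("iter", ("seq", "a", ("choice", "b", "c")))))
```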
The Formal Model
The formal model treats each possible world as a time tree with a single past and a branching future (Singh & Asher 1993; Wooldridge & Jennings 1994; Rao & Georgeff 1995). Each world w = ⟨Sw, Rw⟩ is indexed with a set of states Sw partially ordered by the temporal precedence relation Rw. In particular, we use w0 to denote the starting state of world w (i.e., ¬∃s′ · s′ Rw w0). A path in world w is any single branch of w that starts from a given state and contains all the states in some linear sub-relation of Rw. A state transition is caused by the occurrence of a primitive action or an event.
Let the domain of quantification D = DAc ∪ DRe ∪ DAg ∪ DGr ∪ DU, where DAc is the set of all primitive act-types, DRe is the set of recipes defined over DAc, DAg is the set of agents populating each world, DGr = 2^DAg \ {∅} is the set of agent groups, and DU is the set of other individuals. Let DA = DAg ∪ DGr, and DT = DAc ∪ DRe.
Definition 3 A model, M, is a structure M = ⟨W, S, D, B, P, I, π, Υ, Ω, Ξ, Φ, Π1, Π2⟩, where
• W is a set of possible worlds;
• S = ∪w∈W Sw is the set of states occurring in W;
• D is the domain of individuals;
• B ⊆ W × S × DAg × W is the belief accessibility relation;
• P ⊆ W × S × DAg × 2^W assigns the frames-of-mind relation to each agent in DAg at different worlds and states. If P(w, s, A) = {L1, · · · , Lk}, then each Li (1 ≤ i ≤ k) contains the set of worlds that agent A, in that frame of mind, considers possible from state s of world w;
• I ⊆ W × S × DAg × W is the intention accessibility relation. I(w, s, A) contains the set of worlds that agent A considers possible from state s of world w;
• π is an embedding dynamic graph for relating intentions. π is ∅ initially;
• Υ ⊆ W × S × DAg × Pred × DRe associates recipes leading to a proposition with agents at different worlds and states. Υ(w, s, A, p) is the set of recipes that agent A can use at state s of w to bring about p;
• Ω ⊆ DAc × DAg relates act-types and agents. Ω(α) gives the set of agents each of which can take the role of doing action α;
• Ξ interprets constants;
• Φ ⊆ W × S × Pred × D^k interprets predicates (Φ preserves arity);
• Π1 ⊆ DA × Fstate × Fstate assigns initial intentions-that; and
• Π2 ⊆ DA × AE × Fstate assigns initial intentions-to.
Fagin & Halpern (1988) introduce the notion of “non-interacting clusters” of beliefs, where a belief held in one cluster, or frame of mind, may contradict a belief held in another cluster. Drawing upon this idea, we take potential intentions (the relation P above) as partitioning the possible futures into various “frames of mind”.
The dynamic graph π captures the evolution of agent intentions. A dynamic theory of intention evolution can be very complicated; here, we simply assume that π is populated with three kinds of state-indexed relations: the upgrade relation (d), the originated-from relation (Ã), and the elaboration relation (³). Below we give some semantic rules for populating π with these relations.
Rule 1 If at state s of world w, an agent A has a potential intention Pot.Int.Tx(A, α, C), and no intention conflict results from this potential intention, then add Pot.Int.Tx(A, α, C) dw,s Int.Tx(A, α, C) to π.
By using Rule 1, the original potential intention is upgraded to a full-fledged intention.
The next rule states how potential intentions are derived from full-fledged intentions. Let hasRole(A, α) ≜ ∃ϱ ∈ Role(α) · A ∈ Ω(ϱ). Predicate contains(e1, e2) is true iff e2 is a sub action-expression of e1; isDoer(A, α) represents that agent A is the doer of action α.
Rule 2 (deriving potential intentions)
(1) Add Int.Th(A, φ, C) Ãw,s Pot.Int.To(A, α, C′) to π, where C′ = C ∧ Int.Th(A, φ, C), if at state s of world w agent A has the intention with respect to φ, and ∃γ · γ ∈ Υ(w, s, A, φ) ∧ α = eγ;
(2) Add Int.To(A, e, C) Ãw,s Pot.Int.To(A, α, C′) to π, where C′ = C ∧ Int.To(A, e, C), if at state s of world w agent A has the intention to do e, and ∃β · contains(e, α|β) ∧ hasRole(A, α);
(3) Add Int.To(G, e, C) Ãw,s Pot.Int.To(A, α, C′) to π, where C′ = C ∧ Int.To(G, e, C) ∧ isDoer(A, α), if at state s of world w the group G has the intention to do e, and contains(e, α) ∧ ¬resolved(α, G) ∧ (A ∈ G) ∧ hasRole(A, α), where resolved(α, G) is defined in Fig. 2.
Rule 2.(1) says that an agent adopts a potential intention
to do α if α is the action expression of some recipe that can
lead to φ. Rule 2.(2) says that an agent A adopts a potential
intention to do α if A can take the role of α and α is a branch
of some choice expression in e. Rule 2.(3) says that an agent
A in a group G adopts a potential intention to do α if α is a
sub-expression of e, A can take some role required in α, and
the group G has not resolved who will take the role of doing
α. Here, isDoer(A, α) in C′ serves as an escape condition: A can drop the potential intention if it later turns out that the group designates another agent as the doer of α.
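Rule 2.(2) can be read operationally: scan the intended action expression for choice points and, for every branch the agent can play a role in, record an originated-from edge in π. A sketch under a hypothetical tagged-tuple encoding of action expressions, with π reduced to a plain set of edges (all names here are illustrative):

```python
# Hypothetical operational reading of Rule 2.(2). Action expressions are
# tagged tuples (an assumed encoding); omega maps act-types to the
# agents capable of them; pi is a set of originated-from edges.

def subexprs(e):
    """Yield e and all of its sub action-expressions (contains(e, x))."""
    yield e
    if isinstance(e, tuple) and e[0] in ("seq", "choice", "iter", "recipe"):
        for child in e[1:]:
            yield from subexprs(child)

def has_role(agent, e, omega):
    """hasRole(A, e): A is in Omega(rho) for some role rho of e."""
    acts = {x for x in subexprs(e) if isinstance(x, str) and x != "nil"}
    return any(agent in omega.get(act, set()) for act in acts)

def rule2_2(agent, e, ctx, omega, pi):
    """For each choice branch alpha of e that the agent can play a role
    in, add the edge Int.To(A,e,C) ~> Pot.Int.To(A,alpha,C') to pi."""
    source = ("Int.To", agent, e, ctx)
    for sub in subexprs(e):
        if isinstance(sub, tuple) and sub[0] == "choice":
            for alpha in sub[1:]:
                if has_role(agent, alpha, omega):
                    ctx2 = ("and", ctx, source)   # C' = C /\ Int.To(A,e,C)
                    pi.add((source, ("Pot.Int.To", agent, alpha, ctx2)))

# Agent A can do a and b but not c, so intending a ; (b | c) yields a
# potential intention only for the branch b.
pi = set()
omega = {"a": {"A"}, "b": {"A"}, "c": {"B"}}
rule2_2("A", ("seq", "a", ("choice", "b", "c")), "C", omega, pi)
```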
The elaboration relation is used to depict the means-ends structure of intentions. We first define a refinement relation.
Definition 4 Given recipes γ1 = ⟨eγ1, ργ1⟩ and γ2 = ⟨eγ2, ργ2⟩, refines(γ2, γ1, w, s) holds iff:
(1) γ1 and γ2 lead to the same goal: ∃φ∃A · {γ1, γ2} ⊆ Υ(w, s, A, φ);
(2) constraint ργ2 is satisfiable given ργ1: ∃V such that ⟨M, V, w, s⟩, ργ1 |= ργ2 (cf. Fig. 2 for the definition of |=); and
(3) eγ2 reifies eγ1: eγ1 ▷+ eγ2, where eγ2 results from eγ1 by applying ▷ one or more times, and ▷ is defined as:
• if contains(e, e1|e2) holds, then e ▷ e[e1|e2, e1] and e ▷ e[e1|e2, e2], where e[e1|e2, e1] is e with the occurrence of ‘e1|e2’ replaced by e1;
• if contains(e, e1*), then e ▷ e[e1*, e1^k], where e1^k is the sequence of k copies of e1 for some fixed k;
• if contains(e, γ) and refines(γ′, γ, w, s), then e ▷ e[γ, γ′];
• if contains(e, φ!) and ∃A∃γ · γ ∈ Υ(w, s, A, φ), then e ▷ e[φ!, γ].
The basic idea behind the notion of recipe refinement is
that rational agents tend to adopt concrete intentions to pursue general ones.
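The one-step reification relation of Definition 4(3) behaves like a small term-rewriting system. A sketch of its first two cases (choice resolution and bounded unrolling of iteration), again over a hypothetical tagged-tuple encoding:

```python
# Sketch of one-step reification: resolving a choice point or unrolling
# an iteration to a fixed length k, applied at any position in the
# expression. The tuple encoding is assumed for illustration only.

def reify_steps(e, k=2):
    """Yield every e2 obtainable from e by one rewriting step."""
    if not isinstance(e, tuple):
        return
    tag = e[0]
    if tag == "choice":                   # e1|e2 rewrites to e1, and to e2
        yield e[1]
        yield e[2]
    if tag == "iter":                     # e1* rewrites to e1;...;e1 (k copies)
        seq = e[1]
        for _ in range(k - 1):
            seq = ("seq", seq, e[1])
        yield seq
    if tag in ("seq", "choice", "iter"):  # or rewrite inside a sub-expression
        for i, child in enumerate(e[1:], start=1):
            for child2 in reify_steps(child, k):
                yield e[:i] + (child2,) + e[i + 1:]

# (a | b) ; c has exactly two one-step reifications: a ; c and b ; c
print(list(reify_steps(("seq", ("choice", "a", "b"), "c"))))
# -> [('seq', 'a', 'c'), ('seq', 'b', 'c')]
```

Iterating `reify_steps` to a fixed point enumerates the fully concrete courses of action an agent may come to intend.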
Rule 3 Add Int.To(A, eγ1, C) ³w,s Int.To(A, eγ2, C′) to π, where C′ = C ∧ refines(γ2, γ1, w, s), if at state s of world w agent A has the intention to do eγ1, and ∃γ2 · refines(γ2, γ1, w, s).
Rule 3 says that an agent adopts an intention to do eγ2 if the recipe γ2 refines γ1, the one being followed by the agent. Here, refines(γ2, γ1, w, s) in C′ accounts for the adoption of the more elaborate intention.
More rules, which are omitted here, can be given to further extend ³ to accommodate the “elaboration” relation
relative to decomposition and specialization as considered
by Konolige & Pollack (1993).
We now define the notion of ‘sub-world’.
Definition 5 A world w′ is a sub-world of the world w, denoted by w′ ≺ w, if and only if:
(a) Sw′ ⊆ Sw;
(b) Rw′ ⊆ Rw;
(c) ∀s ∈ Sw′, Φ(w′, s) = Φ(w, s);
(d) ∀s ∈ Sw′, ∀A ∈ DAg, ∀v ∈ W, (w′, s, A, v) ∈ B iff (w, s, A, v) ∈ B;
(e) ∀s ∈ Sw′, ∀A ∈ DAg, ∀L ∈ 2^W, (w′, s, A, L) ∈ P iff (w, s, A, L) ∈ P;
(f) ∀s ∈ Sw′, ∀A ∈ DAg, ∀v ∈ W, (w′, s, A, v) ∈ I iff (w, s, A, v) ∈ I; and
(g) ∀s ∈ Sw′, ∀A ∈ DA, ∀p ∈ Pred, ∀γ ∈ DRe, (w′, s, A, p, γ) ∈ Υ iff (w, s, A, p, γ) ∈ Υ.
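Over finite structures, conditions (a)–(g) are mechanical checks. A sketch, with worlds encoded as (states, precedence) pairs and each accessibility relation as a set of tuples whose first two components are a world name and a state (all encodings are illustrative assumptions):

```python
# Sketch of Definition 5 over finite structures. phi maps (world, state)
# to the facts true there; rels is a list of accessibility relations,
# each a set of tuples beginning with (world, state, ...).

def agree(rel, w1, w2, states):
    """The relation restricted to `states` is identical at w1 and w2."""
    a = {t[1:] for t in rel if t[0] == w1 and t[1] in states}
    b = {t[1:] for t in rel if t[0] == w2 and t[1] in states}
    return a == b

def is_subworld(w1, w2, worlds, phi, rels):
    s1, r1 = worlds[w1]
    s2, r2 = worlds[w2]
    if not (s1 <= s2 and r1 <= r2):                              # (a), (b)
        return False
    if any(phi.get((w1, s)) != phi.get((w2, s)) for s in s1):    # (c)
        return False
    return all(agree(rel, w1, w2, s1) for rel in rels)           # (d)-(g)

worlds = {"w": ({"s0", "s1", "s2"}, {("s0", "s1"), ("s0", "s2")}),
          "v": ({"s0", "s1"}, {("s0", "s1")})}
phi = {("w", "s0"): {"p"}, ("w", "s1"): {"q"}, ("w", "s2"): set(),
       ("v", "s0"): {"p"}, ("v", "s1"): {"q"}}
B = {("w", "s0", "A", "u"), ("v", "s0", "A", "u")}
print(is_subworld("v", "w", worlds, phi, [B]))   # True: v prunes a branch of w
```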
The Semantics
Notation: w^r_si denotes the path in w that starts from state si and is defined by r, a linear sub-relation of Rw. si <r sj denotes that sj is the next state after si along r; si <*r sj denotes that sj is a state accessible from si along r. [s0, sk)r = {s | s0 <*r s <*r sk and s ≠ sk}. We also use si <*r sj to refer to the segment of path r between si and sj.
Let act(si <*r sj) be the sequence of primitive actions occurring in the path segment si <*r sj. run(κ, α) holds if the action sequence κ is a run of the action expression α.
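For the finite fragment of action expressions (sequence, choice, and iteration bounded by a fixed unrolling depth), run(κ, α) can be sketched by enumerating runs, much like matching a word against a regular expression; the tuple encoding and the bound are illustrative assumptions, and tests/achieves are ignored:

```python
# Sketch of run(kappa, alpha): kappa is a run of alpha iff it is among
# the action sequences generated by alpha. Iteration is cut off at
# max_iter repetitions, so this only covers the finite fragment.

def runs(e, max_iter=2):
    """Enumerate the action sequences that count as runs of e."""
    if e == "nil":
        return {()}
    if isinstance(e, str):
        return {(e,)}
    tag = e[0]
    if tag == "seq":
        return {u + v for u in runs(e[1], max_iter) for v in runs(e[2], max_iter)}
    if tag == "choice":
        return runs(e[1], max_iter) | runs(e[2], max_iter)
    if tag == "iter":                      # zero up to max_iter repetitions
        out, acc = {()}, {()}
        for _ in range(max_iter):
            acc = {u + v for u in acc for v in runs(e[1], max_iter)}
            out |= acc
        return out
    raise ValueError(f"unsupported: {e!r}")

def run(kappa, e):
    return tuple(kappa) in runs(e)

print(run(("a", "c"), ("seq", "a", ("choice", "b", "c"))))       # True
print(run(("a", "b", "c"), ("seq", "a", ("choice", "b", "c"))))  # False
```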
The function [[·]] gives the denotation of a term relative to Ξ and a sort-preserving variable assignment function V: [[τ]] is Ξ(τ) if τ ∈ Const, and V(τ) otherwise. Let V^c_x be the assignment function that agrees with V except on variable x: V^c_x(x) = c, and ∀y ≠ x · V^c_x(y) = V(y).
Satisfaction of formulas, denoted by |=, is given with respect to a model M, a variable assignment V, a world w, and a state s or a path w^r_s. Figure 2 gives the semantics of L. The semantics of the first-order connectives, the temporal operators, and Bel is straightforward. We here focus on the semantics of the intentional operators.
Int.Th: An agent A intends that φ hold under context C,
iff (i) φ is an ascribed intention ((φ, C) ∈ Π1 (A)) and C
holds; or (ii) Int.Th(A, φ, C) is upgraded from, or connected
with, some other (potential) intention (recorded in π), and
for any v accessible from (w, s), (a) φ and C hold at the first
state of v, (b) v is a sub-world of w, and (c) there exists a
path from s to v0. This semantics of intention differs from existing approaches in three aspects. First, it leverages the features of both the representational and the accessibility-relation approaches; consequently, the model accommodates both ascribed intentions and dynamically generated intentions. Second, satisfying the intention-accessibility relation is no longer a sufficient condition for an agent to hold a non-ascribed intention: the formula itself must be related to some other intention recorded in π. This, again, is a strong representationalist feature. Third, to have a non-ascribed intention, any world accessible from (w, s) must be a sub-world of w, and that world must be reachable from s along some path in w. This establishes a reasonable connection between the accessibility relation and the temporal precedence relation. Thus, intuitively, if Int.Th(A, φ, C) (where (φ, C) ∉ Π1(A)) holds at (w, s), it must be the case that every intention-accessible world can be reached from state s after some effort.
An intention is dropped when the goal is satisfied or it
becomes unachievable. The contexts of intentions can be
used for such purposes. For example, add ¬Bel(A, φ) to
C when an agent A has an intention Int.Th(A, φ, C) where
(φ, C) ∈ Π1 (A). Then the intention can be dropped when
Bel(A, φ) holds (e.g., remove φ from Π1 ). Similarly, the
context of an intention could contain the source intention;
the intention is abandoned when the source is dropped. The details of this topic are beyond the scope of this paper.
Int.To: An agent A intends to do α under context C, iff (i)
α is an ascribed intention ((α, C) ∈ Π2 (A)) and C holds;
or (ii) Int.To(A, α, C) is upgraded from, or connected with,
some other (potential) intention (recorded in π), and for any
v accessible from (w, s), (a) Done(α) and C hold at the first
state of v, (b) v is a sub-world of w, and (c) there exists a
path from s to v0. Here, Done(α) is defined backwards from s along a path: there exists a past state s′ such that the action sequence occurring in the path segment s′ <*r s is a run of α.
Pot.Int.Th: An agent A potentially intends that φ hold under context C iff (a) the agent does not yet have an intention regarding φ; (b) Pot.Int.Th(A, φ, C) originates from some other intention (recorded in π), and there exists a frame of mind Li accessible from (w, s) such that for any v ∈ Li: (c) φ and C hold at the first state of v; (d) v is a sub-world of w; and (e) there exists a path from s to v0. Intuitively, if Pot.Int.Th(A, φ, C) holds at (w, s), it must be the case that there exists a frame of A’s mind accessible from (w, s) such that every world in the frame can be reached after some effort (along some path) from state s in world w. So defined, the model allows inconsistent potential intentions: they can hold in different frames of mind.
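To see concretely how frames of mind license inconsistency, here is a toy check of clauses (c)–(e) alone: worlds are reduced to the set of atoms true at their first state, and the π-edge, sub-world, and path conditions are simply assumed to hold (a deliberately stripped-down illustration):

```python
# Toy illustration of the frames-of-mind clause: a frame witnesses a
# goal iff the goal holds at the first state of every world in it.
# Worlds are frozensets of atoms true at their first state.

def holds(goal, world):
    atom, positive = goal
    return (atom in world) == positive

def frame_witnesses(goal, frame):
    return all(holds(goal, w) for w in frame)

def pot_int_th(goal, frames):
    """Clause: there exists a frame L_i whose every world satisfies the goal."""
    return any(frame_witnesses(goal, f) for f in frames)

frames = [
    [frozenset({"p"}), frozenset({"p", "q"})],   # a frame in which p holds
    [frozenset(), frozenset({"q"})],             # a frame in which p fails
]
# Both Pot.Int.Th(p) and Pot.Int.Th(not p) pass, each witnessed by a
# different frame: the inconsistency never meets inside one frame.
print(pot_int_th(("p", True), frames), pot_int_th(("p", False), frames))  # True True
```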
Pot.Int.To: An agent potentially intends to do α under
⟨M, V, w, s⟩ |= true (1)
⟨M, V, w, s⟩ |= p(τ1, · · · , τn) iff ⟨[[τ1]], · · · , [[τn]]⟩ ∈ Φ(w, s, p) (2)
⟨M, V, w, s⟩ |= ¬φ iff ⟨M, V, w, s⟩ ⊭ φ (3)
⟨M, V, w, s⟩ |= φ ∨ ψ iff ⟨M, V, w, s⟩ |= φ or ⟨M, V, w, s⟩ |= ψ (4)
⟨M, V, w, s⟩ |= ∃y · φ iff ∃m ∈ D such that ⟨M, V^m_y, w, s⟩ |= φ (5)
⟨M, V, w^r_s⟩ |= φ iff ⟨M, V, w, s⟩ |= φ, where φ is a state formula (6)
⟨M, V, w^r_s⟩ |= Xφ iff ⟨M, V, w^r_s′⟩ |= φ, where s <r s′ (7)
⟨M, V, w^r_s⟩ |= φUψ iff (a) ∃sk · s <*r sk such that ⟨M, V, w^r_sk⟩ |= ψ and ∀s′ ∈ [s, sk)r, ⟨M, V, w^r_s′⟩ |= φ; or (b) ∀s′ · s <*r s′, ⟨M, V, w^r_s′⟩ |= φ (8)
⟨M, V, w, s⟩ |= Eφ iff there exists a path w^r_s such that ⟨M, V, w^r_s⟩ |= φ (9)
⟨M, V, w, s⟩ |= (x ∈ G) iff [[x]] ∈ [[G]] (10)
⟨M, V, w, s⟩ |= (α ∈ γ) iff contains([[eγ]], [[α]]) (11)
⟨M, V, w, s⟩ |= (γ1 ⊒ γ2) iff refines([[γ1]], [[γ2]], w, s) (12)
⟨M, V, w, s⟩ |= Bel(A, φ) iff ∀v ∈ B(w, s, A) · ⟨M, V, v, v0⟩ |= φ (13)
⟨M, V, w, s⟩ |= Int.Th(A, φ, C) iff (i) (φ, C) ∈ Π1(A) and ⟨M, V, w, s⟩ |= C, or (ii) [∃η, w′, s′ · (η dw′,s′ Int.Th(A, φ, C) ∈ π) ∨ (η ³w′,s′ Int.Th(A, φ, C) ∈ π)], and ∀v ∈ I(w, s, A), (a) ⟨M, V, v, v0⟩ |= φ ∧ C, (b) v ≺ w, and (c) ∃r · s <*r v0 (14)
⟨M, V, w, s⟩ |= Int.To(A, α, C) iff (i) (α, C) ∈ Π2(A) and ⟨M, V, w, s⟩ |= C, or (ii) [∃η, w′, s′ · (η dw′,s′ Int.To(A, α, C) ∈ π) ∨ (η ³w′,s′ Int.To(A, α, C) ∈ π)], and ∀v ∈ I(w, s, A), (a) ⟨M, V, v, v0⟩ |= Done(α) ∧ C, (b) v ≺ w, and (c) ∃r · s <*r v0 (15)
⟨M, V, w, s⟩ |= Pot.Int.Th(A, φ, C) iff (a) ⟨M, V, w, s⟩ ⊭ Int.Th(A, φ, C), (b) ∃η, w′, s′ · (η Ãw′,s′ Pot.Int.Th(A, φ, C) ∈ π), and ∃Li ∈ P(w, s, A), ∀v ∈ Li, (c) ⟨M, V, v, v0⟩ |= φ ∧ C, (d) v ≺ w, and (e) ∃r · s <*r v0 (16)
⟨M, V, w, s⟩ |= Pot.Int.To(A, α, C) iff (a) ⟨M, V, w, s⟩ ⊭ Int.To(A, α, C), (b) ∃η, w′, s′ · (η Ãw′,s′ Pot.Int.To(A, α, C) ∈ π), and ∃Li ∈ P(w, s, A), ∀v ∈ Li, (c) ⟨M, V, v, v0⟩ |= Done(α) ∧ C, (d) v ≺ w, and (e) ∃r · s <*r v0 (17)
⟨M, V, w, s⟩ |= IntX(G, α, C) iff ∀x ∈ G, ⟨M, V, w, s⟩ |= IntX(x, α, C ∧ IntX(G, α, C)), where IntX ∈ {Int.To, Int.Th, Pot.Int.To, Pot.Int.Th} (18)
⟨M, V, w, s⟩ |= Done(α) iff ∃r, s′ · s′ <*r s, and run(act(s′ <*r s), α) (19)
⟨M, V, w, s⟩ |= resolved(α, G) iff ∀ϱ ∈ Role(α), ∃A ∈ G ∩ Ω(ϱ) such that ∀x ∈ G · ⟨M, V, w, s⟩ |= Bel(x, isDoer(A, ϱ)) (20)
Figure 2: The formal semantics
context C iff (a) the agent does not yet have an intention to do α; (b) Pot.Int.To(A, α, C) originates from some other intention (recorded in π), and there exists a frame of mind Li of A accessible from (w, s) such that for any v ∈ Li: (c) Done(α) and C hold at the first state of v; (d) v is a sub-world of w; and (e) there exists a path from s to v0.
Group intentions: A group G has an intention regarding α iff each agent in G has an intention regarding
α under the context that the individual intention originates from, and is governed by, the group intention. An
intention ultimately leads to the performing of actions.
Since group intentions entail individual intentions, the fulfillment of Int.To(G, e, C) depends on the fulfillment of
Int.To(A, e, C′) for each A ∈ G. Without going into detail, we here assume that by fulfilling Int.To(A, e, C′), agent A simply performs those primitive actions in e that are within A’s capability. Under this assumption, if ∀ϱ ∈ Role(e) · A ∈ Ω(ϱ) ∨ B ∈ Ω(ϱ), then Int.To(A, e, C′) and Int.To(B, e, C′) jointly will get e done.
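The closing observation is a role-coverage check over Ω; a minimal sketch, with Ω and the role set as hypothetical inputs:

```python
# Sketch of the joint-coverage condition: e gets done by A and B if
# every role of e can be taken by at least one of them.

def jointly_cover(roles, agents, omega):
    """True iff each role rho has some capable agent in the group."""
    return all(any(a in omega.get(rho, set()) for a in agents) for rho in roles)

omega = {"lift": {"A"}, "steer": {"B"}, "push": {"A", "B"}}
print(jointly_cover({"lift", "steer", "push"}, {"A", "B"}, omega))  # True
print(jointly_cover({"lift", "steer", "fly"}, {"A", "B"}, omega))   # False
```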
Relationship among intentions
The above model allows us to spell out the relations between
intentions and potential intentions.
Definition 6 The structure M is upgrade-allowable iff: if Pot.Int.Tx(A, α, C) dw,s Int.Tx(A, α, C) ∈ π, then
(1) ⟨M, V, w, s−1⟩ |= Pot.Int.Tx(A, α, C), where s−1 Rw s;
(2) ⟨M, V, w, s⟩ |= Int.Tx(A, α, C); and
(3) ∃F ⊆ P(w, s−1, A) such that F is consistent with respect to α, and I(w, s, A) ⊆ ∪Li∈F Li.
Definition 7 The structure M is proactive-allowable iff:
(1) if Int.Th(A, φ, C) Ãw,s Pot.Int.To(A, α, C′), then ∃Li ∈ P(w, s, A) such that Li ⊆ I(w, s, A) and ⟨M, V, w, v⟩ |= φ ∧ Done(α) for all v ∈ Li;
(2) if Int.To(A, e, C) Ãw,s Pot.Int.To(A, α, C′), then
(a) ∃Li ∈ P(w, s, A) such that Li ⊆ I(w, s, A) and ⟨M, V, w, v⟩ |= Done(e) ∧ Done(α) for all v ∈ Li; and
(b) ∃Lj ∈ P(w, s, A) such that Lj ⊆ I(w, s, A) and ⟨M, V, w, v⟩ |= Done(e) ∧ ¬Done(α) for all v ∈ Lj;
(3) if Int.To(G, e, C) Ãw,s Pot.Int.To(A, α, C′), then ∃Li ∈ P(w, s, A) such that Li ⊆ I(w, s, A) and for all v ∈ Li, ⟨M, V, w, v⟩ |= Done(e) ∧ Done(α) and ∃ϱ ∈ Role(α) such that ϱ ∈ act(s <* v) and isDoer(A, ϱ).
Definition 8 The structure M is refinement-allowable iff: if Int.To(A, eγ1, C1) ³w,s Int.To(A, eγ2, C2), then ∃φ · γ1 ∈ Υ(w, s, A, φ) such that ∀v ∈ I(w, s, A), ⟨M, V, w, v⟩ |= φ ∧ Done(eγ2).
Let M0 be a model that is upgrade-allowable, proactive-allowable, and refinement-allowable. We examine the relations between potential intentions and intentions in M0.
Proposition 1
1. M0 allows agents to hold conflicting potential intentions;
2. An intention cannot coexist with a potential intention toward the same ends:
⟨M0, V, w, s⟩ ⊭ Int.Tx(A, α, C) ∧ Pot.Int.Tx(A, α, C);
3. Agents proactively choose actions relative to their competence:
⟨M0, V, w, s⟩ |= Int.To(A, α|β, C) ∧ hasRole(A, α) → Pot.Int.To(A, α, C ∧ Int.To(A, α|β, C));
⟨M0, V, w, s⟩ |= Int.To(G, α; β, C) ∧ (A ∈ G) ∧ hasRole(A, α) ∧ ¬resolved(α, G) → Pot.Int.To(A, α, C ∧ Int.To(G, α; β, C) ∧ isDoer(A, α)).
We then consider the relations between Bel and Int.Tx.
Definition 9 Define the following constraints on B and I:
(R1 ) ∀w∀s∀A∃v, v ∈ I(w, s, A) ∩ B(w, s, A);
(R2 ) ∀w∀s∀A∀v ∈ B(w, s, A), I(w, s, A) ⊆ I(w, v, A);
(R3 ) ∀w∀s∀A∀v ∈ I(w, s, A), B(w, v, A) ⊆ I(w, s, A).
(R1 ) is actually the weak-realism constraint (Rao &
Georgeff 1995). Let M{i} be M0 satisfying (Ri ). We have
the following properties about belief-intention relations.
Proposition 2
⟨M{1}, V, w, s⟩ |= Int.Th(A, φ, C) → ¬Bel(A, ¬φ);
⟨M{1}, V, w, s⟩ |= Int.To(A, α, C) → ¬Bel(A, ¬Done(α));
⟨M{2}, V, w, s⟩ |= Bel(A, Int.Th(A, φ, C)) → Int.Th(A, φ, C);
⟨M{3}, V, w, s⟩ |= Int.Th(A, φ, C) → Int.Th(A, Bel(A, φ), C);
⟨M{2,3}, V, w, s⟩ |= Bel(A, Int.Th(A, φ, C)) → Int.Th(A, Bel(A, φ), C).
In Proposition 2, the first two properties say that in M{1} an agent does not intend what it believes impossible (this is Axiom 1 of the SharedPlans theory (Grosz & Kraus 1999)). The third states that an agent does have an intention if it believes it so intends (this is Axiom (A3) of the SharedPlans theory (Grosz & Kraus 1996)). Constraints similar to (R2) can be given to validate the axioms (i.e., A2 and A4 (Grosz & Kraus 1996)) that relate Bel with Int.To and potential intentions. The fourth states that an agent cannot intend φ without intending to believe φ (Sadek 1992). Together, (R2) and (R3) validate the fifth, which shows a certain commutativity between Int.Th and Bel.
Summary
While Int.To and Int.Th play the functional roles of intentions (Bratman 1990), Pot.Int.To and Pot.Int.Th represent only potential commitments. In this paper, we give a formal semantics to the four intentional operators of the SharedPlans theory. Our model considers the dynamic relationships among the four types of intentional attitudes, drawing upon both the representationalist approach and the accessibility-based approach. Using the formal semantics defined in this paper, one can validate many of the proposed properties of the SharedPlans theory and of other belief-intention-driven frameworks for multi-agent collaboration and teamwork.
References
Bratman, M. E. 1987. Intention, Plans, and Practical Reason. Harvard University Press.
Bratman, M. E. 1990. What is intention? In Intentions in
Communication, 15–31. MIT Press.
Cohen, P. R., and Levesque, H. J. 1990. Intention is choice
with commitment. Artificial Intelligence 42:213–261.
Emerson, E. A. 1990. Temporal and modal logic. In Handbook of theoretical computer science (vol. B): formal models and semantics. MIT Press. 995–1072.
Fagin, R., and Halpern, J. Y. 1988. Beliefs, awareness and
limited reasoning. Artificial Intelligence 34:39–76.
Grosz, B., and Kraus, S. 1996. Collaborative plans for
complex group actions. Artificial Intelligence 86:269–358.
Grosz, B., and Kraus, S. 1999. The evolution of SharedPlans. In Rao, A., and Wooldridge, M., eds., Foundations and Theories of Rational Agencies, 227–262.
Grosz, B., and Sidner, C. 1990. Plans for discourse. In
Cohen, P.; Morgan, J.; and Pollack, M., eds., Intentions in
communication. MIT Press. 417–444.
Halpern, J. Y., and Moses, Y. 1992. A guide to completeness and complexity for modal logics of knowledge and
belief. Artificial Intelligence 54(3):319–379.
Herzig, A., and Longin, D. 2002. A logic of intention
with cooperation principles and with assertive speech acts
as communication primitives. In AAMAS ’02: Proceedings
of the first international joint conference on Autonomous
agents and multiagent systems, 920–927.
Konolige, K., and Pollack, M. E. 1993. A representationalist theory of intention. In Bajcsy, R., ed., Proceedings
of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), 390–395. Chambéry, France:
Morgan Kaufmann publishers Inc.: San Mateo, CA, USA.
Rao, A., and Georgeff, M. 1995. Formal models and decision procedures for multi-agent systems. Technical Report 61, Australian Artificial Intelligence Institute, Melbourne, Australia.
Sadek, M. 1992. A study in the logic of intention. In
Nebel, B.; Rich, C.; and Swartout, W., eds., Proceedings of
the Third International Conference on Principles of Knowledge Representation and Reasoning, 462–473. Morgan
Kaufmann publishers Inc.: San Mateo, CA.
Searle, J. R. 1983. Intentionality: An essay in the philosophy of mind. Cambridge University Press.
Singh, M. P., and Asher, N. M. 1993. A logic of intentions
and beliefs. Journal of Philosophical Logic 22:513–544.
Singh, M. P. 1993. Intentions for multiagent systems.
Technical Report KBNL-086-93, MCC, Information Systems Division.
Wooldridge, M., and Jennings, N. R. 1994. Towards a
theory of cooperative problem solving. In Proc. Modelling
Autonomous Agents in a Multi-Agent World (MAAMAW94), 15–26.