From: AAAI Technical Report FS-96-01. Compilation copyright © 1996, AAAI (www.aaai.org). All rights reserved.
Item Descriptions Add Value to Plans
Will Fitzgerald
and R. James Firby
Intell/Agent Systems
1840 Oak
Evanston, Illinois 60201
w-fitzgerald@nwu.edu, firby@cs.uchicago.edu
Abstract
Item descriptions add value to plans. Item descriptions are descriptions of individual objects (or sets of objects) in the world that can be used within plans for reasoning about actions on the objects. Typical reasoning tasks include dereferencing the description to real objects, disambiguating multiple real objects based on the description, and tracking the status of desired goals. We describe the evolution of our thought on a language for item descriptions, and describe some open questions that remain.
Introduction
One of the delightful things about the AAAI Symposium Series is the chance to discuss work in progress. Because the work usually remains in progress, what one reports on for the final version of the paper is often different from what one originally submitted. In this paper, we attempt something slightly different. We present the original submission, and then annotate it with comments about what was right about it, what was wrong, and what open questions are left. Thus, we provide the original submission and then the comments.
The Original Submission
Traditional AI planning systems were concerned more with the ordering of plan operators than with the objects these operators acted upon. These systems assumed (usually without stating it) that the plan execution system could easily map from an internal, symbolic representation to the object represented in the real world. For example, if a planning system created the plan step (pickup block-a), a one-to-one mapping existed between the symbol block-a and some block in the real world. It also assumed that the plan execution system could immediately sense and act on that block.
This is clearly an oversimplification; the real world does
not provide an immediate connection from objects to a planning system's internal representations. This has been recognized for some time. For example, Agre and Chapman (1990) argue for a radically situated planning system in which object representations are strictly deictic. In their terms, this means an active, functionally motivated causal relationship between an agent and an object in the world, such as the-bee-I-am-following. Another approach is to distinguish between internal representations and sensor names (Firby, 1989). As an agent moves around in the world, it can use the information it receives from its sensors to disambiguate objects it senses into its internal representations.
What is missing from these approaches is the value of giving planning agents access to descriptions of objects in the world in addition to representations of objects in the world. Plans typically describe how to act on objects that are currently believed to have some properties. We suggest that it is often powerful to write plans to act on objects that meet a description of those properties.
For example, an agent might have a plan to paint all red fuel drums blue. If the agent acts only on what it currently believes about objects in the world, then it may miss some fuel drums, because it does not know enough about them to justify action. If, however, an agent carries with it the description of the properties, then it can act to find out more about objects in the world. For example, an agent might know that a particular object was a fuel drum, but not its current color; or it might know that it is red, but not what kind of object it is. Without access to a description of the objects upon which an agent is to act, an agent cannot tell whether it needs to get more information, act anyway, avoid action, etc.
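To make this decision concrete, the following is a minimal sketch in Python (not the plan language used in this work); the ItemDescription class, the belief dictionary, and the action labels are hypothetical illustrations, not our implementation. It shows an agent using an item description to choose among acting, skipping an object, and gathering more information.

# Hypothetical sketch: decide what to do with one object, given an item
# description (desired kind and qualities) and partial beliefs about the object.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ItemDescription:
    kind: Optional[str] = None                     # e.g., "fuel-drum"; None means any kind
    qualities: dict = field(default_factory=dict)  # e.g., {"color": "red"}

def decide(description: ItemDescription, beliefs: dict) -> str:
    """Return 'act', 'skip', or 'sense-more' for a single object."""
    wanted = dict(description.qualities)
    if description.kind is not None:
        wanted["kind"] = description.kind
    for slot, value in wanted.items():
        known = beliefs.get(slot)
        if known is None:
            return "sense-more"   # the description mentions a property not yet known
        if known != value:
            return "skip"         # the object definitely does not fit the description
    return "act"                  # every mentioned property is known to match

# A drum of unknown color: the description justifies finding out more.
print(decide(ItemDescription("fuel-drum", {"color": "red"}), {"kind": "fuel-drum"}))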
An item description carries with it two types of information. First, it describes the kind of object that is being described (for example, that the object is a member of the set of fuel drums). Second, it describes the qualities of the object being described (for example, that the color is red). The information about qualities can be missing, of course (for example, an agent could look for any fuel drum). Similarly, the information about kind can be missing (for example, an agent can look for objects that are red in color). In our current syntax, this information is defined using describes and slot-value propositions. So the description underlying "red fuel drum" is:
  (describes object-description-n fuel-drum)
  (slot-value object-description-n color red)
In conjunction with propositions of the sort
  (instance-of object-m fuel-drum)
  (color object-m red)
different plans can be written which depend on either the item description or beliefs about the object. Plans can then depend on combinations of the item descriptions and beliefs about objects. For example, it is now possible to express, "if you're asked to paint fuel drums red, and the object is already red, don't paint it." That is, in the context:
  (and (describes ?description fuel-drum)
       (slot-value ?description color red)
       (color ?object red))
where ?description and ?object are presumably bound in this context.
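As an illustration only, here is a minimal sketch of how such a context might be checked against a single store of propositions; the tuple encoding and the helper name are assumptions made for the example, not the representation used in our system.

# Hypothetical sketch: item descriptions and object beliefs kept as propositions
# in one store, with a check for the "already red, so don't paint it" context.
facts = {
    ("describes", "object-description-1", "fuel-drum"),      # description: kind
    ("slot-value", "object-description-1", "color", "red"),  # description: quality
    ("instance-of", "obj-12", "fuel-drum"),                  # belief about a real object
    ("color", "obj-12", "red"),                              # belief about a real object
}

def already_satisfied(description, obj):
    """True when the description asks for a red fuel drum and the object is already red."""
    return (("describes", description, "fuel-drum") in facts
            and ("slot-value", description, "color", "red") in facts
            and ("color", obj, "red") in facts)

print(already_satisfied("object-description-1", "obj-12"))   # -> True: no need to paint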
Item descriptions have three more useful qualities. First, recording item descriptions allows an agent to recognize that it is being requested to act on the same type of thing as it was before (for example, "you just asked me for a red fuel drum, so I'm not going to comply with the request"). Second, they can mimic higher-order logical propositions within a weaker, frame-based logic, so one can assert, for example, John believes p, without being forced to assert p. Third, they map closely onto natural language descriptions of objects. When an agent is asked (for example) to paint, in natural language, the red fuel drums, what is being asked maps closely to the item descriptions we have described, not just propositions about objects. This is especially true when linguistic phenomena such as definite and indefinite reference or anaphora are taken into consideration.
What we are arguing for is the use of information from three sources. First, there are sensing data from the real world, such as whether the vision system senses a blue object. Second, there are stored beliefs about objects, such as whether the memory system has recorded this object as being blue in color. Third, there is what we believe to be a distinct information source, item descriptions, such as whether the memory system has recorded that what is wanted is a blue object.
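A minimal sketch, under our own naming assumptions, of an agent state that keeps these three sources distinct:

# Hypothetical sketch: the three information sources as separate stores.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    sensor_data: list = field(default_factory=list)   # raw detections, e.g., from vision
    beliefs: set = field(default_factory=set)          # e.g., ("color", "obj-12", "blue")
    descriptions: set = field(default_factory=set)     # e.g., ("slot-value", "desc-1", "color", "blue")

state = AgentState()
state.sensor_data.append({"sensor": "vision", "hue": "blue"})        # sensing: a blue thing is seen
state.beliefs.add(("color", "obj-12", "blue"))                        # belief: obj-12 is blue
state.descriptions.add(("slot-value", "desc-1", "color", "blue"))     # description: a blue object is wanted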
Comments on the Original Submission
There were several things that were right with the original submission, several things that are clearly wrong, and several questions left open. We discuss these in turn.
What was right
The basic premise of the first submission, that "item descriptions add value to plans," is fundamentally right. In order for an agent to be able to judge among similar objects in the world, or to communicate to other agents in the world the differences among similar objects, or to track the state of desired end goal states, the agent will need some type of description language for those objects.
What was wrong
There were a few things wrong with the original submission. It ignored the complexities that arise when one has an item description language that essentially mimics the knowledge representation language. It overlooked the generativity that would result from using the same language for item description that one uses for knowledge representation.
Unexpected complexities. By using an item description language that essentially mimicked the knowledge representation language, unexpected complexities resulted. In particular, the plans needed for agents to act in the world required special care to consider whether the conditions under which they applied were "real" (that is, reflected in the state of the knowledge of the world) or "descriptive" (that is, using item descriptions).
For example, consider a plan to take an object to a location. Essentially, this is a two-place operator: the plan takes the "object" and the "location" as arguments. What we hoped to do with item descriptions is to make it easy to write plans of the following sort:
  Transport(object, location):
    If the agent knows what the object is,
      move to the location of the instance of the object;
    otherwise, ask where to find an instance of the object.
    Pick up the instance of the object.
    If the agent knows where the location is,
      move to the place described by the location;
    otherwise, ask where to find the location.
This pseudo-code is for example only; it's not necessarily the best way to achieve the goal of taking something to a location. This plan has two arguments: are they item descriptions or pointers to a packaged knowledge representation? It's complex to know which it should be, and we found ourselves writing plans of the sort:
  If object is an item description,
    find out x about the object;
  otherwise, find out y about the object.
where x and y were typically very similar predicates, usually finding out the same information, just on different types of representation. For example, we might write:
  If object is an item description,
    then if it item-isa halon-drum, do z;
  otherwise, if it isa halon-drum, do z.
where isa describes an instance/subclass to class relationship, and item-isa is true if the value of a describes predicate describes an instance/subclass to class relationship. For example, given (instance-of obj-47 halon-drum) and (describes id-3 halon-drum), both (isa obj-47 halon-drum) and (item-isa obj-47 halon-drum) would be true.
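For illustration only, a minimal sketch of how isa and item-isa could be evaluated over a small set of propositions; the tuple encoding is an assumption of the example, and in this simplified version item-isa is checked on the description token id-3 rather than on obj-47.

# Hypothetical sketch: isa asks about an object's class via instance-of facts;
# item-isa asks about the class that a description describes.
facts = {
    ("instance-of", "obj-47", "halon-drum"),  # a real object known to be a halon drum
    ("describes", "id-3", "halon-drum"),      # an item description of a halon drum
}

def isa(x, cls):
    """True if x is recorded as an instance of cls (subclass chains omitted)."""
    return ("instance-of", x, cls) in facts

def item_isa(x, cls):
    """True if x is an item description whose described kind is cls."""
    return ("describes", x, cls) in facts

print(isa("obj-47", "halon-drum"))     # -> True
print(item_isa("id-3", "halon-drum"))  # -> True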
This makes for very complex, difficult to comprehend plans. We even found ourselves befuddled by the plans we were writing. The plans become especially complex when multiple arguments are used, of course, because plan branches have to be repeated for each clause. It was also complex to pass the item descriptions or knowledge representations to subplans, for one had to be very careful to pass the right type of representation.
Missing Generativity. The obverse to this complexity is the lack of generativity of these plans. It became clearer to us that what we were calling "knowledge representation" and "item descriptions" were essentially the same thing. To write a plan such as the Transport plan shown above is just to use standard methods to represent the knowledge underlying the plan.
The moral. We desired to include item descriptions in plans so that agents could do (moderate) inference on the descriptions for disambiguation and goal tracking. We ended up with a cumbersome double representation scheme, which brought us back to using a standard single representation for facts about objects in the world and descriptions of objects in the world. Currently, then, we are using a single representational scheme (Fitzgerald, Wiseman, & Firby 1996).
Still, what we set out to accomplish with item descriptions remains. In a dynamically changing world, it is often important to remember what's been asked for in general terms, and not to commit prematurely to dereferencing what's being asked for. To use the previous example, an agent might be asked to paint all green drums red. It might be important to remember that one was asked to paint just those drums that were green. By the time it gets to painting a particular drum, another agent might have painted it red already, and now it doesn't have to. On the other hand, a drum whose color it didn't know might turn out to be green when the agent arrives, and so it knows to paint it red.
The moral, then, is this:
  Represent the world, but treat your representations lightly.
It is important for an agent to treat its own representations of the world as likely to go out of date in a changing world.
Open Questions
There are some major questions that are raised when moving to this common representation scheme.
Given two (sets of) descriptions about an object, when are you licensed to believe that they refer to the same object?
This is the matching problem. Consider the painting robot again. It senses something drum-like that is red, and has another description of a red drum. Does this description match the other description? Is it the same?
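A minimal sketch of one conservative answer, given as an assumption of ours rather than a solution: treat two descriptions as possibly co-referring only when nothing they both mention conflicts. Compatibility licenses at most a tentative identification, not a proof of identity.

# Hypothetical sketch: two descriptions are compatible (may refer to the same
# object) when every slot they both mention has the same value.
def compatible(desc_a, desc_b):
    shared = set(desc_a) & set(desc_b)
    return all(desc_a[slot] == desc_b[slot] for slot in shared)

sensed = {"kind": "drum", "color": "red"}    # what the robot just sensed
wanted = {"kind": "drum", "color": "red"}    # the description it carries
other  = {"kind": "drum", "color": "green"}

print(compatible(sensed, wanted))  # -> True: nothing rules out a match
print(compatible(sensed, other))   # -> False: the colors conflict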
Given two (sets of) descriptions about an object that refer to the same object, how does one combine the descriptions without losing information?
This is the merging problem. The key problem here is this: you might want to combine the results of two sensors so you have a larger set of beliefs about a particular object. But there may be plans which depend on one of the descriptions, which one does not want merged so deeply into another representation that it can't be recovered. We need to retain both the original representations as well as the merged representation.
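One way to do this, sketched under hypothetical names rather than as our implementation, is to store the merged view alongside the untouched originals, so a plan that depends on one source description can still recover it.

# Hypothetical sketch: merge two descriptions into a combined view while
# retaining both originals, so neither is lost inside the merged representation.
def merge(desc_a, desc_b):
    merged = {**desc_a, **desc_b}   # combined view (desc_b wins ties here)
    return {"merged": merged, "sources": [dict(desc_a), dict(desc_b)]}

from_vision = {"kind": "drum", "color": "red"}
from_sonar  = {"kind": "drum", "height-cm": 90}

record = merge(from_vision, from_sonar)
print(record["merged"])   # -> {'kind': 'drum', 'color': 'red', 'height-cm': 90}
print(record["sources"])  # the original descriptions remain recoverable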
When should representations be dereferenced to objects?
This is the dereferencing timing problem. On the one hand, one doesn't want to constantly expend computational energy making inferences on representations. On the other hand, one doesn't want to lose information along the way.
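One possible compromise, sketched here as an assumption of ours rather than a recommendation from this work, is to dereference lazily: resolve a description to a concrete object only when a plan step needs one, and cache the result so the inference is not repeated.

# Hypothetical sketch: lazy dereferencing with a cache.
from typing import Optional

def dereference(description, known_objects, cache) -> Optional[str]:
    key = tuple(sorted(description.items()))
    if key in cache:                       # already resolved earlier
        return cache[key]
    for obj_id, props in known_objects.items():
        if all(props.get(slot) == value for slot, value in description.items()):
            cache[key] = obj_id            # remember the resolution
            return obj_id
    return None                            # no match yet; keep the description around

objects = {"obj-12": {"kind": "fuel-drum", "color": "red"}}
cache = {}
print(dereference({"kind": "fuel-drum", "color": "red"}, objects, cache))  # -> obj-12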
How much representation should be constructed, and how should it be used?
It is important that plans be tractable, and so we don't want to allow unlimited inferencing to occur. Are there ways to write planners so they know just how much representation should be used? In our natural language work, this is driven by the "natural" ways there are to talk to an agent: if there is a natural, normal way to express something, we'd like to be able to represent that. The danger remains, though, of writing plans (or dynamically creating them) that can't be executed in a reasonable time frame.
Conclusions
We have tried to show that item descriptions add value to plans. These item descriptions allow an agent to make reasonable inferences about the objects it acts on; for example, to judge among ambiguous objects or to communicate specific goals. We currently believe that item descriptions are just representations of knowledge about objects in the world. We say this with two provisos. First, agents must take their own beliefs about objects in the world lightly, being ready to revise them as the world changes. Second, agents must be able to carry descriptions as they execute plans, and they must be able to communicate these descriptions to other agents.
Acknowledgments
The authors would like to thank John Wiseman for his invaluable assistance in the preparation of this paper.
References
Agre, P. E., and Chapman, D. 1990. What are plans for? Robotics and Autonomous Systems 6:17-34.
Firby, R. J. 1989. Adaptive Execution in Complex Dynamic Worlds. Ph.D. diss., Dept. of Computer Science, Yale University.
Fitzgerald, W.; Wiseman, J.; and Firby, R. J. 1996. Dynamic Predictive Memory Architecture: Conceptual Memory Architecture. Technical Report #1, Intell/Agent Systems, Evanston, IL.