Examples of Quantitative Modeling of Complex Computational Task Environments *

From: AAAI Technical Report WS-93-03. Compilation copyright © 1993, AAAI (www.aaai.org). All rights reserved.
Keith Decker and Victor Lesser
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
Email: DECKER@CS.UMASS.EDU
Abstract
There are many formal approaches to specifying how the mental state of an agent entails that it perform particular actions. These approaches put the agent at the center of analysis. For some questions and purposes, it is more realistic and convenient for the center of analysis to be the task environment, domain, or society of which agents will be a part. This paper presents such a task environment-oriented modeling framework that can work hand-in-hand with more agent-centered approaches. Our approach features careful attention to the quantitative computational interrelationships between tasks, to what information is available (and when) to update an agent's mental state, and to the general structure of the task environment rather than single-instance examples. This framework avoids the methodological problems of relying solely on single-instance examples, and provides concrete, meaningful characterizations with which to state general theories. Task environment models built in our framework can be used for both analysis and simulation to answer questions about how agents should be organized, or the effect of various coordination algorithms on agent behavior. This paper is organized around an example model of cooperative problem solving in a distributed sensor network.
This workshop submission is a shortened version of a much longer technical report available from the authors; see also our paper in the main conference proceedings [Decker and Lesser, 1993b].

* This work was supported by DARPA contract N00014-92-J1698, Office of Naval Research contract N00014-92-J1450, and NSF contract CDA 8922572. The content of the information does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.
Introduction
This paper presents an overview of a framework, TÆMS (Task Analysis, Environment Modeling, and Simulation), with which to model complex computational task environments, that is compatible with both formal agent-centered approaches [Cohen and Levesque, 1990; Shoham, 1991; Zlotkin and Rosenschein, 1990] and experimental approaches [Carver and Lesser, 1991; Cohen et al., 1989; Durfee et al., 1987; Pollack and Ringuette, 1990]. The framework allows us to both analyze and quantitatively simulate the behavior of single- or multi-agent systems with respect to interesting characteristics of the computational task environments of which they are part. We believe that it provides the correct level of abstraction for meaningfully evaluating centralized, parallel, and distributed control algorithms, negotiation strategies, and organizational designs. No previous characterization formally captures the range of features, processes, and especially interrelationships that occur in computationally intensive task environments.
We use the term computational task environment to refer to a problem domain in which the primary resources to be scheduled or planned for are the computational processing resources of an agent or agents, as opposed to physical resources such as materials, machines, or men. Examples of such environments are distributed sensor networks, distributed design problems, complex distributed simulations, and the control processes for almost any distributed or parallel AI application. A job-shop scheduling application is not a computational task environment. However, the control¹ of multiple distributed or large-grain parallel processors that are jointly responsible for solving a job-shop scheduling problem is a computational task environment. Distributed sensor networks use resources (such as sensors), but these resources are typically not the primary scheduling consideration. Computational task environments are the problem domain for control algorithms like many real-time and parallel local scheduling algorithms [Boddy and Dean, 1989; Garvey and Lesser, 1993; Russell and Zilberstein, 1991] and distributed coordination algorithms [Decker and Lesser, 1991; Durfee et al., 1987].

¹ Organization, planning, and/or scheduling of computation.
The reason we have created the TÆMS framework is rooted in the desire to produce general theories in AI [Cohen, 1991]. Consider the difficulties facing an experimenter asking under what environmental conditions a particular local scheduler produces acceptable results, or when the overhead associated with a certain coordination algorithm or organizational structure is acceptable given the frequency of particular environmental conditions. At the very least, our framework provides a characterization of environmental features and a concrete, meaningful language with which to state correlations, causal explanations, and other forms of theories. The careful specification of the computational task environment also allows the use of very strong analytic or experimental methodologies, including paired-response studies, ablation experiments, and parameter optimization. TÆMS exists as both a language for stating general hypotheses or theories and as a system for simulation. The simulator supports the graphical display of generated subjective and objective task structures, agent actions, and statistical data collection in CLOS on the TI Explorer.
The basic form of the computational task environment framework, the execution of interrelated computational tasks, is taken from several domain environment simulators [Carver and Lesser, 1991; Cohen et al., 1989; Durfee et al., 1987]. If this were the only impetus, the result might have been a simulator like Tileworld [Pollack and Ringuette, 1990]. However, formal research into multi-agent problem solving has been productive in specifying formal properties, and sometimes algorithms, for the control process by which the mental state of agents (termed variously: beliefs, desires, goals, intentions, etc.) causes the agents to perform particular actions [Cohen and Levesque, 1990; Shoham, 1991; Zlotkin and Rosenschein, 1990]. This research has helped to circumscribe the behaviors or actions that agents can produce based on their knowledge or beliefs. The final influence on TÆMS was the desire to avoid the individualistic agent-centered approaches that characterize most AI (which may be fine) and DAI (which may not be so fine). The concept of agency in TÆMS is based on simple notions of execution, communication, and information gathering. An agent is a locus of belief (state) and action. By separating the notion of agency from the model of task environments, we do not have to subscribe to particular agent architectures (which one would assume will be adapted to the task environment at hand), and we may ask questions about the inherent social nature of the task environment at hand (allowing that the concept of society may arise before the concept of individual agents).
The next section discusses the general nature of the three modeling framework layers. The sections that follow discuss the details of the three levels, and are organized around a model built with this framework for the study of organizational design and coordination strategies in a multi-agent distributed sensor network environment.
General Framework
The principal purpose of a TÆMS model is to analyze, explain, or predict the performance of a system or some component. While TÆMS does not establish a particular performance criterion, it focuses on providing two kinds of performance information: the temporal intervals of task executions, and the quality of the execution or its result. Quality is an intentionally vaguely-defined term that must be instantiated for a particular environment and performance criterion. Examples of quality measures include the precision, belief, or completeness of a task result. We will assume that quality is a single numeric term with an absolute scale, although the algebra can be extended to vector terms. In a computationally intensive AI system, several quantities are affected by the execution of other tasks: the quality produced by executing a task, the time taken to perform that task, the time when a task can be started, its deadline, and whether the task is necessary at all. In real-time problem solving, alternate task execution methods may be available that trade off time for quality. Agents do not have unlimited access to the environment; what an agent believes and what is really there may be different.
The model of environmental and task characteristics proposed here has three levels: objective, subjective, and generative. The objective level describes the essential, 'real' task structure of a particular problem-solving situation or instance over time. It is roughly equivalent to a formal description of a single problem-solving situation such as those presented in [Durfee and Lesser, 1991], without the information about particular agents. The subjective level describes how agents view and interact with the problem-solving situation over time (e.g., how much does an agent know about what is really going on, how much does it cost to find out, and where the uncertainties are from the agent's point of view). The subjective level is essential for evaluating control algorithms, because while individual behavior and system performance can be measured objectively, agents must make decisions with only subjective information.² Finally, the generative level describes the statistical characteristics required to generate the objective and subjective situations in a domain.

² In organizational theoretic terms, subjective perception can be used to predict agent actions or outputs, but unperceived, objective environmental characteristics can still affect performance (or outcomes) [Scott, 1987].
Objective Level
The objective level describes the essential structure of a particular problem-solving situation or instance over time. It focuses on how task interrelationships dynamically affect the quality and duration of each task. The basic model is that task groups appear in the environment at some frequency, and induce tasks T to be executed by the agents under study. Task groups are independent of one another, but tasks within a single task group have interrelationships. Task groups or tasks may have deadlines D(T). The quality of the execution or result of each task influences the quality of the task group result Q(T) in a precise way. These quantities can be used to evaluate the performance of a system.
An individual task that has no subtasks is called a method M; it is the smallest schedulable chunk of work (though some scheduling algorithms will allow some methods to be preempted, and some schedulers will schedule at multiple levels of abstraction). There may be more than one method to accomplish a task, and each method will take some amount of time and produce a result of some quality. Quality of an agent's performance on an individual task is a function of the timing and choice of agent actions ('local effects'), and possibly of previous task executions ('non-local effects').³ The basic purpose of the objective model is to formally specify how the execution and timing of tasks affect this measure of quality.

³ When local or non-local effects exist between tasks that are known by more than one agent, we call them coordination relationships [Decker and Lesser, 1991].
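As an illustration of these definitions, the sketch below encodes an objective task structure as a small set of Python classes: methods carry a duration and a maximum quality, tasks combine the qualities of their methods and subtasks with a quality accrual function, and non-local effects such as enables are recorded as edges between tasks. This is only a minimal sketch of the ideas in this section, not the TÆMS implementation (which was written in CLOS); all names and conventions here are illustrative.

```python
# Minimal sketch of an objective task structure (illustrative, not the TAEMS code).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Method:
    """Smallest schedulable chunk of work: has a duration and a maximum quality."""
    name: str
    duration: float
    max_quality: float

@dataclass
class Task:
    """A task accrues quality from its methods and subtasks via a quality accrual function."""
    name: str
    accrual: Callable[[List[float]], float]                # e.g. min, max, sum
    subtasks: List["Task"] = field(default_factory=list)
    methods: List[Method] = field(default_factory=list)
    deadline: float = float("inf")                         # D(T)
    enables: List["Task"] = field(default_factory=list)    # non-local effect: this task enables others

    def quality(self, executed_quality: Dict[str, float]) -> float:
        """Q(T): combine the qualities achieved so far by methods and subtasks."""
        achieved = [executed_quality.get(m.name, 0.0) for m in self.methods]
        achieved += [t.quality(executed_quality) for t in self.subtasks]
        return self.accrual(achieved) if achieved else 0.0
```

A task group is then just a root Task whose quality Q(T) is evaluated after some set of method executions; deadlines and enables edges constrain which executions are useful and when.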
A complete presentation of the mathematical details of an objective model can be found in our main conference paper [Decker and Lesser, 1993b]. We will concentrate here on examples.
Objective Modeling Example
This example grows out of the set of single-instance examples of distributed sensor network (DSN) problems presented in [Durfee et al., 1987], which were solved using the Distributed Vehicle Monitoring Testbed (DVMT) [Lesser and Corkill, 1983]. The authors of that paper compared the performance of several different coordination algorithms on these examples, and concluded that no one algorithm was always the best. This is the classic type of result that the TÆMS framework was created to address: we wish to explain this result, and better yet, to predict which algorithm will do best in each situation. The level of detail to which you build your model will depend on the question you wish to answer; we wish to identify the characteristics of the DSN environment, or the organization of the agents, that cause one algorithm to outperform another. Even more importantly (as will be addressed below), we are interested not just in single-instance examples, but in the general distribution of episodes in the environment, in order to analyze the relative merits of different approaches.
In a DSN problem, the movements of several independent vehicles will be detected over a period of time by one or more distinct sensors, where each sensor is associated with an agent. The performance of agents in such an environment is based on how long it takes them to create complete vehicle tracks, including the cost of communication. The organizational structure of the agents will imply the portions of each vehicle track that are sensed by each agent.
In our model of DSN problems, each vehicle track is modeled as a task group. The simplest objective model is that each task group T is associated with a track of length l and has the following objective structure, based on a simplified version of the DVMT [Lesser and Corkill, 1983]: (l) vehicle location methods (VLM) that represent processing raw signal data at a single location, each resulting in a single vehicle location hypothesis; (l - 1) vehicle tracking methods (VTM) that represent short tracks connecting the results of the VLM at time t with the results of the VLM at time t + 1; and (1) vehicle track completion method (VCM) that represents merging all the VTMs together into a complete vehicle track hypothesis. Non-local enablement effects exist as shown in Figure 1: two VLMs enable each VTM, and all VTMs enable the lone VCM.
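To make the counting concrete, the sketch below builds this simplest single-track structure for a given track length: l VLMs, l - 1 VTMs each enabled by two adjacent VLMs, and one VCM enabled by every VTM. It is a hypothetical helper using plain dictionaries; the names are ours, not DVMT or TÆMS identifiers.

```python
# Illustrative construction of the simplest objective structure for one vehicle track.
def build_track_task_group(length: int) -> dict:
    """Return the methods and 'enables' edges for a track of the given length."""
    vlms = [f"VLM{t}" for t in range(1, length + 1)]          # one VLM per sensed time point
    vtms = [f"VTM{t}-{t+1}" for t in range(1, length)]        # one VTM per adjacent pair
    vcm = "VCM"                                               # single track-completion method
    enables = []
    for t in range(1, length):
        enables.append((f"VLM{t}", f"VTM{t}-{t+1}"))          # two VLMs enable each VTM
        enables.append((f"VLM{t+1}", f"VTM{t}-{t+1}"))
    enables += [(v, vcm) for v in vtms]                       # all VTMs enable the lone VCM
    return {"VLM": vlms, "VTM": vtms, "VCM": [vcm], "enables": enables}

# For a track of length 3: 3 VLMs, 2 VTMs, 1 VCM, and 6 enables edges.
example = build_track_task_group(3)
```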
We have used this model to develop expressions for the expected value of, and confidence intervals on, the time of termination of a set of agents in any arbitrary DSN environment that has a static organizational structure and coordination algorithm [Decker and Lesser, 1993a]. We have also used this model to analyze a dynamic, one-shot reorganization algorithm (and have shown when the extra overhead is worthwhile versus the static algorithm). In each case we can predict the effects of adding more agents, changing the relative cost of communication and computation, and changing how the agents are organized. These results were achieved by direct mathematical analysis of the model and verified through simulation in TÆMS. We will give a summary of these results later in the paper, after discussing the subjective and generative levels.
Expanding the Model. We will now add some complexity to the model. The length of a track l above is a generative-level parameter. Given a set of these generative parameters, we can construct the objective model for a specific problem-solving instance, or episode. Figure 1 shows the general structure of episodes in our DSN environment model, rather than a particular episode. To display an actual objective model, let us assume a simple situation: there are two agents, A and B, and there is one vehicle track of length 3, sensed once by A alone (T1), once by both A and B (T2), and once by B alone (T3). We now proceed to model the standard features that have appeared in our DVMT work for the past several years. We will add the characteristic that each agent has two methods with which to deal with sensed data: a normal VLM and a 'level-hopping' (LH) VLM (the level-hopping VLM produces less quality than the full method but requires less time; see [Decker et al., 1990; Decker et al., 1993] for this and other approximate methods). Furthermore, only the agent that senses the data can execute the associated VLM, but any agent can execute VTMs and VCMs if the appropriate enablement conditions are met.
Figure 2 displays this particular problem-solving episode. To the description above, we have added the fact that agent B has a faulty sensor (the durations of the grayed methods will be longer than normal); we will explore the implications of this after we have discussed the subjective level of the framework in the next section. An assumption made in [Durfee et al., 1987] is that redundant work is not generally useful; this is indicated by using max as the combination function for each agent's redundant methods. We could alter this assumption by simply changing this function (to mean, for example). Another characteristic that appeared often in the DVMT literature is the sharing of results between methods (at a single agent); we would indicate this by the presence of a sharing relationship (similar to facilitates) between each pair of normal and level-hopping VLMs. Sharing of results could be only one-way between methods.
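The choice of combination function is simply a parameter of the model; the fragment below illustrates, under the same hypothetical naming as the earlier sketches, how swapping max for mean changes the value a task derives from two redundant methods.

```python
# Illustrative: how the quality accrual function changes the value of redundant work.
from statistics import mean

redundant_method_qualities = [0.8, 0.5]   # e.g. a normal VLM and a level-hopping VLM result

q_if_redundancy_useless = max(redundant_method_qualities)   # assumption in [Durfee et al., 1987]
q_if_redundancy_helps = mean(redundant_method_qualities)    # alternative assumption

print(q_if_redundancy_useless, q_if_redundancy_helps)       # 0.8 vs 0.65
```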
Now we will add two final features that make this model more like the DVMT. First, low-quality results tend to make things harder to process at higher levels. For example, the impact of using the level-hopping VLM is not just that its quality is lower, but also that it affects the quality and duration of the VTM it enables (because not enough possible solutions are eliminated). To model this, we will use the precedence relationship instead of enables: not only do the VLM methods enable the VTM, but they can also hinder its execution if the enabling results are of low quality. Secondly, the first VLM execution provides information that slightly shortens the executions of other VLMs in the same vehicle track (because the sensors have been properly configured with the correct signal processing algorithm parameters with which to sense that particular vehicle). A similar facilitation effect occurs at the tracking level. These effects occur both locally and when results are shared between agents; in fact, this effect is very important in motivating the agent behavior where one agent sends preliminary results to another agent with bad sensor data to help the receiving agent in disambiguating that data.

Figure 1: Objective task structure associated with a single vehicle track. (Legend: methods (executable tasks); tasks with quality accrual function min; subtask relationships; enables relationships.)

Figure 2: Objective task structure associated with two agents. T is the highest-level task: to generate a single vehicle track. It has three subtasks: a task to generate a track from time 1 to time 2, a task to generate a track from time 2 to time 3, and a task to join these results into a full track. The subtasks each have a similar structure, again with three subtasks: process the data at time 1, process the data at time 2, and generate the track from time 1 to time 2. Many of the low-level tasks have methods that can be executed by either agent, and sometimes each agent has multiple methods (see text). (Legend: methods (executable tasks); faulty sensor methods; tasks with quality accrual function min; subtask relationships; enables relationships.)
Figure 3 repeats the objective task structure from the previous figure, but omits the methods for clarity. Two new tasks have been added to model facilitation at the vehicle location and vehicle track levels.⁴ TVL indicates the highest-quality initial work that has been done at the vehicle level, and thus uses the quality accrual function maximum. TVT indicates the progress on the full track; it uses summation as its quality accrual function. The more tracking methods are executed, the easier the remaining ones become. The implication of this model is that in a multi-agent episode the question becomes when to communicate partial results to another agent: the later an agent delays communication, the more the potential impact on the other agent, but the more the other agent must delay. We examined this question somewhat in [Decker and Lesser, 1991].

⁴ Note that these tasks were added to make the model more expressive; they are not associated with new methods.

Figure 3: Non-local effects in the objective task structure. This is an expansion of the previous figure with the methods removed for clarity. Two new tasks have been added: one to represent the amount of work that has been done on individual tracks (TVT), and one to represent the best work done on the initial data (TVL). These new abstract tasks are used to indicate facilitation effects: information from the first VL method can be used to speed up other VL methods (by providing constraints on vehicle type, for example), and the more tracking information is available, the easier tracking becomes as the vehicle's path is constrained. (Legend: tasks with quality accrual function min; subtask relationships; precedence relationships; facilitates relationships.)
Other effects, such as how early results facilitate later ones, can also be modeled. At the end of the next section, we will return to this example and add to it subjective features: what information is available to agents, when, and at what cost.
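Before turning to the subjective level, here is a small sketch of how a facilitation effect of the kind just described might be represented: the duration of a method is reduced as a function of the quality already accrued by a facilitating task. The functional form and the parameter name are assumptions of ours for illustration; TÆMS itself defines such non-local effects more generally.

```python
# Illustrative sketch of a facilitation (non-local) effect on method duration.
def facilitated_duration(base_duration: float,
                         facilitator_quality: float,
                         facilitator_max_quality: float,
                         strength: float = 0.3) -> float:
    """Reduce a method's duration as the facilitating task accrues quality.

    'strength' is a hypothetical parameter: the maximum fractional speed-up
    obtained when the facilitating task has reached its maximum quality.
    """
    if facilitator_max_quality <= 0:
        return base_duration
    progress = min(facilitator_quality / facilitator_max_quality, 1.0)
    return base_duration * (1.0 - strength * progress)

# A VLM that normally takes 10 time units runs faster once earlier VLMs in the
# same track have produced results (8.5 units at 50% of the facilitator's quality).
print(facilitated_duration(10.0, 0.5, 1.0))
```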
Subjective Level
The purpose of a subjective level model of an environment is to describe what portions of the objective model of the situation are available to 'agents'. It answers questions such as "when is a piece of information available," "to whom is it available," and "what is the cost to the agent of that piece of information." This is a description of how agents might interact with their environment, and of what options are available to them.
To build such a description we must introduce the concept of agency into the model. Ours is one of the few comprehensive descriptions of computational task environments, but there are many formal and informal descriptions of the concept of agency (see [Gasser, 1991; Hewitt, 1991]). Rather than add our own description, we note that these formulations define the notion of computation at one or more agents, not the environment that the agents are part of. Most formulations contain a notion of belief that can be applied to our concept of "what an agent believes about its environment." Our view is that an "agent" is a locus of belief and action (such as computation).
A subjective mapping of an objective problem-solving situation is a function φ from an agent and objective assertions to the beliefs of that agent. For example, we could define a mapping φ where each agent has a probability p of believing that the maximum quality of a method is the objective value, and a probability 1 - p of believing that the maximum quality is twice the objective value. Any objective assertion has some subjective mapping, including q (maximum quality of a method), d (duration of a method), deadlines, and relationships like subtask or non-local effects.
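The example mapping in the previous paragraph can be written down directly; the sketch below is a minimal, hypothetical rendering in which the subjective view of a method's maximum quality is the true value with probability p and twice the true value otherwise.

```python
# Illustrative subjective mapping phi for the example in the text.
import random

def subjective_max_quality(objective_q: float, p: float = 0.9) -> float:
    """With probability p the agent believes the true maximum quality q;
    with probability 1 - p it believes the maximum quality is 2q."""
    return objective_q if random.random() < p else 2.0 * objective_q

# An agent's belief about a method whose objective maximum quality is 1.0.
believed_q = subjective_max_quality(1.0, p=0.9)
```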
The beliefs of an agent affect its actions through some control mechanism. Since this is the focus of most of our own and others' research on local scheduling, coordination, and other control issues, we will not discuss it further here. The agent's control mechanism uses the agent's current set of beliefs to update three special subsets of those beliefs (alternatively, commitments [Shoham, 1991]), identified as the sets of information gathering, communication, and method execution actions to be computed. We provide a meta-structure for the agent's state-transition function that is divided into the following four parts: control, information gathering, communication, and method execution. First the control mechanism asserts (commits to) information-gathering, communication, and method execution actions, and then these actions are computed one at a time, after which the cycle of meta-states repeats. Details about the state changes associated with these actions can be found in [Decker and Lesser, 1993b].
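A minimal sketch of that meta-structure, assuming a hypothetical Agent class and leaving the control policy abstract, might look like the following; the actual state-transition details are given in [Decker and Lesser, 1993b].

```python
# Illustrative sketch of the four-part agent cycle described in the text.
class Agent:
    def __init__(self, control_policy):
        self.beliefs = {}                     # the agent's current beliefs (state)
        self.control_policy = control_policy  # maps beliefs to committed actions

    def cycle(self):
        """One pass of the meta-states: control, then the committed actions."""
        # 1. Control: commit to the three special subsets of beliefs (actions).
        info_actions, comm_actions, exec_actions = self.control_policy(self.beliefs)
        # 2-4. Information gathering, communication, and method execution,
        #      computed one at a time; each may update the agent's beliefs.
        for action in list(info_actions) + list(comm_actions) + list(exec_actions):
            action(self.beliefs)
        # After these actions, the cycle of meta-states repeats.
```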
Subjective Modeling Example
Let us return to the example we began earlier to demonstrate how adding a subjective level to the model allows us to represent the effects of faulty sensors in the DVMT. We will define the default subjective mapping to simply return the objective value; i.e., agents will believe the true objective quality and duration of methods and their local and non-local effects. We then alter this default for the case of faulty (i.e., noisy) sensors: an agent with a faulty sensor will not initially realize it (d₀(faulty-VLM) = 2d₀(VLM), but φ(A, d₀(faulty-VLM)) = d₀(VLM)).⁵ Other subjective-level artifacts that are seen in [Durfee et al., 1987] and other DVMT work can also be modeled easily in our framework. For example, 'noise' can be viewed as VLM methods that are subjectively believed to have a non-zero maximum quality (φ(A, q₀(noise-VLM)) > 0) but in fact have zero objective maximum quality, which the agent does not discover until after the method is executed. The strength with which initial data is sensed can be modeled by lowering the subjectively perceived value of the maximum quality q for weakly sensed data. The infamous 'ghost track' is a subjectively complete task group appearing to an agent as an actual vehicle track, which subjectively accrues quality until the hapless agent executes the VCM method, at which point the true (zero) quality becomes known. If the track (subjectively) spans multiple agents' sensor regions, the agent can potentially identify the chimeric track through communication with the other agents, which may have no belief in such a track (but sometimes more than one agent suffers the same delusion).

⁵ At this point, one should imagine an agent controller for this environment that notices when a VLM method takes unusually long, realizes that the sensor is faulty, and replans accordingly.
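As a concrete illustration of the faulty-sensor mapping just described, the sketch below (hypothetical names, drawn only from this paragraph) gives a faulty VLM an objective duration of twice the normal value while the agent's subjective view still reports the normal duration; the discrepancy only becomes observable once the method has actually been executed.

```python
# Illustrative subjective vs. objective view of a faulty sensor's VLM.
NORMAL_VLM_DURATION = 10.0

objective_duration = {
    "VLM-A": NORMAL_VLM_DURATION,        # agent A's sensor is fine
    "VLM-B": 2 * NORMAL_VLM_DURATION,    # agent B's sensor is faulty: d0 doubles
}

def phi_duration(agent: str, method: str) -> float:
    """Default subjective mapping: believe the objective value, except that an
    agent with a faulty sensor initially believes the normal duration."""
    if agent == "B" and method == "VLM-B":
        return NORMAL_VLM_DURATION       # B does not yet realize the fault
    return objective_duration[method]

# Only after executing VLM-B (taking 20 time units instead of the believed 10)
# can B's controller notice the discrepancy and replan accordingly.
```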
Generative Level
By using the objective and subjective levels of TÆMS we can model any individual situation; adding a generative level to the model allows us to go beyond that and determine what the expected performance of an algorithm is over a long period of time and many individual problem-solving episodes. Our previous work has created generative models of task interarrival times (exponential distribution), amount of work in a task cluster (Poisson), task durations (exponential), and the likelihood of a particular non-local effect between two tasks [Decker and Lesser, 1993a; Decker and Lesser, 1991; Garvey and Lesser, 1993]. Generative-level statistical parameters can also be used by agents in their subjective reasoning; for example, an agent may make control decisions based on knowledge of the expected duration of methods.
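The distributions named above are straightforward to sample; the sketch below draws one hypothetical episode (the rate and mean parameters are ours, purely for illustration) and reports the counts of VLM, VTM, and VCM methods that the single-track structure described earlier would induce.

```python
# Illustrative generative-level sampling of one task group (parameter values are made up).
import math
import random

def sample_poisson(lam: float) -> int:
    """Poisson sampler (Knuth's method); the standard library has no built-in one."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def sample_episode(mean_interarrival: float = 5.0,
                   mean_track_length: float = 4.0,
                   mean_method_duration: float = 10.0) -> dict:
    """Draw one task group using the kinds of distributions mentioned in the text."""
    arrival_gap = random.expovariate(1.0 / mean_interarrival)    # exponential interarrival time
    track_length = max(2, sample_poisson(mean_track_length))     # Poisson amount of work
    vlm_durations = [random.expovariate(1.0 / mean_method_duration)
                     for _ in range(track_length)]               # exponential method durations
    return {"arrival_gap": arrival_gap,
            "num_VLM": track_length,
            "num_VTM": track_length - 1,
            "num_VCM": 1,
            "VLM_durations": vlm_durations}

episode = sample_episode()
```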
A generative-level model can be constructed by careful analysis of the real environment being modeled, or by observing the statistical properties of real episodes (if that is possible). Even when certain parameters of the real world are unknown, they can be made variables in the model, and then you can ask questions about how much they affect the things you care about. Our approach so far has been to verify our assumptions about the environment with simple statistical approaches [Kleijnen, 1987]. Detailed model verification will be more important when using our framework to optimize parameters in a real application, as opposed to learning the general effects of parameters on a coordination or negotiation algorithm (see the next section).
Analysis Summary
Organizational theorists have long held that the organization of a set of agents cannot be analyzed separately from the agents' task environment, that there is no single best organization for all environments, and that different organizations are not equally effective in a given environment [Galbraith, 1977]. Most of these theorists view the uncertainties present in the environment as a key characteristic, though they differ in the mechanisms that link environmental uncertainty to effective organization. In particular, the transaction cost economics approach [Moe, 1984] focuses on the relative efficiencies of various organizations given an uncertain environment, while updated versions of the contingency theory approach [Stinchcombe, 1990] focus on the need for an organization to expand toward the earliest available information that resolves uncertainties in the current environment.
We have used these general concepts, in conjunction with our modeling framework, to analyze potential organizational structures for the environment that has been our example in this paper, i.e., the class of naturally distributed, homogeneous, cooperative problem-solving environments where tasks arrive at multiple locations, exemplified by distributed sensor networks. Our approach was to first construct the task environment model we have been using as an example in this paper, and then to develop expressions for the expected efficiencies of static and dynamic organizational structures, in terms of performance: the cost of communication and the time to complete a given set of tasks. Finally, we validated these mathematical models through simulation.
We briefly showed at the end of the objective modeling example how we can predict the performance of a system given the objective and subjective models of a particular episode. This is very useful for explaining or predicting agent behavior in a particular episode or scenario, but not over many episodes in a real environment. To do this, we build probabilistic models of the relevant objective and subjective parameters (now viewed as random variables) that are based on generative-level parameters. Another paper [Decker and Lesser, 1993a] details this process, and shows how the distributions of objective parameters such as "the number of VLM methods seen by the maximally loaded agent" (S) and "the maximum number of task groups seen by the same agent" (N) can be defined from just the generative parameters.
For example, the total time until termination for an agent receiving an initial data set of size S is the time to do local work, combine results from other agents, and build the completed results, plus two communication and information gathering actions:

t_total = S·d₀(VLM) + (S − N)·d₀(VTM) + (a − 1)·N·d₀(VTM) + N·d₀(VCM) + 2·d₀(I) + 2·d₀(C)    (1)
We can use Eq. 1 as a predictor by combining it with the probabilities for the values of S and N. Again, we refer the interested reader to [Decker and Lesser, 1993a] for derivations, verification, and applications of these results. Note that if the assumptions behind our generative model change (for instance, if we assume all agents initially line up side-by-side instead of in a square, or if vehicles make a loop before exiting the sensed area), the probability distributions for S and N might change, but the form of Eq. 1 does not. If the agents' coordination algorithm changes, then Eq. 1 will change (see [Decker and Lesser, 1993a]).
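Read as code, Eq. 1 is just a sum of duration terms; the sketch below evaluates it for given values of S, N, the number of agents a, and the method and action durations. The function name and example values are ours for illustration, and the term structure follows the equation as reconstructed above.

```python
# Illustrative evaluation of the termination-time expression (Eq. 1 as reconstructed above).
def termination_time(S: int, N: int, a: int,
                     d_vlm: float, d_vtm: float, d_vcm: float,
                     d_info: float, d_comm: float) -> float:
    """Total time until termination for the agent receiving S initial data points
    spread over N task groups, in an organization of a agents."""
    local_work = S * d_vlm + (S - N) * d_vtm   # process local data, connect local tracks
    combine_remote = (a - 1) * N * d_vtm       # connect results received from other agents
    build_results = N * d_vcm                  # complete one track per task group
    overhead = 2 * d_info + 2 * d_comm         # two information-gathering and communication actions
    return local_work + combine_remote + build_results + overhead

# Example: S = 6 locations over N = 2 tracks, a = 4 agents, unit-free durations.
print(termination_time(6, 2, 4, d_vlm=10, d_vtm=5, d_vcm=8, d_info=1, d_comm=2))
```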
We have looked at both static organizations and dynamic organizations (in which the responsibilities of agents can be reassigned based on a developing view of the problem at hand). Due to the uncertainties explicitly represented in the task environment model, there may not be a clear performance tradeoff between static and dynamic organizational structures. Agents that have a dynamic organization have the option of meta-level communication: communicating about the current state of problem solving as opposed to communicating about solving the problem itself. In this way, information that resolves uncertainties about the current environment becomes available to the agents, allowing the agents to then create the most efficient organization for the situation. In [Decker and Lesser, 1993a] we present equations similar to Eq. 1 that show the potential benefits of dynamic reorganization in an arbitrary environment, and discuss when the overhead of meta-level communication is worthwhile.
Conclusions
This paper has presented an overview of TÆMS, a framework for modeling computationally intensive task environments. TÆMS exists as both a language for stating general hypotheses or theories and as a system for simulation. The important features of TÆMS include its layered description of environments (objective reality, subjective mapping to agent beliefs, and a generative description of the other levels across single instances); its acceptance of any performance criteria (based on the temporal location and quality of task executions); and its non-agent-centered point of view that can be used by researchers working in either formal systems of mental-state-induced behavior or experimental methodologies. TÆMS provides environmental and behavioral structures and features with which to state and test theories about the control of agents in complex computational domains, such as how decisions made in scheduling one task will affect the utility and performance characteristics of other tasks.
TÆMS is not only a mathematical framework, but also a simulation language for executing and experimenting with models directly. The TÆMS simulator supports the graphical display of generated subjective and objective task structures, agent actions, and statistical data collection in CLOS on the TI Explorer. These features help in both the model-building stage and the verification stage. The TÆMS simulator is being used not only for research into the organization and coordination of distributed problem solvers [Decker and Lesser, 1993a; Decker and Lesser, 1991; Decker and Lesser, 1992], but also for research into real-time scheduling of a single agent [Garvey and Lesser, 1993], scheduling at an agent with parallel processing resources available, and soon, learning coordination algorithm parameters.
TÆMS does not at this time automatically learn models or automatically verify them. While we have taken initial steps at designing a methodology for verification (see [Decker and Lesser, 1993a]), this is still an open area of research [Cohen, 1991]. Our future work will include building new models of different environments that may include physical resource constraints, such as airport resource scheduling. The existing framework may have to be extended somewhat to handle consumable resources. Other extensions we envision include specifying dynamic objective models that change structure as the result of agent actions. We also wish to expand our analyses beyond the questions of scheduling and coordination to questions about negotiation strategies, emergent agent/society behavior, and organizational self-design.
References
Boddy, Mark and Dean, Thomas 1989. Solving time-dependent planning problems. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence.

Carver, Norman and Lesser, Victor 1991. A new framework for sensor interpretation: Planning to resolve sources of uncertainty. In Proceedings of the Ninth National Conference on Artificial Intelligence. 724-731.

Cohen, Philip R. and Levesque, Hector J. 1990. Intention is choice with commitment. Artificial Intelligence 42(3).

Cohen, Paul; Greenberg, Michael; Hart, David; and Howe, Adele 1989. Trial by fire: Understanding the design requirements for agents in complex environments. AI Magazine 10(3):33-48. Also COINS-TR-89-61.

Cohen, Paul R. 1991. A survey of the eighth national conference on artificial intelligence: Pulling together or pulling apart? AI Magazine 12(1):16-41.

Decker, Keith S. and Lesser, Victor R. 1991. Analyzing a quantitative coordination relationship. COINS Technical Report 91-83, University of Massachusetts. To appear in the journal Group Decision and Negotiation, 1993.

Decker, Keith S. and Lesser, Victor R. 1992. Generalizing the partial global planning algorithm. International Journal of Intelligent and Cooperative Information Systems 1(2).

Decker, Keith S. and Lesser, Victor R. 1993a. An approach to analyzing the need for meta-level communication. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, Chambéry.

Decker, Keith S. and Lesser, Victor R. 1993b. Quantitative modeling of complex computational task environments. In Proceedings of the Eleventh National Conference on Artificial Intelligence, Washington.

Decker, Keith S.; Lesser, Victor R.; and Whitehair, Robert C. 1990. Extending a blackboard architecture for approximate processing. The Journal of Real-Time Systems 2(1/2):47-79.

Decker, Keith S.; Garvey, Alan J.; Humphrey, Marty A.; and Lesser, Victor R. 1993. A real-time control architecture for an approximate processing blackboard system. International Journal of Pattern Recognition and Artificial Intelligence 7(2).

Durfee, E. H. and Lesser, V. R. 1991. Partial global planning: A coordination framework for distributed hypothesis formation. IEEE Transactions on Systems, Man, and Cybernetics 21(5):1167-1183.

Durfee, Edmund H.; Lesser, Victor R.; and Corkill, Daniel D. 1987. Coherent cooperation among communicating problem solvers. IEEE Transactions on Computers 36(11):1275-1291.

Galbraith, J. 1977. Organizational Design. Addison-Wesley, Reading, MA.

Garvey, Alan and Lesser, Victor 1993. Design-to-time real-time scheduling. IEEE Transactions on Systems, Man, and Cybernetics 23(6). Special Issue on Scheduling, Planning, and Control.

Gasser, Les 1991. Social conceptions of knowledge and action. Artificial Intelligence 47(1):107-138.

Hewitt, Carl 1991. Open information systems semantics for distributed artificial intelligence. Artificial Intelligence 47(1):79-106.

Kleijnen, Jack P. C. 1987. Statistical Tools for Simulation Practitioners. Marcel Dekker, New York.

Lesser, Victor R. and Corkill, Daniel D. 1983. The distributed vehicle monitoring testbed. AI Magazine 4(3):63-109.

Moe, Terry M. 1984. The new economics of organization. American Journal of Political Science 28(4):739-777.

Pollack, Martha E. and Ringuette, Marc 1990. Introducing Tileworld: Experimentally evaluating agent architectures. In Proceedings of the Eighth National Conference on Artificial Intelligence. 183-189.

Russell, Stuart J. and Zilberstein, Shlomo 1991. Composing real-time systems. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, Sydney, Australia. 212-217.

Scott, W. Richard 1987. Organizations: Rational, Natural, and Open Systems. Prentice-Hall, Inc., Englewood Cliffs, NJ.

Shoham, Yoav 1991. AGENT0: A simple agent language and its interpreter. In Proceedings of the Ninth National Conference on Artificial Intelligence, Anaheim. 704-709.

Stinchcombe, Arthur L. 1990. Information and Organizations. University of California Press, Berkeley, CA.

Zlotkin, Gilad and Rosenschein, Jeffrey S. 1990. Blocks, lies, and postal freight: The nature of deception in negotiation. In Proceedings of the Tenth International Workshop on Distributed AI, Texas.