TDDD10 AI Programming
Agents and Agent Architectures
Cyrille Berger

Lectures
1. AI Programming: Introduction
2. Introduction to RoboRescue
3. Agents and Agent Architectures
4. Communication
5. Multiagent Decision Making
6. Cooperation and Coordination 1
7. Cooperation and Coordination 2
8. Machine Learning
9. Knowledge Representation
10. Putting It All Together
Lecture goals
- Acquire knowledge of what an agent is.
- Acquire knowledge of the different agent architectures and how they make decisions.

Lecture content
- Agents
- An Overview of Decision Making
- Agent Architectures
  - Deliberative Architecture
  - Reactive Architecture
  - State Machines
  - Hybrid Architecture
- Summary
Agents

What is an agent?

An agent is a (computer) system that is situated in some environment and that is capable of autonomous action in this environment in order to meet its delegated objectives.

Agents are autonomous: capable of acting independently and exhibiting control over their internal state.

Should an agent be able to learn? Should an agent be intelligent?
Intelligent Agent Properties

Reactivity
- Intelligent agents are able to perceive their environment, and respond in a timely fashion to changes that occur in it in order to meet their delegated objectives.

Proactivity
- Intelligent agents are able to exhibit goal-directed behaviour by taking the initiative in order to meet their delegated objectives.

Social Ability
- Intelligent agents are capable of interacting (cooperating, coordinating and negotiating) with other agents (and possibly humans) in order to meet their delegated objectives.
- Cooperation is working together as a team to achieve a shared goal. Often prompted either by the fact that no one agent can achieve the goal alone, or that cooperation will obtain a better result (e.g., get the result faster).
- Coordination is managing the interdependencies between activities.
- Negotiation is the ability to reach agreements on matters of common interest. Typically involves offer and counter-offer, with compromises made by participants.
Agents as Intentional Systems

The philosopher Daniel Dennett coined the term intentional system to describe entities "whose behaviour can be predicted by the method of attributing belief, desires and rational acumen".

Is it legitimate or useful to attribute beliefs, desires, and so on, to computer systems?

With very complex systems a mechanistic explanation of their behaviour may not be practical or available. But the more we know about a system, the less we need to rely on animistic, intentional explanations of its behaviour.

As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operation; low-level explanations become impractical. The intentional stance is such an abstraction, which provides us with a convenient and familiar way of describing, explaining, and predicting the behaviour of complex systems.
Object-Oriented Programming vs Multi-Agent Systems

Object-Oriented Programming
- Objects are passive, i.e. an object has no control over method invocation
- Objects are designed for a common goal
- Typically integrated into a single thread

Multi-Agent Systems
- Agents are autonomous, i.e. pro-active
- Agents can have diverging goals, e.g. coming from different organizations
- Agents have their own thread of control
Agent-Oriented Programming

Paradigm                    | Structural unit | Relation to previous level
Machine Language            | Program         | -
Structured Programming      | Subroutine      | Bound unit of program
Object-Oriented Programming | Object          | Subroutine + persistent local state
Agent-Oriented Programming  | Agent           | Object + independent thread of control + initiative
Agent-Oriented Programming (Yoav Shoham)

Based on the agent definition: "An agent is an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments."
- The mental constructs will appear in the programming language itself.
- The semantics will be related to the semantics of the mental constructs.
- A computation will consist of agents performing speech-acts on each other.
AGENT0 (1/2)

AGENT0 is implemented in LISP.
Each agent in AGENT0 has 4 components:
- a set of capabilities (things the agent can do)
- a set of initial beliefs
- a set of initial commitments (things the agent will do)
- a set of commitment rules
The key component, which determines how the agent acts, is the commitment rule set.

AGENT0 (2/2)

Each commitment rule contains:
- a message condition
- a mental condition
- an action
On each 'agent cycle'...
- The message condition is matched against the messages the agent has received
- The mental condition is matched against the beliefs of the agent
- If the rule fires, then the agent becomes committed to the action (the action gets added to the agent's commitment set)
Example AGENT0 Rule

COMMIT(
  ( agent, REQUEST, DO(time, action)
  ),                                   ;;; msg condition
  ( B,
    [now, Friend agent] AND
    CAN(self, action) AND
    NOT [time, CMT(self, anyaction)]
  ),                                   ;;; mental condition
  self,
  DO(time, action)
)

This rule may be paraphrased as: if I receive a message from agent which requests me to do action at time, and I believe that:
- agent is currently a friend
- I can do the action
- at time, I am not committed to doing any other action
then commit to doing action at time.
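The commitment rule above can be rendered as a small Python sketch (AGENT0 itself is implemented in LISP; the class and method names below are invented for this illustration and are not part of Shoham's system):

```python
from dataclasses import dataclass, field

@dataclass
class Agent0Sketch:
    """Toy AGENT0-style agent: beliefs, capabilities, commitments."""
    beliefs: set = field(default_factory=set)
    capabilities: set = field(default_factory=set)
    commitments: list = field(default_factory=list)

    def can(self, action):
        return action in self.capabilities

    def committed_at(self, time):
        return any(t == time for t, _ in self.commitments)

    def handle_request(self, sender, time, action):
        """Commitment rule: commit iff the sender is believed to be a
        friend, the action is within our capabilities, and the time
        slot is still free (the mental condition of the rule above)."""
        if (("friend", sender) in self.beliefs
                and self.can(action)
                and not self.committed_at(time)):
            self.commitments.append((time, action))  # becomes committed

agent = Agent0Sketch(beliefs={("friend", "alice")},
                     capabilities={"open-door"})
agent.handle_request("alice", time=10, action="open-door")  # rule fires
agent.handle_request("bob", time=11, action="open-door")    # not a friend
print(agent.commitments)  # [(10, 'open-door')]
```

The second request is ignored because its mental condition fails, mirroring how an AGENT0 rule only fires when both the message and mental conditions match.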
An Overview of Decision Making

Individual decision making
- Explicit decision making
  - Decision trees
  - Rules
  - Automata
  - Single agent task specification languages
- Decision theoretic decision making
  - Markov Decision Processes (MDP)
  - Partially Observable Markov Decision Processes (POMDP)
- Declarative (logic-based) decision making
  - Theorem proving
  - Planning
  - Constraint satisfaction
Multiagent decision making
- Explicit
  - Mutual modeling
  - Norms
  - Organizations and Roles
  - Multiagent task specification languages
- Decision theoretic
  - Decentralized POMDPs (Dec-POMDP)
- Game theoretic
  - Auctions
- Declarative
  - Multiagent planning
  - Distributed constraint satisfaction
Agent Architectures

"[A] particular methodology for building [agents]. It specifies how… the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact." (P. Maes, 1991)

Three types:
- deliberative (symbolic/logical)
- reactive
- hybrid
Deliberative Architecture

We define a deliberative (or reasoning) agent or agent architecture to be one that:
- contains an explicitly represented, symbolic model of the world, and
- makes decisions via symbolic reasoning.
This views agents as knowledge-based systems.

We can say that a deliberative agent makes an action in three steps:
1. Sense
2. Plan
3. Act
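The sense-plan-act cycle can be sketched as follows (an illustrative skeleton only; the class, the trivial "planner", and the `dirt_at` belief are invented for this example, not a real framework):

```python
class DeliberativeAgent:
    """Minimal sense-plan-act skeleton for a deliberative agent."""

    def __init__(self):
        self.world_model = {}  # explicit symbolic model of the world

    def sense(self, percepts):
        """Sense: fold new percepts into the symbolic world model."""
        self.world_model.update(percepts)

    def plan(self, goal):
        """Plan: trivial stand-in for symbolic planning. If we believe
        there is dirt at the goal position, plan to go there and clean."""
        if self.world_model.get("dirt_at") == goal:
            return [f"goto{goal}", "suck"]
        return []

    def act(self, plan):
        """Act: execute the plan's actions in order."""
        for action in plan:
            print("executing", action)

agent = DeliberativeAgent()
agent.sense({"dirt_at": (0, 2)})  # Sense
p = agent.plan((0, 2))            # Plan: ['goto(0, 2)', 'suck']
agent.act(p)                      # Act
```

A real deliberative agent would replace the `plan` stub with an actual symbolic planner; only the three-step structure is the point here.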
Practical Reasoning

"Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes." (Bratman)

Human practical reasoning consists of two activities:
- deliberation: deciding what state of affairs we want to achieve;
- means-ends reasoning: deciding how to achieve these states of affairs.
The output of deliberation is intentions.
Intentions (1/4)

Agents need to determine ways of achieving intentions.
- If I have an intention to φ, you would expect me to devote resources to deciding how to bring about φ.
Intentions provide a filter for adopting other intentions, which must not conflict.
- If I have an intention to φ, you would not expect me to adopt an intention ψ such that φ and ψ are mutually exclusive.

Intentions (2/4)

Agents believe their intentions are possible.
- An agent believes there is at least some way that the intentions could be brought about.
- It would not be rational of me to adopt an intention to φ if I believed φ was not possible.
Agents do not believe they will not bring about their intentions.

Intentions (3/4)

Under certain circumstances, agents believe they will bring about their intentions.
- Still, it would not normally be rational of me to believe that I will certainly bring my intentions about: intentions can fail. Moreover, it does not make sense that if I believe φ is inevitable I would adopt it as an intention.
Agents track the success of their intentions, and are inclined to try again if their attempts fail.
- If an agent's first attempt to achieve φ fails, then all other things being equal, it will try an alternative plan to achieve φ.

Intentions (4/4)

Agents need not intend all the expected side effects of their intentions.
- If I believe φ ⇒ ψ and I intend that φ, I do not necessarily intend ψ also. (Intentions are not closed under implication.)
- This last problem is known as the side effect or package deal problem. I may believe that going to the dentist involves pain, and I may also intend to go to the dentist, but this does not imply that I intend to suffer pain!
Intentions - Summary
- Intentions drive means-ends reasoning
- Intentions persist
- Intentions constrain future deliberation
- Intentions influence beliefs upon which future practical reasoning is based

Means-Ends Reasoning

Given:
- a representation of the goal/intention to achieve
- a representation of the actions the agent can perform
- a representation of the environment
generate a plan to achieve the goal.
Agent Control Loop Version 1

while true do
    observe the world;
    update internal world model;
    deliberate about what intentions to achieve next;
    use means-ends reasoning to get a plan for the intention;
    execute the plan
end while
Deliberation (1/2)

How does an agent deliberate?
- begin by trying to understand what the options available to you are;
- choose between them, and commit to some.
Chosen options are then intentions.
Agent Control Loop Version 2

while true do
    get next percept p;
    B := brf(B, p);
    I := deliberate(B);
    P := plan(B, I);
    execute(P);
end while

Deliberation (2/2)

The deliberate function can be decomposed into two distinct functional components:
- option generation, in which the agent generates a set of possible alternatives. We represent option generation via a function, options, which takes the agent's current beliefs and current intentions, and from them determines a set of options (= desires).
- filtering, in which the agent chooses between competing alternatives, and commits to achieving them. In order to select between competing options, an agent uses a filter function.
Example of Logic Based Agent (1/3)

Cleaning robot with:
- Percepts p = {dirt, X, Y}
- Actions A = {turnRight, forward, suck, ...}
- Start: (0, 0, North)
- Goal: searching for and cleaning dirt

Example of Logic Based Agent (2/3)

Beliefs B are:
- {dirt, 0, 2} {dirt, 1, 2}
- {pos, 0, 0, East}
Options D:
- {clean, 0, 2}
- {clean, 1, 2}
Example of Logic Based Agent (3/3)

After filtering, the intention is:
- {clean, 0, 2}
Plan P:
- {turnRight, forward, forward, suck}

Commitment Strategies

- Blind commitment: a blindly committed agent will continue to maintain an intention until it believes the intention has actually been achieved. Blind commitment is also sometimes referred to as fanatical commitment.
- Single-minded commitment: a single-minded agent will continue to maintain an intention until it believes that either the intention has been achieved, or else that it is no longer possible to achieve the intention.
- Open-minded commitment: an open-minded agent will maintain an intention as long as it is still believed possible.
Agent Control Loop Version 3

while true do
    get next percept p;
    B := brf(B, p);
    D := options(B, I);
    I := filter(B, D, I);
    P := plan(B, I);
    execute(P);
end while

Commitment

An agent has commitment both to ends (i.e., the state of affairs it wishes to bring about) and to means (i.e., the mechanism via which the agent wishes to achieve the state of affairs).
Currently, our agent control loop is overcommitted, both to means and to ends.
Modification: replan if ever a plan goes wrong.
Agent Control Loop Version 4

while true do
    get next percept p;
    B := brf(B, p);
    D := options(B, I);
    I := filter(B, D, I);
    P := plan(B, I);
    while not empty(P) do
        a := first(P); execute(a);
        P := rest(P);
        get next percept p;
        B := brf(B, p);
        if not sound(P, B, I) then
            P := plan(B, I);
    end while
end while

Commitment

Still overcommitted to intentions: the agent never stops to consider whether or not its intentions are appropriate.
Modification: stop to determine whether intentions have succeeded or whether they are impossible (single-minded commitment).
Agent Control Loop Version 5

while true do
    get next percept p;
    B := brf(B, p);
    D := options(B, I);
    I := filter(B, D, I);
    P := plan(B, I);
    while not (empty(P) or succeeded(B, I) or impossible(B, I)) do
        a := first(P); execute(a);
        P := rest(P);
        get next percept p;
        B := brf(B, p);
        if not sound(P, B, I) then
            P := plan(B, I);
    end while
end while

Intention Reconsideration

Our agent gets to reconsider its intentions once every time around the outer control loop, i.e., when:
- it has completely executed a plan to achieve its current intentions; or
- it believes it has achieved its current intentions; or
- it believes its current intentions are no longer possible.
This is limited in the way that it permits an agent to reconsider its intentions.
Modification: reconsider intentions after executing every action.
Agent Control Loop Version 6

while true do
    get next percept p;
    B := brf(B, p);
    D := options(B, I);
    I := filter(B, D, I);
    P := plan(B, I);
    while not (empty(P) or succeeded(B, I) or impossible(B, I)) do
        a := first(P); execute(a);
        P := rest(P);
        get next percept p;
        B := brf(B, p);
        D := options(B, I);
        I := filter(B, D, I);
        if not sound(P, B, I) then
            P := plan(B, I);
    end while
end while

Intention Reconsideration

But intention reconsideration is costly! A dilemma:
- an agent that does not stop to reconsider its intentions sufficiently often will continue attempting to achieve its intentions even after it is clear that they cannot be achieved, or that there is no longer any reason for achieving them;
- an agent that constantly reconsiders its intentions may spend insufficient time actually working to achieve them, and hence runs the risk of never actually achieving them.
Solution: incorporate an explicit meta-level control component that decides whether or not to reconsider.

Agent Control Loop Version 7

while true do
    get next percept p;
    B := brf(B, p);
    D := options(B, I);
    I := filter(B, D, I);
    P := plan(B, I);
    while not (empty(P) or succeeded(B, I) or impossible(B, I)) do
        a := first(P); execute(a);
        P := rest(P);
        get next percept p;
        B := brf(B, p);
        if reconsider(B, I) then
            D := options(B, I);
            I := filter(B, D, I);
        if not sound(P, B, I) then
            P := plan(B, I);
    end while
end while
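The final control loop can be rendered as a schematic Python function. This sketch fixes only the control flow; brf, options, filter, plan, succeeded, impossible, sound and reconsider are all supplied by the caller as stubs, so it is an illustration of the loop's structure, not a real BDI implementation:

```python
def bdi_loop(percept_stream, brf, options, filter_, plan,
             succeeded, impossible, sound, reconsider, execute,
             B=None, I=None, max_cycles=100):
    """Schematic BDI control loop with meta-level intention
    reconsideration (agent control loop version 7)."""
    B, I = B or set(), I or set()
    percepts = iter(percept_stream)
    for _ in range(max_cycles):          # 'while true', bounded for the demo
        p = next(percepts, None)
        if p is None:
            break
        B = brf(B, p)                    # belief revision
        D = options(B, I)                # option generation
        I = filter_(B, D, I)             # deliberation
        P = plan(B, I)                   # means-ends reasoning
        while P and not succeeded(B, I) and not impossible(B, I):
            a, P = P[0], P[1:]           # a := first(P); P := rest(P)
            execute(a)
            p = next(percepts, None)
            if p is not None:
                B = brf(B, p)
            if reconsider(B, I):         # meta-level control
                D = options(B, I)
                I = filter_(B, D, I)
            if not sound(P, B, I):       # replan if the plan went wrong
                P = plan(B, I)
    return B, I

# Minimal demo with trivial stubs: one percept, one intention, one action.
trace = []
B, I = bdi_loop(
    percept_stream=[{"dirt"}],
    brf=lambda B, p: B | p,
    options=lambda B, I: {"clean"} if "dirt" in B else set(),
    filter_=lambda B, D, I: D,
    plan=lambda B, I: ["suck"] if "clean" in I else [],
    succeeded=lambda B, I: False,
    impossible=lambda B, I: False,
    sound=lambda P, B, I: True,
    reconsider=lambda B, I: False,
    execute=trace.append,
)
print(trace)  # ['suck']
```

Swapping in a non-trivial `reconsider` function is exactly the design point of version 7: the cost/benefit trade-off of reconsideration is moved into one replaceable component.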
Optimal Intention Reconsideration

Kinny and Georgeff experimentally investigated the effectiveness of intention reconsideration strategies. Two different types of reconsideration strategy were used:
- bold agents never pause to reconsider intentions, and
- cautious agents stop to reconsider after every action.
Dynamism in the environment is represented by the rate of world change, ɣ.

Kinny and Georgeff's Results

- If ɣ is low (i.e., the environment does not change quickly), then bold agents do well compared to cautious ones. This is because cautious ones waste time reconsidering their commitments while bold agents are busy working towards, and achieving, their intentions.
- If ɣ is high (i.e., the environment changes frequently), then cautious agents tend to outperform bold agents. This is because they are able to recognize when intentions are doomed, and also to take advantage of serendipitous situations and new opportunities when they arise.
Criticism of Symbolic AI

There are many unsolved problems associated with symbolic AI.

"Most of what people do in their day to day lives is not problem-solving or planning, but rather it is routine activity in a relatively benign, but certainly dynamic, world." (Brooks, 1991)

These problems have led some researchers to question the viability of the whole paradigm, and to the development of reactive architectures. Although united by a belief that the assumptions underpinning mainstream AI are in some sense wrong, reactive agent researchers use many different techniques.

The representation/reasoning problem

- How to symbolically represent information about complex real-world entities and processes?
- How to translate the perceived world into an accurate, adequate symbolic description, in time for that description to be useful? …vision, speech recognition, learning.
- How to get agents to reason with this information in time for the results to be useful? …knowledge representation, automated reasoning, planning.
- During computation, the dynamic world might change, and thus the solution may no longer be valid!
- How to represent temporal information, e.g., how a situation changes over time?
Reactive Architecture

Brooks' Design Criteria
- An agent must cope appropriately and in a timely fashion with changes in its environment.
- An agent should be robust with respect to its environment.
- An agent should be able to maintain multiple goals and switch between them.
- An agent should do something; it should have some purpose in being.

Brooks - Key Ideas

Brooks has put forward three theses:
1. Intelligent behaviour can be generated without explicit representations of the kind that symbolic AI proposes.
2. Intelligent behaviour can be generated without explicit abstract reasoning of the kind that symbolic AI proposes.
3. Intelligence is an emergent property of certain complex systems.

Brooks - Behaviour Languages
- Situatedness and embodiment: the world is its own best model and it gives the agent a firm ground for its reasoning.
- Intelligence and emergence: "intelligent" behaviour arises as a result of an agent's interaction with its environment. Also, intelligence is "in the eye of the beholder"; it is not an innate, isolated property.
The Subsumption Architecture (1/2)

To illustrate his ideas, Brooks built some robots based on his subsumption architecture.
- A subsumption architecture is a hierarchy of task-accomplishing behaviours.
- Each behaviour is a rather simple rule-like structure.
- Each behaviour "competes" with others to exercise control over the agent.
Instead of the traditional decomposition into functional modules, the decomposition is based on task-achieving behaviours.

The Subsumption Architecture (2/2)

- Lower layers represent more primitive kinds of behaviour (such as avoiding obstacles), and have precedence over layers further up the hierarchy.
- The resulting systems are, in terms of the amount of computation they do, extremely simple.
- Some of the robots do tasks that would be impressive if they were accomplished by symbolic AI systems.
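The layered arbitration described above can be sketched in a few lines of Python. This is an illustrative sketch only (real subsumption networks also let layers inhibit and suppress each other's signals, which is omitted here), and all names are invented for the example:

```python
class Behaviour:
    """A simple situation-action rule: fires when its condition holds."""
    def __init__(self, name, condition, action):
        self.name, self.condition, self.action = name, condition, action

def subsumption_step(layers, percepts):
    """Return the action of the first matching layer. Layers are ordered
    with the lowest (most primitive, highest-precedence) behaviour first,
    so e.g. obstacle avoidance subsumes wandering."""
    for behaviour in layers:
        if behaviour.condition(percepts):
            return behaviour.action
    return None

layers = [
    Behaviour("avoid",  lambda p: "obstacle" in p, "turn-away"),
    Behaviour("wander", lambda p: True,            "move-randomly"),
]
print(subsumption_step(layers, {"obstacle"}))  # turn-away
print(subsumption_step(layers, set()))         # move-randomly
```

Note how little computation one step performs: there is no world model and no planning, only condition checks in priority order.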
Example of Reactive Architecture

State Machines

Finite State Machines

A finite state machine is a machine that is in one state among a finite number of states. A finite state machine is defined by:
- A finite number of states S
- A finite number of transitions T between states
- An initial state s₀ ∊ S
- The current state s ∊ S

Example: light switch
Example: ambulance
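The light-switch example above, written directly from the definition (S, T, s₀, s), might look as follows; the class is a minimal sketch whose names are invented for this illustration:

```python
class FiniteStateMachine:
    """Minimal FSM: states S, transitions T, initial state s0,
    current state s."""
    def __init__(self, states, transitions, initial):
        self.states = states            # S
        self.transitions = transitions  # T: (state, event) -> state
        self.state = initial            # s = s0

    def fire(self, event):
        """Follow the transition for (current state, event), if any;
        unknown events leave the state unchanged."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# The light switch: two states, one event toggling between them.
switch = FiniteStateMachine(
    states={"off", "on"},
    transitions={("off", "press"): "on", ("on", "press"): "off"},
    initial="off",
)
print(switch.fire("press"))  # on
print(switch.fire("press"))  # off
```

The ambulance example would differ only in its state and transition tables (e.g. states like patrolling, loading, transporting), which is exactly the appeal noted below: simple, predictable, fast.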
Benefits and drawbacks

Benefits:
- Simple
- Predictable
- Flexible
- Fast
- Verifiable/Provable

Drawbacks:
- Complexity increases faster than the number of states and transitions
Statecharts

Statecharts extend finite state machines with:
- Hierarchical states
- Concurrent states
- Dataflow

FSM Calculator (example figure)

Statechart Calculator (example figure)

Advantages Of Reactive Systems
- Simplicity, i.e. modules have high expressiveness
- Computational tractability
- Robustness against failure, i.e. the possibility of modeling redundancies
- Overall behavior emerges from interactions
Problems With Reactive Systems
- The local environment must contain enough information to make a decision. It is hard to take non-local information into account.
- Behavior emerges from interactions ⇒ how to engineer the system in the general case?
- How to model long-term decisions?
- How to implement varying goals?
- Hard to engineer, especially large systems with many layers that interact.

Hybrid Architecture
Hybrid Architecture

A hybrid system is neither a completely deliberative nor a completely reactive approach. An obvious approach is to build an agent out of two (or more) subsystems:
- a deliberative one, containing a symbolic world model, which develops plans and makes decisions in the way proposed by symbolic AI; and
- a reactive one, which is capable of reacting to events without complex reasoning.
Often, the reactive component is given some kind of precedence over the deliberative one. This kind of structuring leads naturally to the idea of a layered architecture.

In a layered architecture, an agent's control subsystems are arranged into a hierarchy, with higher layers dealing with information at increasing levels of abstraction. A key problem in layered architectures is what kind of control framework to embed the agent's subsystems in, to manage the interactions between the various layers.
- Horizontal layering: layers are each directly connected to the sensory input and action output. In effect, each layer itself acts like an agent, producing suggestions as to what action to perform.
- Vertical layering: sensory input and action output are each dealt with by at most one layer.
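A horizontally layered agent with reactive precedence can be sketched as follows (an illustrative sketch: the mediation rule, layer functions and action names are all invented for this example):

```python
def hybrid_step(percepts, reactive_layer, deliberative_layer):
    """Horizontally layered hybrid control: both layers see the percepts
    and may each suggest an action; a fixed mediation rule gives the
    reactive layer precedence whenever it fires."""
    suggestion = reactive_layer(percepts)
    if suggestion is not None:   # reactive layer wins in emergencies
        return suggestion
    return deliberative_layer(percepts)

def reactive(percepts):
    """Fires only on immediate threats; otherwise defers (returns None)."""
    return "brake" if "obstacle" in percepts else None

def deliberative(percepts):
    """Stands in for a symbolic planner; always has a considered plan."""
    return "follow-planned-route"

print(hybrid_step({"obstacle"}, reactive, deliberative))  # brake
print(hybrid_step(set(), reactive, deliberative))         # follow-planned-route
```

The mediation function is the "control framework" problem mentioned above in miniature: here it is a hard-coded precedence rule, but choosing and tuning it is the key design issue in layered architectures.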
Summary

- Originally (1956-1985), pretty much all agents designed within AI were symbolic reasoning agents. The purest expression of this view proposes that agents use explicit logical reasoning in order to decide what to do.
- Problems with symbolic reasoning led to a reaction against this: the so-called reactive agents movement, 1985-present.
- From 1990 to the present, a number of alternatives have been proposed, notably hybrid architectures, which attempt to combine the best of reasoning and reactive architectures.

Agent Architectures Summary
Deliberative Architectures
- Properties
  - Internal state (using a symbolic representation)
  - Search-based decision making
  - Goal directed
- Benefits
  - Nice and clear (logical) semantics
  - Easy to analyze, by proving properties
- Problems
  - Can't react in a timely manner to events that require immediate actions
  - Intractable algorithms
  - Hard to create a symbolic representation from continuous sensor data: the anchoring problem

Reactive Agent Architectures
- Properties
  - No explicit world model
  - Rule-based decision making
- Benefits
  - Efficient
  - Robust
- Problems
  - The local environment must contain enough information to make a decision
  - Easy to build small agents, but hard to build agents with many behaviors or rules (emergent behavior)
Hybrid Agent Architectures
- Properties
  - Tries to combine the good parts of both reactive and deliberative architectures.
  - Usually layered architectures.
- Benefits
  - Attacks the problem at different abstraction levels.
  - Has the benefits of both architecture types.
- Problems
  - Hard to combine the different parts.