Encoding Higher-Level Cognition as
Concurrent Communicating Processes
Pierre Bonzon
University of Lausanne, Switzerland
pierre.bonzon@unil.ch
Abstract
We present a computational encoding of higher-level cognition
that equates mental activity with concurrent communicating
processes within a model of social agents. We instantiate this
encoding in the form of a virtual machine interpreting
dialogs. This machine can interpret itself, thus allowing for
introspective processes. We argue that the resulting layered
model enjoys a property it shares with a conscious mind: a
proto-form of consciousness can be achieved through the
closure of two mental activities, namely deliberation and
introspection.
1. Introduction
Briefly stated, the information processing hypothesis of
cognitive science postulates that the human brain
basically behaves like a computer. But then, the objection
goes, many of the brain’s capabilities, such as thinking,
consciousness, and so on, cannot be reduced to
computable functions, and thus are out of the reach of
computers. J. Searle’s well-known “Chinese room”
argument exemplifies this objection by rightfully pointing
out that syntax in itself is insufficient to produce
meaning. An algorithm, defining a static functional
relation between input and output sets, is not to be
confused however with its embedding within an
interactive system embodying multiple dynamic (i.e.,
time-varying) non-functional relations with its
environment. What matters then about syntax is not the
output string resulting from a computation, but the
concurrent interactive processes it allows one to define
(something similar can be said of human discourse,
whose meaning lies not in the utterances themselves, but for
example in the speech acts they imply). In other words,
systems enjoy properties that are not traceable and/or
reducible to their constituents, but emerge from their
overall behaviour. For example, life is considered an
emergent property of naturally embodied systems, namely
biological systems. Similarly, emergent properties could
well arise out of artificially embodied systems comprising
sensors and effectors connected to a computer mimicking
the human brain. The behavioural properties of these
systems would then follow from concurrent interactive
processes (as opposed to a single function computation).
Their evolution would be essentially unpredictable i.e.,
their future states could not be derived beforehand, even
though their behaviour might be reproducible and
deterministic. This “process” approach can be contrasted
with the traditional “analytical” approach, based for
instance on differential equations. In the analytical
approach, a future state can be determined without
computing the intermediate states the system will have to
go through. In the process approach, one must necessarily
compute all those intermediate states: in other words, we
cannot determine future states without unfolding time.
As a consequence, running processes are systems that
embed time as a concrete (although possibly simulated)
dimension: there is no possible substitute for it (unlike what
we have, for instance, for ordinary space, be it in 1D, 2D or
3D representations, which allow selected properties to
be computed at any point).
There might be at least two diametrically opposed
approaches towards purely computational embodied
systems (and a whole spectrum in between) i.e., one
might start with the simplest of models, and look for the
emergence of formalisms, or take the opposite approach
i.e., start from the most formal of systems, and look for
the emergence of the simplest of behaviour (e.g.,
memory). While connectionist models seem to follow the
former approach, we favour the latter one for the
following reasons. First of all, and similarly to theories in
the physical sciences that require advanced mathematical
concepts to account for the complexity of real
phenomena, artificially embodied systems may have to
rely on sophisticated computational abstractions (e.g.,
concurrent communicating systems). Secondly, similarly
to biological systems where the functioning of a living
cell cannot be explained in terms of its atomic
constituents but necessitates the intermediate level of
molecular structures with definite functions, artificial
embodied systems should have a layered structure with
explicit interlayer relationships and processes.
We will take further inspiration from a philosophical
tradition, namely the so-called layered model [8]. Simple
layered models such as Oppenheim and Putnam’s
hierarchy [11] (fig. 1) try to elicit the regularity of
entities of increasing complexity through a level-to-level
relationship (i.e., a kind of aggregation property) that is
uniform from top to bottom.
Social groups
Multi-cellular living things
Cells
Molecules
Atoms
Fig. 1: Oppenheim and Putnam’s layered model
The recurring debate about reduction vs. emergence can
then be expressed by the following question: to what
extent can higher levels be reduced to lower ones or, on
the contrary, enjoy properties not found in their lower
parts? Cells, for example, truly enjoy a property that
cannot be reduced to that of molecules, namely life. Life
therefore is an emergent property of cells. Mind, at first
sight, similarly seems to be an emergent property of some
multi-cellular living things, namely humans. However, as
pointed out by J. Kim [8], nothing in the idea of emergence
prohibits mind’s emergence out of non-biological
processes. This is also well in line with R. Brooks’ vision,
which can be summarized by the following motto: “the
behaviors are the building blocks, and the functionality is
emergent” [5].
Our layered model will comprise various relationships
i.e., a level can be interpreted, compiled or simply
represented by its lower level. Our framework will thus
take the form of the successive embedding of
computational processes (fig. 2).
Concurrent virtual machines
Concurrent agent dialogs
Agent plans
Virtual agent machine
Physical machine
Fig. 2: Layered model of a social agent
We shall distinguish, from the bottom to the top
- a physical machine
- a virtual agent machine interpreted by this machine
- agent plans interpreted by this virtual agent machine
- agent dialogs that can be compiled into agent plans
- concurrent virtual machines defined by agent dialogs.
As a result of this construction, the agent machine can
execute itself (i.e., it interprets compiled models of itself).
While this framework does not constitute stricto sensu a
model of the mind, it will allow us to depict one of its
most intriguing properties i.e., the possibility to operate
on (e.g., to think about) itself via introspection. This will
be implemented using reflective processes driven by self-executing machines. Before that, we shall give a formal
description of each level.
2. A model of social agents with plans
We start the construction of our layered model with a
description of its second and third level i.e., by showing
how a social agent’s plans can be interpreted by a virtual
machine. Towards this end, we shall use a simplified
machine obtained from a more general machine for
classes of communicating agents based on deduction [3].
A physical machine with deductive capabilities is then all
we will need in order to get an executable model. This can
be obtained by formulating the virtual machine as a
logical program and by inserting an additional layer above
the physical machine, namely the compilation of this
program into machine code (e.g., a Prolog compilation).
According to a well established theory [14], autonomous
intelligent agents (thereafter called agents in short) are
entities enjoying reactive, proactive and social (i.e.,
communicative) capabilities. Reactivity means that they
are able to respond to changes in their environment, pro-activeness refers to their capacity to act towards a goal,
and finally sociability allows them to interact with other
agents, and possibly with humans. Our model of an
agent’s behavior comprises two specific components,
namely
- a core model, defining the reactivity and pro-activeness
of individual agents
- a communication model, representing the social ability
of agents in a multi-agent system.
2.1 The core model
In order to achieve pro-activeness, we extend
Wooldridge’s definition of a reactive agent to include the
notion of plans. Intuitively, an agent’s plan can be
described as an ordered set of actions that may be taken,
under certain conditions, to meet a certain objective.
Formally, let us assume
- a set of actions A = {a1, a2,…}
- a set of plans P = {p1, p2,…}
and two predicates plan and do. To each plan p ∈ P we
associate first a set of implications
“condition” → plan(p)
where “condition” is a condition on the state of the agent
that defines the applicability of plan p. In turn, another set
of implications
“condition” → do(p, a)
defines the executable actions a of plan p.
Example: a vacuum cleaner robot
To illustrate this, let us consider a vacuum cleaner robot
that chooses to work i.e., to execute actions move and
suck, if it senses some dirt, or otherwise goes home (e.g.,
at location 0) by executing action back. These two
behaviors correspond to two possible plans, i.e. work and
home. The robot's overall behavior can be defined by the
following implications:
∃X dirt(X) → plan(work)
¬∃X dirt(X) → plan(home)
∀X in(X) ∧ dirt(X) → do(work, suck(X))
∀X in(X) ∧ ¬dirt(X) → do(work, move(X))
∀X in(X) → do(home, back(X))
where the predicates in(X) and dirt(X) mean "the agent
is at location X" and "the agent has sensed dirt at location
X", actions move(X) and back(X) mean “move one step
forward from location X" and "back one step from
location X", and action suck(X) has an obvious meaning.
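As an illustration, these implications can be written directly as Prolog clauses. The following is a minimal sketch, not the virtual machine itself: the agent state l is represented here by dynamic facts in/1 and dirt/1, and negation as failure (\+) stands for ¬.

    :- dynamic in/1, dirt/1.          % the agent state l as dynamic facts

    plan(work) :- dirt(_).            % ∃X dirt(X) → plan(work)
    plan(home) :- \+ dirt(_).         % ¬∃X dirt(X) → plan(home)

    do(work, suck(X)) :- in(X), dirt(X).      % suck where the agent stands on dirt
    do(work, move(X)) :- in(X), \+ dirt(X).   % otherwise move one step forward
    do(home, back(X)) :- in(X).               % back one step towards location 0

    % example query:
    % ?- assertz(in(2)), assertz(dirt(2)), plan(P), do(P, A).
    % P = work, A = suck(2).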
Let us further assume
- a set of states E = {e1, e2,…} for the environment
- a set of states L = {l1, l2,…} for the agent.
The core model will then be given by the following
procedures defining the run of an agent i.e., the
successive states it will go through, given its initial states
e0 and l0, where l ⊢ p means “p can be deduced from l”.
procedure run(e,l)
    loop sense(e,l);
         react(e,l)

procedure sense(e,l)
    if l ⊢ percept(p) ∧ process(p,a)
    then l ← τl(l,a)

procedure react(e,l)
    if l ⊢ plan(p) ∧ do(p,a)
    then e ← τe(e,a)

Fig. 3: The core agent model
An agent’s run is thus defined as a loop alternating sense
and react steps. In the first step, the deduction of
percept(p) ∧ process(p,a) means that the percept p has
been captured and that an action a must be done to
process this percept, and then the state transformer
function τl calculates the transition l ← τl(l,a)
representing the change in the agent’s state. In the next
step, the deduction of plan(p) ∧ do(p,a) first selects an
agent’s plan (thus accounting for its pro-activeness), then
picks up an applicable action, and finally the transition
e ← τe(e,a) represents the effect of this action on the
environment.
A careful examination of the above procedures shows that
plans are not executed blindly. As the agent state l is
possibly updated at the beginning of each cycle, the
deductions that follow can lead each time to a different
plan. This mechanism allows an agent to adopt a new plan
whenever a certain condition occurs, and then to react
with an appropriate action. Plans and actions are thus
selected and executed one at a time, as is required of
truly reactive systems.
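To make the cycle concrete, the following self-contained Prolog sketch mimics the run/sense/react loop of fig. 3 for the vacuum cleaner robot. It only illustrates the mechanism and makes several simplifying assumptions: the agent state l is the Prolog database, the environment e is reduced to a list of dirty cells, and the state transformer functions τl and τe are folded into a single effect/3 predicate.

    :- use_module(library(lists)).    % member/2, delete/3
    :- dynamic in/1, dirt/1.

    in(0).                            % initial agent state l0

    % the run loop of fig. 3, stopped once the environment is clean and the agent is home
    run(E) :- ( E == [], in(0) -> true ; sense(E), react(E, E1), run(E1) ).

    % sense: store a newly perceived dirty cell, if any
    sense(E) :- ( member(X, E), \+ dirt(X) -> assertz(dirt(X)) ; true ).

    % react: deduce a plan and an applicable action, then apply its effect
    react(E, E1) :- plan(P), do(P, A), !, write(A), nl, effect(A, E, E1).

    % the plans and actions of section 2.1
    plan(work) :- dirt(_).
    plan(home) :- \+ dirt(_).
    do(work, suck(X)) :- in(X), dirt(X).
    do(work, move(X)) :- in(X), \+ dirt(X).
    do(home, back(X)) :- in(X).

    % state transformers, acting on the environment e and on the agent state l
    effect(suck(X), E, E1) :- retract(dirt(X)), delete(E, X, E1).
    effect(move(X), E, E)  :- X1 is X + 1, retract(in(X)), assertz(in(X1)).
    effect(back(X), E, E)  :- X1 is X - 1, retract(in(X)), assertz(in(X1)).

    % ?- run([2]).      % dirt at cell 2
    % prints move(0), move(1), suck(2), back(2), back(1) on successive lines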
2.2 The communication model
Agent communication models are commonly based on
speech act theory [12], and thus rely on the so-called
“mental attitudes” of agents, such as goals, beliefs, and so
on. As a result, communicative actions require strong
preconditions on the part of the agents. Recently, a
completely new approach has been advocated [6]. It
defines logical communicative primitives as “neutral”
actions, so that core agent models do not need to
distinguish between goals and beliefs. An important
feature in this new approach is the use of synchronized
communication. Two agents wishing to communicate
must be ready for an exchange i.e., each of them must
have issued a predefined message. When put together,
these two messages must constitute one of two possible
pairs exchanged between a sender and a receiver. In the
tell/ask pair, the sender proposes some data that the
receiver uses to make a deduction of his own. In the
req/offer pair, the sender proposes a conclusion that the
receiver will try to deduce using means of his own. The
processing of these two pairs of messages thus defines
deductive (in the case of the tell/ask pair) or abductive
(for the req/offer pair) tasks that are delegated to the
receiver.
All that is required in order to implement concurrent
communicating processes is simple data communication
without any need for either deduction or abduction. We
shall therefore give up the req/offer pair and define a
simplified tell/ask pair. In what follows, r designates the
receiver, s the sender. The simplified tell/ask pair is
defined in three steps:
- message tell(r, φ) from s provides r with the data φ
- message ask(s, ψ) from r requires s to provide data ψ
- when synchronized with s, r then computes a most
general substitution θ such that φθ = ψθ.
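A minimal Prolog sketch of this exchange may help (the rendezvous itself is not modelled here; a told message is simply buffered under a hypothetical pending/3 predicate, and Prolog's built-in unification plays the role of the most general substitution computed when the matching ask is issued):

    :- dynamic pending/3.             % pending(Sender, Receiver, Data)

    % s provides r with the data Phi
    tell(S, R, Phi) :- assertz(pending(S, R, Phi)).

    % r obtains from s some data unifying with Psi
    ask(S, R, Psi) :- retract(pending(S, R, Psi)).

    % ?- tell(sense, deliberate, store(dirt(2))),
    %    ask(sense, deliberate, store(P)).
    % P = dirt(2).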
2.3 The integrated multi-agent system
To integrate a core agent model with a communication
model one essentially needs to extend the individual agent
model of section 2.1 into a multi-agent system of identical
agents. The pair of primitive messages just introduced
above will be considered as possible agent actions, and
incorporated into plans. To achieve synchronization,
agent plans will have to include explicit synchronization
conditions. Defining these conditions is quite an intricate
task, so we will not dwell on that (see [4] for an
example). As humans engage in communication
without referring to such conditions, we shall step up one
level in our layered model, and present a language for
agent dialogs that relies on compiled synchronizations.
Let us just mention here however that the synchronization
of each tell/ask pair requires the virtual machine to raise
a pair of corresponding ack flags. As a result, messages
can be traced across layer boundaries.
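A possible representation of these flags (an assumption consistent with the above, not the actual machine code) is a simple fact base maintained by the machine layer and queried by the upper layers:

    :- dynamic ack/2.

    % recorded by the virtual machine when a tell/ask pair synchronizes
    raise_ack(S, R, A) :- assertz(ack(S, tell(R, A))).

    % ?- raise_ack(introspect, deliberate, report(end(plan(home)))),
    %    ack(introspect, tell(deliberate, X)).
    % X = report(end(plan(home))).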
4
3. A language for concurrent agent dialogs
The syntax of this language is defined by the following
grammar, where [m1|[m2|…[]]]=[m1,m2,…]. We do not
reproduce the usual syntax for first order expressions
(note though that our variables start with capital letters).
<dialog>      ::= dialog(<dialogName>(<dialogParams>), <varList>, <mesTree>)
<varList>     ::= [] || [<varName>|<varList>]
<mesTree>     ::= [] || <seq> || [<alt>]
<seq>         ::= [<mes>|<mesTree>]
<alt>         ::= <guardMes> || (<guardMes>;<alt>)
<guardMes>    ::= <mesTree> || (<guard>|<mesTree>)
<mes>         ::= <messageName>(<messageParams>)
<messageName> ::= ask || tell || start || end || execute

Fig. 4: BNF productions for agent dialogs
Each dialog consists of a tree structure whose sequences
contain messages separated by a conjunctive “,” and end
with alternatives containing guarded messages (i.e.,
messages that might be subject to conditions) separated
by a disjunctive “;”. Besides the ask/tell communicating
pair introduced in 2.2, the possible messages are start/end
(to create and delete a dialog thread) and execute
(allowing for the execution of any non-communicating
action, including start/end). Each dialog thread is
re-entrant and will be automatically resumed at the end of
each sequence and/or alternative.
The language just briefly reviewed can be compiled into
agent plans comprising synchronizing conditions, and
thus executed (i.e., indirectly interpreted) by the virtual
machine. Its precise operational semantics can then be
given by compiling functions [4]. As dialogs can contain
execute messages, this language constitutes a general
model of concurrent communicating processes. As an
example (of as yet non-communicating processes), consider
the following two dialogs that, when executed as
concurrent threads, will lead to the same behaviour as the
core agent machine defined in section 2.1
dialog(sense, [P,A],
    [((percept(P), process(P,A)
       | [execute(A)]))])

dialog(react, [P,A],
    [((plan(P), do(P,A)
       | [execute(A)]))])

Fig. 5: The core agent model as concurrent dialogs
To establish the equivalence, the effect of executing the
actions A selected by predicates process(P,A) and do(P,A)
must represent the transitions l ← τl(l,a) and e ← τe(e,a)
defined in section 2.1. As a result, an agent can be
considered as a multi-threaded entity interleaving
concurrent dialogs. Note that messages can be exchanged
between different threads of the same agent, as well as
between different agents. Furthermore, as dialogs are
compiled into plans, an agent can execute itself.
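The following self-contained Prolog sketch illustrates this interleaving under assumed representations: each dialog of fig. 5 is reduced to a single guarded step held by a hypothetical step/3 predicate, and a cycle/0 scheduler resumes every thread once per cycle. The stub facts describe an agent that has perceived dirt at cell 2 and currently stands there.

    % stub agent state (assumed example data)
    percept(dirt(2)).
    process(dirt(2), store(dirt(2))).
    in(2).
    dirt(2).
    plan(work) :- dirt(_).
    do(work, suck(X)) :- in(X), dirt(X).

    % one guarded alternative per dialog: deduce the guard, then execute the action
    step(sense, (percept(P), process(P, A)), A).
    step(react, (plan(P), do(P, A)), A).

    % a scheduling cycle resumes each dialog thread once, in turn
    cycle :-
        forall(step(Thread, Guard, Action),
               (   call(Guard)
               ->  format('~w: execute(~w)~n', [Thread, Action])
               ;   true )).

    % ?- cycle.
    % sense: execute(store(dirt(2)))
    % react: execute(suck(2))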
4. Introspective agents
A self-executing agent run is defined as follows: the
procedures defining the core agent model of section 2.1
will first interpret the sense/react dialogs of fig. 5, and
these in turn will interpret an agent’s plans, with sensing
and reacting occurring concurrently. Introspective agent
models will then follow from self-executing runs
involving dialogs that represent extensions of the core
machine. Towards this end, we shall first define an
extension that allows for an explicit plan deliberation.
4.1 Deliberative agents
In contrast to reactive agents (whose plan selection occurs
through the deduction l ⊢ plan(p) and is thus
conditional on their current state only), deliberative agents
can take into account their current action in order to select
their plan. Towards this end we shall define plan(p) as
servers i.e., as a set of dialogs, each of which will
repeatedly tell react a particular plan p. Furthermore,
servers will be started and ended through a new dialog
deliberate. This new dialog will in turn be told by both
sense and react the last action they performed. Dialog
deliberate is thus a communicating process that will
execute actions controlling the servers. Similarly to react,
these actions will be deduced from deliberative plans
defined by implications “condition” → intend(p, a) or
assertions intend(p, a), where p represents the last action
performed by either sense or react, and a is a new action.
Suppose that predicate process is defined uniformly as
∀P process(P, store(P)). Then
∃X dirt(X) → plan(work)
¬∃X dirt(X) → plan(home)
might be represented by the following deliberative plan
(other similar plans might refer to move and back actions)
∀X intend(store(dirt(X)), switch(work))
¬∃X dirt(X) → ∀Y intend(suck(Y), switch(home))
where action switch(p) ends the current plan server (if
there is any) and starts a new server plan(p). This leads
to the following extension, where act now replaces react
dialog(sense, [P,A],
    [((percept(P), process(P,A)
       | [execute(A), tell(deliberate,A)]))])

dialog(deliberate, [X,P,A],
    [ask(X,P),
     ((intend(P,A)
       | [execute(A)]))])

dialog(plan(P), [],
    [tell(act,P)])

dialog(act, [P,A],
    [ask(plan(P),P),
     ((do(P,A)
       | [execute(A), tell(deliberate,A)]))])

Fig. 6: Dialogs for defining a deliberative agent
5
In contrast to act, sense still receives its data via a
predicate i.e., percept(p), an indication of its non-deliberative
interface with underlying sensors. Note that
in message ask(X,P), the use of variable X allows
deliberate to communicate with any dialog.
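A small Prolog sketch (under the encoding assumed above) shows how deliberate turns the last action P it has been told into an action A controlling the plan servers; the helper deliberative_action/3 is hypothetical and only makes the effect of a switch explicit.

    :- dynamic dirt/1.

    dirt(2).                                    % the agent has sensed dirt at cell 2

    % the deliberative plans of section 4.1
    intend(store(dirt(_)), switch(work)).                 % ∀X intend(store(dirt(X)), switch(work))
    intend(suck(_), switch(home)) :- \+ dirt(_).          % ¬∃X dirt(X) → ∀Y intend(suck(Y), switch(home))

    % the effect of a switch: end the current plan server, start the new one
    deliberative_action(switch(P), Current, [end(plan(Current)), start(plan(P))]).

    % ?- intend(store(dirt(2)), A).                       % A = switch(work)
    % ?- retract(dirt(2)), intend(suck(2), A).            % A = switch(home)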
4.2 Introspective agents
An introspective agent is defined as an agent that can
reflect on (i.e., remember and then use) its past
deliberations. We propose to implement this process as a
meta-deliberation i.e., to consider introspection as yet
another communicating process introspect that, similarly
to act, will communicate with servers driven by
deliberate. Whereas in the basic deliberation the servers
plan(p) were controlled using deliberative plans and
actions, the servers for meta-deliberation will be direct
reflections of the successive deliberations i.e., will be
implemented as detached threads reflect(p), each of which
will reflect a past deliberative action p. Furthermore, new
introspective plans will allow for the deduction of
introspective actions e.g., for simply reporting
remembered past actions. For example, having the agent
do this reporting when idle i.e., after having backed home,
might be defined as follows
∀P in(0) → think(P, report(P)).
We define this extension as
dialog(deliberate, [X,P,A],
    [ask(X,P),
     ((intend(P,A)
       | [execute(A), start(reflect(A))]))])

dialog(reflect(P), [],
    [tell(introspect,P)])

dialog(introspect, [P,A],
    [ask(reflect(P),P),
     ((think(P,A)
       | [execute(A)]))])

Fig. 7: Dialogs for defining an introspective agent
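The introspective step itself can be sketched in a few Prolog clauses, with the detached reflect threads represented simply as facts holding past deliberative actions and introspect_step/1 as a hypothetical helper:

    in(0).                                      % the agent is idle at home

    reflect(switch(work)).                      % contents of the reflective servers
    reflect(switch(home)).
    reflect(end(plan(home))).

    think(P, report(P)) :- in(0).               % ∀P in(0) → think(P, report(P))

    introspect_step(A) :- reflect(P), think(P, A).

    % ?- forall(introspect_step(A), (write(A), nl)).
    % report(switch(work))
    % report(switch(home))
    % report(end(plan(home)))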
Threads being continuously resumed, this introspection
process, unless it is reset by ending its reflective servers
reflect(P), will loop on past actions. As a possible way to
control introspection, one might consider defining a
meta-meta-level deliberation by simply letting introspect tell in
turn its introspective actions to deliberate as follows
dialog(introspect, [P,A],
    [ask(reflect(P),P),
     ((think(P,A)
       | [execute(A), tell(deliberate,A)]))])
Fig. 8: Closing introspection and deliberation
4.3 Proto-conscious agents
Having introspect tell in turn its action to deliberate
amounts to an indirect recursive call closing the two
processes i.e., allowing for meta-processes, meta-meta-processes, and so on. Having the agent introspect on his
introspection, and then as a result stop introspecting, can
be considered as equivalent to saying: “the agent will stop
introspecting as soon as he gets conscious of his
introspection”. In order to account for this hypothetical
form of proto-consciousness, let us introduce a last
consciousness process defined as follows:
dialog(consciousness, [P,A],
    [((conscious(P), realize(P,A)
       | [execute(A), tell(deliberate,A)]))])
Fig. 9: Dialog for defining proto-consciousness
Similarly to sense, this process does not receive its data
through communication, but via a predicate conscious.
This truly reflects the non-deliberative origins of
consciousness, which like sensing seems to have deep
physiological correlates [1]. Translated into our layered
model, this means having to step down to some lower
level. As mentioned at the end of section 2.3, tell/ask
messages can be traced across layer boundaries using
ack(s,tell(r,a)) flags that are raised by the virtual
machine, where s and r are the sender and receiver
threads, respectively. The hypothetical source of
proto-consciousness we just identified (i.e., having introspect
tell its actions to deliberate) leads us to define
∀A ack(introspect, tell(deliberate,A)) → conscious(A),
thus taking into account the physiological origins (i.e., a
physical signal) as well as the mechanism (i.e., the
coupling of two mental processes) of this consciousness.
As an example of associated consciousness plans, let us
assume that the agent should end his reflective servers and
thus reset his introspection when he gets conscious of
having no more plans for act. Towards this end, let us
consider the following extension of his deliberative plan
that will allow him first to drop his last plan:
in(0) → ∀Y intend(back(Y), end(plan(home))).
When the agent then reports this action through
introspection, he will become conscious of it, reset his
introspection and report in turn this last action by using
∀A realize(report(end(plan(home))),
           (end(reflect(A)), report(reset(introspect)))).
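This closure can be sketched as follows; the ack fact is a stand-in for the flag raised by the virtual machine when introspect tells deliberate its last introspective action:

    % flag raised by the lower layer (stand-in fact)
    ack(introspect, tell(deliberate, report(end(plan(home))))).

    % the consciousness rule defined above
    conscious(A) :- ack(introspect, tell(deliberate, A)).

    % the associated consciousness plan
    realize(report(end(plan(home))),
            (end(reflect(_)), report(reset(introspect)))).

    % ?- conscious(P), realize(P, A).
    % P = report(end(plan(home))),
    % A = (end(reflect(_)), report(reset(introspect)))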
Example: the vacuum cleaner robot
Let us suppose that the robot has just sensed and sucked a
dirty spot. After backing home, it should report
switch(work)
switch(home)
end(plan(home))
reset(introspect).
To effectively get this result, our Prolog implementation
must ensure that threads introspect and consciousness are
scheduled one after the other, in that order. This is
required in order to render the true parallelism of
physiological processes through the pseudo-parallelism of
our concurrent threads implementation.
5. Conclusion
A computational theory of thoughts relying on our layered
agent model is now summarized below. It is based on the
following grounding principle:
Thinking is a mental activity that occurs concurrently
with physical activity and follows the same reactive
pattern i.e., the deduction of mental actions from
various mental plans.
More precisely, a basic mental activity (e.g., plan
deliberation) is implemented as a communicating process
(e.g., deliberate) that is synchronized with concurrent
communicating processes representing physical activities
(e.g., sense and act). Similarly to physical actions, mental
actions (such as the activation and deactivation of plans)
are deduced from mental plans, such as “condition” →
intend(p,a), and give rise to plan server communicating
processes. Furthermore, mental actions are reflected into
concurrent reflective processes i.e., reflect(a). A non-basic
mental activity, such as introspection, is
implemented as yet another communicating process (i.e.,
introspect), which uses its own introspective plans and
actions and is coupled with deliberation through the
reflective processes. Finally, proto-forms of
consciousness can be achieved by closing deliberation and
introspection i.e., by indirect recursive calls to both
deliberate and introspect implemented by having process
consciousness dig across layer boundaries. Unlike
previous computational models of consciousness [2], this
last point explicitly reflects the physiological origins of
consciousness. Our model is also well in line with the idea
that humans are not directly conscious of their thoughts
but rely on intermediate representations and processes [7].
Consciousness comes in a variety of forms [1]. Roughly,
access consciousness refers to the direct availability of
mental content; monitoring (also called reflective)
consciousness means direct availability of the mental
activity itself; finally, phenomenal consciousness refers to
the qualitative nature of feelings or sensations, and selfconsciousness is the possibility to think about ourselves.
Our model allows for functionalities that are related to
both access and monitoring consciousness. It also relates
to a correlate of consciousness, namely the so-called
episodic memory [13] that comes along with it.
Deliberations we do perform consciously, or events that
hit our consciousness, get somehow memorized i.e., we
are able to remember them for some period of time
afterwards. The vacuum cleaner robot works in a similar
manner. This similarity does not allow us however to
argue that it acts consciously. The only claim that we
make is that it does enjoy an emergent property it shares
with a conscious mind.
Our layered model together with its implementation thus
provides us with an operative model of proto-forms of
both access and monitoring consciousness as well as of
episodic memory. Thinking machines encompassing
similar and extended functionalities might become
ordinary tools for the cognitive scientist of tomorrow.
Their performances in accomplishing various cognitive
tasks could be benchmarked against those of humans, and
the methods they implement hypothesized as operative
models and explanations for our own capabilities. This
methodology has already been adopted in experiments
conducted with integrated cognitive architectures like
ACT-R or Soar. These architectures have been used for
modeling higher level cognitive capabilities such as
adaptive communication [9] or episodic memory [10].
Because they manipulate low level concepts only (i.e.,
production rules), they do not bring about the kind of
insight that might be achieved with layered models. We
thus expect next generation cognitive architectures to be
based on higher level concepts e.g., the concurrent
communicating processes included in our model.
References
[1] Atkinson, A.P., Thomas, M.S.C. & Cleeremans, A., Consciousness: mapping the theoretical landscape, Trends in Cognitive Sciences, vol. 4, No 10, 2000.
[2] Baars, B.J., A Cognitive Theory of Consciousness, Cambridge University Press, 1988.
[3] Bonzon, P., An Abstract Machine for Classes of Communicating Agents Based on Deduction, in: J.J. Meyer and M. Tambe (eds), Intelligent Agents VIII, LNAI vol. 2333, Springer, 2002.
[4] Bonzon, P., Compiling Dynamic Agent Conversations, in: M. Jarke, J. Koehler & G. Lakemeyer (eds), Advances in Artificial Intelligence, LNAI vol. 2479, Springer, 2002.
[5] Brooks, R.A., Integrated Systems Based on Behaviour, SIGART Bulletin, vol. 2, No 4, 1991.
[6] Hindriks, K.V., de Boer, F.S., van der Hoek, W. & Meyer, J.J., Semantics of Communicating Agents Based on Deduction and Abduction, in: F. Dignum & M. Greaves (eds), Issues in Agent Communication, LNAI vol. 1916, Springer, 2000.
[7] Jackendoff, R., Consciousness and the Computational Mind, MIT Press, 1987.
[8] Kim, J., The Layered Model: Metaphysical Considerations, Philosophical Explorations, 5, 2002.
[9] Matessa, M. & Anderson, J.R., An ACT-R Model of Adaptive Communication, in: Proceedings of the Third International Conference on Cognitive Modeling, 2000.
[10] Nuxoll, A. & Laird, J., A Cognitive Model of Episodic Memory Integrated With a General Cognitive Architecture, in: Proceedings of the Fifth International Conference on Cognitive Modeling, 2004.
[11] Oppenheim, P. & Putnam, H., Unity of Science as a Working Hypothesis, Minnesota Studies in the Philosophy of Science, vol. 2, 1958.
[12] Searle, J.R., Speech Acts, Cambridge University Press, 1969.
[13] Tulving, E., Episodic vs Semantic Memory, in: F. Keil & R. Wilson (eds), The MIT Encyclopedia of Cognitive Sciences, MIT Press, 1999.
[14] Wooldridge, M., Intelligent Agents, in: G. Weiss (ed.), Multiagent Systems, MIT Press, 1999.