
Manufacturing mind
A feedback loop?
Anders Hedberg Magnusson
October 27, 2004
Contents
1 Introduction
2 The mismeasure of man's mind
3 A new, old idea
3.1 Norman's distributed intelligence
4 Making the connection
4.1 Oscar the Robot
5 Some feedback
5.1 Manufacturing Mind
5.2 A philosophical question
Chapter 1
Introduction
The search for a theory of consciousness is more than just a scientific discussion; it is the holy grail of science. And perhaps, like the mythical grail, it will never be found. It is nevertheless a worthy quest, and one that has sparked many advances in science and scientific theory. The most important of these are arguably the philosophies of dualism and functionalism, two diametrically opposed theories of the origin of consciousness. Dualism states that consciousness, "the mind", is separate from the body, that it is something wholly different from the substance our body is made of. Functionalism, on the other hand, sees consciousness as an effect of the very structure and composition of the human body. Such a theory also implies that consciousness is not something uniquely human, and it is perhaps this very idea that is the basis of the field of artificial intelligence. But there are those who oppose the view that an artificial consciousness is possible, even amongst AI researchers. The foremost argument against AI is that the human brain is unique in its structure and cannot be emulated in a computer, that its very composition is what makes consciousness possible. This, I will show, is not a valid argument. Another argument against an artificial consciousness is that we do not have the tools and understanding to make it possible. This I cannot argue against: we do not yet have what is needed to make a mind. There is, however, nothing that says we never will. I often get the impression that those who oppose strong AI have no really good arguments against it, and that those who believe in it have no real way of showing that it is possible. I have taken a strong interest in this field because I believe it is possible to construct a mind, and in this paper I intend to propose a working theory of how it might be done.
It is impossible to do research in the field of strong AI without getting involved in the philosophical side of the issue. One question that I think is very important to answer, as in all research, is why we want to do this. Why do we want to create an artificial mind? AI was originally conceived as a way of modelling the human mind; with a good model of our cognitive processes we could begin to understand even more of our own psychology. But making a conscious mind is bigger than a simple model for psychology. Bill Joy, for example, believes that a super-intelligent being of our own creation will make us obsolete (Joy, 2000). Other visions, like HAL in 2001 and Skynet in the Terminator films, paint a dystopian picture of what our future will be like if we construct artificial minds. My vision is not so bleak: I believe that if we construct artificial minds they will be much like ourselves, and I will show why in this paper.
Traditional AI has failed to live up to its lofty goals. Fifty years after Dartmouth we do not have a perfect model of the human mind, and we do not have intelligent machines helping us out in our daily lives. Machines are smarter than they were in the fifties, but they are still not even as intelligent as insects. I believe that this is a symptom of a flawed theory of mind, that the premise AI was founded on does not supply us with the tools we need to make a mind. But there are other theories, some of which I will discuss. And, arrogant as it may seem, I will present my own amalgam of these theories as a theory of how to make a mind.
Chapter 2
The mismeasure of man's mind
When Turing wrote "I propose to consider the question, 'Can machines think?'" (Turing, 1950) he was hardly the first to ask such a question. As early as the 17th century, René Descartes proposed that automata could imitate the body (Gardner, 1987). Turing was, however, the first to ask it from an, at the time, unique point of view. Turing was not a philosopher but a computer scientist; indeed, he was one of the first in that field. In a paper published in 1936 he proposed the idea of the universal computer (Turing, 1936), which has been the foundation for all modern computers. Turing's point of view was that of the programmer: he saw that a machine capable of some simple, basic operations was in fact capable of emulating all possible logical systems. Turing argued that if the human mind is a logical system, it follows that it is possible to emulate a human brain inside a computer. This is the very premise that cognitive science was started on. The idea started a revolution in psychology, philosophy and computer science. With the possibility of emulating a human mind, it suddenly became possible to understand that very mind; it perhaps even became possible to construct the "grand theory of mind" that had been evading psychologists for so long.
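
Turing's insight is easy to make concrete. The following is a minimal sketch of such a machine in Python; the rule table is my own toy example (it increments a binary number), not anything from Turing's paper, but it shows how a handful of primitive operations, read, write, move and change state, suffice to execute any table of rules.

```python
# A minimal sketch of Turing's idea: a machine with only a few primitive
# operations (read, write, move, switch state) can run any rule table.
# The rule table below is a hypothetical toy example, not from Turing (1936).

def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    """Run a rule table of the form (state, symbol) -> (write, move, next_state)."""
    tape = dict(enumerate(tape))          # sparse tape; blank cells read as '_'
    for _ in range(max_steps):
        symbol = tape.get(pos, "_")
        if (state, symbol) not in rules:  # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Increment a binary number written least-significant-bit first.
rules = {
    ("start", "0"): ("1", "R", "done"),   # flip the first 0 to 1 and stop
    ("start", "1"): ("0", "R", "start"),  # carry: 1 becomes 0, keep going
    ("start", "_"): ("1", "R", "done"),   # carry past the end of the number
}

print(run_turing_machine(rules, "111"))   # 7 + 1 = 8 -> '0001' (LSB first)
```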
Bold predictions were made, and the projects undertaken reflect that. The US military, for example, wanted a computer that could translate from Russian to English with little or no loss of meaning and context. The project failed, but advances were made; the General Problem Solver of Newell and Simon is an impressive example of what could be done with the hardware of that time. Other projects like SHRDLU were equally impressive. It seemed at the time that computer science was but a breakthrough away from making a mind. After all, these machines were performing seemingly intelligent acts and seemed to be reasoning much like it was thought that a human did. All of these systems, however, shared a similar problem: they performed very well in the domain they were constructed for, but either very poorly or not at all outside it. ELIZA, for example, is an excellent therapist as long as you do not ask any specific questions.
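
A toy in the spirit of ELIZA makes this brittleness easy to see. The sketch below is my own minimal illustration, not Weizenbaum's actual script: the program is nothing but keyword patterns mapped to canned templates, and it collapses the moment the input falls outside them.

```python
import re

# Keyword patterns mapped to canned templates, in the spirit of ELIZA.
# The patterns and templates are illustrative assumptions, not the original script.
PATTERNS = [
    (re.compile(r"\bi am (.+)", re.I),            "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I),          "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (mother|father)\b", re.I), "Tell me about your {0}."),
]

def respond(utterance):
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."            # fallback when nothing matches

print(respond("I am unhappy"))        # Why do you say you are unhappy?
print(respond("What is a mind?"))     # Please go on.  (outside its domain)
```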
This was disconcerting: the systems showed none of the flexibility of the human mind. It seemed that the model for emulating a mind was imperfect, or maybe even terminally flawed in some way. It also seemed like maybe the dualists had a point; constructing a mind perhaps required something that was not present in machines. Other avenues of research like neural nets and genetic programming were explored, and while they provided some very impressive results, they did not provide the breakthrough that was hoped for. These days not many scientists actually believe that a true artificial intelligence will ever be constructed inside a computer. Most research today is aimed at providing machines that work well with humans, not machines that work like humans. It would seem that AI has failed to provide an artificial intelligence.
Chapter 3
A new, old idea
Psychology, and cognitive science in particular, has long been dominated by the viewpoint that internal processes can be isolated and measured in some way, and that reasoning and consciousness are such internal processes. Because of this, almost all of the attempts at machine intelligence have been constructed inside machines; whatever interaction there has been with an environment has either been simulated or greatly restricted. But there have been other viewpoints. One such is the work of Lev Vygotsky in Russia during the 1920s. Vygotsky studied the early development of children and made some interesting observations and conclusions that have only now begun to be integrated into AI. He argued that the human mind could only be understood if all dimensions of it were studied, not just through introspection (Vygotsky, 1925/1979). Vygotsky thought that the cause of consciousness must be sought in some dimension other than the mind itself, and proposed that socially meaningful activities play this role. In his own words:

"We are aware of ourselves in that we are aware of others; and in an analogous manner, we are aware of others because in our relationship to ourselves we are the same as others in their relationship to us." (Vygotsky, 1925/1979, p. 29)
According to Vygotsky, intelligence develops through interactions with the environment in general and through interactions with other humans in particular. He believed that external interactions could become internalised in the child's mind:

"Every function in the child's development appears twice: first, on the social level, and later, on the individual level; first, between people (interpsychological), and then inside the child (intrapsychological). ... All the higher functions originate as actual relations between human individuals. ... The transformation of an interpersonal process into an intrapersonal one is the result of a long series of developmental events. ... The internalisation of socially rooted and historically developed activities is the distinguishing feature of human activity, the basis of the qualitative leap from animal to human psychology." (Vygotsky, 1934/1978, pp. 56–57, original emphasis)
One example presented to support this theory is the development of pointing in a child (Vygotsky, 1934/1978). In the beginning, he claimed, it is simply an incomplete grasping motion. When, however, a caretaker comes to help the child and brings it the object of its desire, the meaning of the motion changes, because the motion provoked a reaction not from the object but from the other person. The motion changes meaning from "grasping for" to "pointing at" and at the same time changes perspective. Instead of being something that the child does itself, it is now a socially meaningful communicative act. The child is of course not aware of this at the time, and it is only through repeated experience that it realises the shift in meaning. When it does, however, it begins addressing its pointing towards other people, thus indicating that the grasping movement has changed into the act of pointing (Vygotsky, 1934/1978, p. 56). This is important because it constitutes a beginning of self-awareness.
Vygotsky's views are very different from those of traditional AI, which has paid little attention to social, biological and developmental factors. There are, however, contemporary AI researchers who have integrated Vygotsky's thoughts and created the concept of "situated cognition" (Clancey, 1997; Clark, 1999). Situated cognition implies that any form of "true" intelligence requires a social situatedness, and that "an agent as situated is usually intended to mean that its behavior and cognitive processes first and foremost are the outcome of a close coupling between agent and environment" (Lindblom and Ziemke, 2003). This "New AI" has initially resulted in a great number of situated robots which can freely interact with their environment and can be said to be "physically situated". These robots, however, have no social or cultural situatedness, which it has been argued that humans have, and because of this there is increasing interest in taking social situatedness into account when researching AI. Dautenhahn et al. express it thus: "a socially situated agent acquires information about the social as well as the physical domain through its surrounding environment, and its interactions with the environment may include the physical as well as the social world" (Dautenhahn et al., 2002, p. 410).
3.1 Norman's distributed intelligence
Another thinker who has greatly influenced modern AI is Donald Norman. In his book Things that make us smart he proposes, amongst other things, that cognition is distributed. He is very critical of the "disembodied intelligence" that he feels has been the working theory of many of his colleagues in AI. This view isolates the mind to simplify the task of studying it, hoping all the while that understanding an isolated mind will eventually lead to understanding a real mind. His own view is instead that cognition is something we do together, and that the physical world is a prerequisite for cognition. In his own words:

"Humans operate within the physical world. We use the physical world and one another as sources of information, as reminders, and in general as extensions of our own knowledge and reasoning systems. People operate as a type of distributed intelligence, where much of our intelligent behavior results from the interaction of mental processes with the objects and constraints of the world and where much behavior takes place through a cooperative process with others." (Norman, 1993, p. 146)
One of the problems with the "disembodied intelligence" that Norman criticises is that it requires immense processing power, knowledge and planning capability to perform a meaningful task. This, he feels, is not needed in the physical world, where the world can take some of the burden of memory and computation off the thinking human (Norman, 1993). The human mind does not require complete knowledge of the world in order to function in it; the knowledge is there for us to use when we need it, whether in the form of a book, a computer or another person. Another interesting consequence of the physical world is that, as Norman puts it, "impossible things are impossible" (Norman, 1993). This means that a thing, a robot or a human, operating in the physical world is constrained and shaped by it. We do not need to calculate whether a ball will drop to the floor when we let go of it, because the world will take care of that for us. A computer simulation of the physical world is not similarly constrained. Norman even goes so far as to suggest that our mind really is not capable of handling the kind of calculations needed to simulate the real world (Norman, 1993). This, he implies, relieves us of a great computational burden: we do not need to perfectly understand the world, we just need to function in it. Many of Norman's ideas about cognition centre around the proposition that our mind only does what it needs to do, nothing more. The human mind does not calculate a breadth-first search; such a search would be too time-consuming. Instead the human mind works by approximation and in relation to the physical world. This is an important idea because it implies that for a mind like a human's to function, it must function within the physical world and in relation to it.
Chapter 4
Making the connection
Situated cognition gives us a context, a world if you will, for making a mind. It tells us that we need to make a machine that not only can interact with the world but also be a part of it, socially and culturally. It does not tell us how this machine would look on the inside, though. Because the old theories of AI have failed to provide such a machine, or a theory for constructing such a machine, we must look elsewhere. One very interesting theory was provided by Douglas Hofstadter, who suggested that a mind is a consequence of millions of very simple, basic operations (Hofstadter, 1999). The basic operations are not intelligent, but the sum of their functions is. This is called an emergent phenomenon, and it seems to be the basis of the human mind. Our mind is constructed of millions of simple neurones which seem to have no inherent intelligence but together generate a mind. Other emergent phenomena can be seen in flocks of birds, or even in evolution, where simple operations lead to complex results. Hofstadter refutes the argument that a machine based on basic units with simple rules cannot have a will of its own, as a human does, thus:
"[...] both machines and people are made of hardware which runs all by itself, according to the laws of physics. There is no need to rely on 'rules that permit you to apply the rules' because the lowest-level rules - those without any 'meta's in front - are embedded in the hardware, and they run without permission." (Hofstadter, 1999, p. 685)
This implies that an artificial system is no more rule-bound or rule-free than a human is. So even though it is not possible to say that a machine constructed to resemble a human brain has free will, it is certain that it has as much free will as we do. This gives us an underlying system of a mind, but we are still lacking a very important part. With a substrate (the emergent phenomena) and an environment (situated cognition), we still need a process. Hofstadter addresses this as well; his theory on the subject is that of recursive metarules (Hofstadter, 1999). At the very basic level of the human mind there are rules, and these rules govern other rules, which in turn govern other rules, in a hierarchy of rules. This does not differ greatly from the theories of traditional AI, but Hofstadter introduces the concept of "strange loops", which reach back from higher levels of the rule hierarchy and change the lower levels, while at the same time being influenced by those very same rules:
"My belief is that the explanations of 'emergent' phenomena in our brains - for instance, ideas, hopes, images, analogies, and finally consciousness and free will - are based on a kind of Strange Loop [...] In other words, a self-reinforcing 'resonance' between different levels [...] The self comes into being at the moment it has the power to reflect itself." (Hofstadter, 1999, p. 709)
Such an idea of recursive feedback gives us a theory of mind that, while it does not explain exactly how a mind is made, gives us the possibility of constructing one. Hofstadter does not give a definitive explanation of the origin of mind, but instead tells us that if we could make a machine that resembles us on a very basic level, it would work like us (Hofstadter, 1999). He asks himself a question in the book and answers it himself:

"Question: Will there be a 'heart' to an AI program, or will it simply consist of 'senseless loops and sequences of trivial operations' (in the words of Marvin Minsky)? Speculation: If we could see all the way to the bottom, as we can a shallow pond, we would surely see only 'senseless loops and sequences of trivial operations' - and we would surely not see any 'heart'. [...] the pond of an AI program will turn out to be so deep and murky that we won't be able to peer all the way to the bottom. [...] When we create a program that passes the Turing test, we will see a 'heart' even though we know it's not there." (Hofstadter, 1999, p. 679)
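
Emergence of the kind described above is easy to make tangible in code. The following is a standard boids-style flocking sketch in the spirit of Reynolds' three rules; it is my own illustration with arbitrary weights, not anything from Hofstadter. No rule mentions a flock, yet flock-like behaviour emerges from the local rules.

```python
import random

# Each bird follows three local rules: move toward the local centre of mass
# (cohesion), match neighbours' velocity (alignment), avoid crowding
# (separation). All coefficients are arbitrary illustrative choices.

def step(birds, radius=10.0):
    new_birds = []
    for x, y, vx, vy in birds:
        neighbours = [b for b in birds
                      if 0 < (b[0] - x) ** 2 + (b[1] - y) ** 2 < radius ** 2]
        if neighbours:
            n = len(neighbours)
            cx = sum(b[0] for b in neighbours) / n   # local centre of mass
            cy = sum(b[1] for b in neighbours) / n
            avx = sum(b[2] for b in neighbours) / n  # average velocity
            avy = sum(b[3] for b in neighbours) / n
            vx += 0.01 * (cx - x) + 0.05 * (avx - vx)  # cohesion + alignment
            vy += 0.01 * (cy - y) + 0.05 * (avy - vy)
            for bx, by, _, _ in neighbours:            # separation
                if (bx - x) ** 2 + (by - y) ** 2 < 4.0:
                    vx -= 0.02 * (bx - x)
                    vy -= 0.02 * (by - y)
        new_birds.append((x + vx, y + vy, vx, vy))
    return new_birds

birds = [(random.uniform(0, 50), random.uniform(0, 50),
          random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(100):
    birds = step(birds)
# After some steps the velocities align, although no bird "knows" the flock.
```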
4.1 Oscar the Robot
One scientist who has tried to design a system similar in some ways to Hofstadter's ideas is John L. Pollock. He posits a hypothetical robot called Oscar (Pollock, 1989). The first version, Oscar I, is a very simple robot. It has a sensor that detects danger, and it can avoid the danger it senses. It also has a function for planning and reasoning about its situation. This gives Oscar I the ability to avoid danger when directly exposed to it; it cannot, however, predict danger and thus stay away from situations where it might be in danger. In order to predict danger, it must not only register when it is in danger, it must also be aware of it. The next version, Oscar II, has a danger sensor-sensor; this sensor tells Oscar when its danger sensor is activated, giving Oscar II a feeling of danger. This gives Oscar II the ability to connect the feeling of being in danger with the context of the situation. The danger sensor might be called an external sensor, and the sensor-sensor an internal sensor. Oscar II is still pretty stupid, because it takes everything at face value: when its sensor goes off, it accepts it as a fact that it is in danger. The next version, Oscar III, gets a sensor which senses when its sensor-sensor is activated; this gives it the ability to distinguish between "false" and "true" cases of danger in the context of the world surrounding it. In this way Oscar achieves a form of consciousness, in that it is aware of itself in some ways, and even of the fact that it is fallible. The realisation that its sensors are fallible is very important, because it means that other methods of avoiding danger, such as planning, are more efficient.
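
The Oscar progression lends itself to a small sketch. The classes and method names below are my own, not Pollock's; the point is only the layering: an external sensor, an internal sensor that senses that sensor, and a third level that can doubt the second.

```python
class OscarI:
    """Reacts to danger but has no awareness of its own reaction."""
    def danger_sensor(self, world):
        return world.get("danger", False)

    def act(self, world):
        return "flee" if self.danger_sensor(world) else "carry on"

class OscarII(OscarI):
    """Adds an internal sensor: a 'feeling' of danger that can be
    associated with the surrounding context."""
    def sensor_sensor(self, world):
        return self.danger_sensor(world)   # senses the sensor's activation

    def act(self, world):
        if self.sensor_sensor(world):
            return ("flee", world.get("context"))  # feeling tied to context
        return ("carry on", None)

class OscarIII(OscarII):
    """Adds a third level that senses the sensor-sensor and so can judge
    whether the feeling of danger is trustworthy, i.e. that it is fallible."""
    def __init__(self):
        self.false_alarm_contexts = {"shadow"}     # learned from experience

    def act(self, world):
        feeling, context = OscarII.act(self, world)
        if feeling == "flee" and context in self.false_alarm_contexts:
            return ("carry on", context)           # overrides its own sensation
        return (feeling, context)

print(OscarIII().act({"danger": True, "context": "shadow"}))  # ('carry on', 'shadow')
```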
Chapter 5
Some feedback
I started this paper by making some bold statements, and it is time to draw the various threads together in order to construct a theory and answer some questions. As I have tried to show, traditional AI will not bring artificial intelligence any closer to reality; instead we must look at alternative theories and make connections across disciplines. Vygotsky's theory tells us that social interaction is essential for human development, that the feedback generated by the caretakers of a child is needed for the internalisation of external actions and events, and that this is how we begin to become self-aware. Norman's thoughts on distributed cognition show that the social and physical environment is not only important but essential to our higher cognitive functions: instead of wasting valuable energy on memory and processing power in the brain, we use the world as a tool. Because of this, trying to create a mind like ours without a social and physical context seems extremely difficult. Hofstadter's "Strange Loops" give an explanation of how our mind is constructed, physically. They also solve a problem of traditional AI, that of writing the rules, and rules being based on rules. The explanation is that the only rules that are immutable are the laws of physics, and that these govern the creation of other rules. Because the laws of physics do not change and are not different for different kinds of material, the same must apply to a machine: the basic building blocks of human minds follow the same rules as everything else in the universe.

The second part of Hofstadter's theory is the feedback loop of tangled hierarchies, rules that reach back and change the rules that shape them. This is mirrored in Pollock's work, where Oscar becomes self-aware when it begins to sense that it is sensing, a feedback loop, and, more importantly, begins to change what it senses. Instead of simply relying on its sensations, it changes their meaning according to their context. This constitutes a "Strange Loop".
These theories share an important core: they are based around the concept of feedback. Although they come from incredibly varied domains and have been shaped by scientists from wildly varying fields, they have a lot in common. The feedback loop is what ties them together, and it gives us a clue that they may be describing the same process, but on different levels. Vygotsky's feedback is external; it requires someone else to give an action meaning. Hofstadter's loops, on the other hand, are very much internal; they are functions performing acts on other functions in the mind. But these levels overlap and interact: in order for the child in Vygotsky's example to change its behaviour based upon the feedback given to it, it must change the rules in its mind as well as the rules governing those rules, thus taking us from Vygotsky to Hofstadter.
5.1 Manufacturing Mind
With these theories, it seems all the needed parts are available. What needs to be built is a machine that interacts with the world like a human, that has a substrate similar to ours, or a simulation of one, and that has introspective sensors. Is it really that simple? No, of course not. Humans seem to have some built-in functions, such as facial recognition and a predisposition for learning language, and humans look like humans. This might not seem like a big thing, but it might be more important than previously thought: a human might find it hard to interact with a machine like a human if it did not at least resemble a human. Also, as Norman would remind us, our physical world has been adapted to suit human needs. A machine of different proportions, and with different ways of interacting with the world, might not be able to use the feedback provided by its caretakers. The problems are numerous and perhaps even insurmountable; it might not be possible even if the theory is valid. But I believe that experiments should be made, beginning with machines like Kismet that resemble humans. Give such a machine a simple neural network and the ability to sense its own sensations, and then let people interact with it, as sketched below. I do not think that we will ever learn the secrets of our own mind if we are not brave enough to give our models of ourselves some freedom. Perhaps this approach will never produce a full-grown human-like mind, but it might tell us a lot about the development of children.
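
As a first approximation of such an experiment, here is a deliberately small sketch. Every number and name in it is my own assumption, not a design: a tiny network whose previous output is fed back in as an introspective input, shaped only by the reward a caretaker gives during interaction.

```python
import random

# Three external senses plus one introspective input: the agent's own
# previous output, so that it "senses its sensing". All sizes, rewards
# and the learning rule are illustrative assumptions.
N_EXTERNAL, N_INTERNAL = 3, 1

def act(weights, senses, last_output):
    inputs = senses + [last_output]     # introspection channel appended
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 if total > 0 else 0.0

def caretaker_reward(senses, output):
    # Stand-in for the human in the loop: here, "approval" when the agent
    # echoes the first sense channel. A real experiment would put a person
    # here, as with Kismet-like machines.
    return 1.0 if output == senses[0] else -1.0

weights = [random.uniform(-1, 1) for _ in range(N_EXTERNAL + N_INTERNAL)]
last_output = 0.0
for _ in range(1000):
    senses = [float(random.randint(0, 1)) for _ in range(N_EXTERNAL)]
    output = act(weights, senses, last_output)
    reward = caretaker_reward(senses, output)
    # Crude reinforcement: nudge weights toward rewarded behaviour and
    # away from punished behaviour.
    for i, x in enumerate(senses + [last_output]):
        weights[i] += 0.01 * reward * (x if output else -x)
    last_output = output
```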
5.2 A philosophical question
The human mind is an amazing phenomenon, unlike anything else in the known universe; the human brain is perhaps the most complicated mechanism there is. But even more importantly, if we create a mind based upon the research presented here, it will be something very similar to us. In order for a mind like ours to arise, it requires the world we live in. Reconstructing ourselves means that the reconstruction is us. A human mind need not be the only kind of mind there is, but it is the kind of mind we are trying to recreate. This is why I do not think that an artificial intelligence will become super-intelligent and suddenly decide that we are obsolete. We know of only one type of consciousness, our own, and any machine made to that specification will be like us. The ethical and moral dilemmas surrounding this are enormous, but not necessarily any bigger than those of creating a child. Every time a child is born, a new being with the substrate and environment needed to become sentient is created. A being that is not born in the traditional way, but with the same preconditions, should be treated in the same way. Whether its main component is carbon or silicon is not important. Creating life is a good thing.
Bibliography

W. J. Clancey. Situated Cognition: On human knowledge and computer representations. Cambridge University Press, New York, 1997.

A. Clark. Being There: Putting brain, body and world together again. The MIT Press, Cambridge, MA, 1999.

K. Dautenhahn, B. Ogden, and T. Quick. A framework for the study of socially embedded and interaction-aware robotic agents. Cognitive Systems Research, 3(3):397–428, 2002.

H. Gardner. The Mind's New Science. Basic Books, paperback edition, 1987.

D. Hofstadter. Gödel, Escher, Bach: an Eternal Golden Braid. Penguin Books Ltd, Middlesex, 20th-anniversary edition, 1999.

B. Joy. Why the future doesn't need us. Wired, 8, 2000.

J. Lindblom and T. Ziemke. Social situatedness of natural and artificial intelligence: Vygotsky and beyond. Cognitive Systems Research, 3(3):397–428, 2003.

D. Norman. Things That Make Us Smart. Perseus Books, Reading, Massachusetts, 1st edition, 1993.

J. L. Pollock. How to Build a Person. The MIT Press, Cambridge, Massachusetts, 1st edition, 1989.

A. M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42:230–265, 1936.

A. M. Turing. Computing machinery and intelligence. Mind, LIX:433–460, October 1950.

L. S. Vygotsky. Consciousness as a problem in the psychology of behavior. Soviet Psychology, 16(4):3–35, 1925/1979.

L. S. Vygotsky. Mind in Society: The development of higher psychological processes. Harvard University Press, Cambridge, Massachusetts, 1934/1978.