
Artificial Intelligence

Philosophy 1130G: Big Ideas
Prof. Robert DiSalle (rdisalle@uwo.ca)
Stevenson Hall 4142, 519-661-2111 x85763
!
!
Turing’s idea of artificial intelligence:
I propose to consider the question, “Can machines
think?” This should begin with definitions of the
meaning of the terms “machine” and “think”. The
definitions might be framed so as to reflect so far as
possible the normal use of the words, but this
attitude is dangerous. …Instead of attempting such a
definition I shall replace the question by another,
which is closely related to it and is expressed in
relatively unambiguous words.
Turing, “Computing Machinery and Intelligence” (1950)
The “imitation game”:
A = a man
B = a woman
C = an interrogator, who knows A and B only as X and Y,
and who gets to ask questions of A and B.
!
C’s object is to determine which is which: either X is A
and Y is B, or vice-versa.
A’s object is to make C misidentify A and B.
B’s object is to make C identify A and B correctly.
!
Turing’s new question: “What will happen when a
machine takes the part of A in this game? Will the
interrogator decide wrongly as often when the game
is played like this as he does when the game is played
between a man and a woman? These questions replace
our original, ‘Can machines think?’”
!
“We are not asking whether all digital computers
would do well in the game nor whether the
computers at present available would do well, but
whether there are imaginable computers which would
do well.”
Basic elements of a computer:
“Store”: a store of information, e.g. the human
computer’s memory or calculations on paper.
“Executive unit”: that which carries out the
operations in a calculation.
“Control”: that which constrains the computer to
carry out the instructions exactly.
!
“Discrete state machine”: A machine that can be in a
finite number of definitely distinct states, e.g. “On” or
“Off,” “Open” or “Closed”.
!
A simple Turing machine: A device capable of
reading, printing, and erasing symbols at defined
places on a strip of paper or tape.
This special property of digital computers, that they
can mimic any discrete state machine, is described by
saying that they are universal machines. The existence
of machines with this property has the important
consequence that, considerations of speed apart, it is
unnecessary to design new machines to do various
computing processes. They can all be done with one
digital computer, suitably programmed for each case. It
will be seen that as a consequence of this all digital
computers are in a sense equivalent.
!
(Turing, 1950)
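To make the “store / executive unit / control” breakdown and the idea of a universal machine concrete, here is a minimal sketch in Python (mine, not Turing’s or the lecture’s). The tape plays the role of the store, the transition table the executive unit, and the loop that looks up and applies transitions the control; the function name run_turing_machine and the example program are hypothetical.

    def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
        # Sparse tape: position -> symbol (the "store").
        tape = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape.get(head, blank)
            # Transition table (the "executive unit"):
            # (state, symbol) -> (new state, symbol to write, move "L"/"R")
            state, write, move = program[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Example program: flip every bit, halting at the first blank cell.
    flip_bits = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }
    print(run_turing_machine(flip_bits, "0110"))  # prints "1001_"

The same simulator runs any transition table it is handed, which is the point of the universality quote above: one suitably programmed machine can do the work of any discrete state machine.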
Objections to Turing’s account
!
The Theological Objection: Thinking is a
function of man’s immortal soul, so machines could
never think.
!
Reply: If theological arguments are allowed, it must be
argued that God could not give a soul to an
unthinking thing, or that he could not give our soul
the same machinery for thinking that a computer
uses. But there is no such argument. In any case,
theological arguments have generally hindered
science.
The “Head in the Sand” Objection: “The
consequences of machines thinking would be too
dreadful. Let us hope and believe that they cannot do
so.”
!
Reply: This is a feeling rather than a substantial
argument requiring refutation.
The Mathematical Objection: There are non-computable functions, and therefore there are limits
to the powers of discrete-state machines.
!
Reply: It is not proven that humans are capable of
computing the non-computable functions, either.
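A standard concrete example of a non-computable function, not given in the slides but the usual illustration, is the halting problem. The Python sketch below only states the argument: halts is an assumed, hypothetical decider, and the comments spell out why no correct implementation of it can exist.

    def halts(program, argument):
        # Hypothetical total decider, assumed for the sake of the argument;
        # the claim being refuted is that a correct version could be written.
        raise NotImplementedError

    def trouble(program):
        # Loop forever exactly when `program` is predicted to halt on itself.
        if halts(program, program):
            while True:
                pass
        return "halted"

    # If halts(trouble, trouble) returned True, trouble(trouble) would loop forever;
    # if it returned False, trouble(trouble) would halt. Either way the assumed
    # decider answers wrongly, so no such decider can exist.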
The Consciousness Objection: A
machine can never have consciousness, which
is a feature of human thought.
!
Reply: We don’t know that other people think,
since we can’t feel what their consciousness is
like. We only think that they think because they
pass the Turing test.
The Disability Argument: There are too
many things that human thought can do that
machines can’t do (e.g. self-reflection,
appreciation of humor, etc.)
!
Reply: We are not fully aware of the capacities of
machines or people. It is not hard to foresee
machines that are aware of their own states.
The Originality Objection: Machines don’t
have the capacity to originate anything, or to do
anything other than what they are told.
!
Reply: It is foreseeable that there will be computers
capable of learning. Moreover, it is not clear how
original humans are, since human creativity is always
manipulation of available ideas or images.
The Continuity Objection: The human
nervous system is continuous, unlike a digital
computer.
!
Reply: Digital computers can closely match the
behaviour of continuous machines (e.g. in
calculating irrational numbers).
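As an illustration of this reply (my example, not the lecture’s), a digital machine can approximate a continuous quantity such as the square root of 2 to any requested number of digits; the function name sqrt2 and the choice of Newton’s method are assumptions for the sketch.

    from decimal import Decimal, getcontext

    def sqrt2(digits=50):
        # Work at slightly higher precision than requested, as guard digits.
        getcontext().prec = digits + 10
        guess = Decimal(1)
        for _ in range(digits):
            # Newton iteration for x^2 = 2: next = (x + 2/x) / 2
            guess = (guess + Decimal(2) / guess) / 2
        return +guess  # unary plus rounds to the working precision

    print(sqrt2(50))  # 1.41421356237309504880168872420969807856967187537694...

Increasing the digits argument shrinks the gap between the discrete approximation and the continuous value as far as one likes, which is the sense in which a digital computer can closely match the behaviour of a continuous machine.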
The Informality Objection: Human behavior
is informal, not subject to general rules like the
behavior of a computer.
!
Reply: Human behaviour is more subject to laws
than we realize.
Leibniz on mind and machine:
!
One is obliged to admit that perception and what depends
upon it is inexplicable on mechanical principles, that is, by
figures and motions. In imagining that there is a machine
whose construction would enable it to think, to sense, and
to have perception, one could conceive it enlarged while
retaining the same proportions, so that one could enter
into it, just like into a windmill. Supposing this, one should,
when visiting within it, find only parts pushing one another,
and never anything by which to explain a perception. Thus it
is in the simple substance, and not in the composite or in
the machine, that one must look for perception.
More Leibniz on mind and machine:
!
Furthermore, by means of the soul or form, there is a true
unity which corresponds to what is called the I in us; such a
thing could not occur in artificial machines, nor in the
simple mass of matter, however organized it may be.
!
But in addition to the general principles which establish the
monads of which compound things are merely the results,
internal experience refutes the Epicurean [i.e. materialist]
doctrine. This experience is the consciousness which is in
us of this I which apperceives things which occur in the
body. This perception cannot be explained by figures and
movements.
Weak vs. Strong Artificial Intelligence:
!
Weak AI: The computer is a “powerful tool” for the
investigation of the human mind, enabling us to
formulate and test hypotheses about cognitive
capacities.
!
Strong AI: The computer is a mind (at least
potentially). Computers are capable of cognitive
states (understanding, belief).
!
Computer programs are not merely tools for the
explanation of human thought; they are the
explanation.
Philosophical questions about the
computational model of mind:
!
Is the implementation of a computer program
equivalent to a mind?
!
Does satisfying the Turing test suffice to establish
understanding of a human language?
!
Can any purely syntactical procedure reproduce the
language ability of a human being?
!
(Syntax: Structural rules of language
Semantics: Association of language with meaning)
!
The “Chinese Room” argument: A device that executes the
syntactical rules of a language does not necessarily understand
the language.
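A toy sketch (mine, not Searle’s) makes the point concrete: the program below follows a purely syntactic rule book, mapping input strings to output strings, and gives fluent-looking answers to the two questions it covers without any access to what the symbols mean. The rule book contents are invented for the example.

    # The rule book: shapes in, shapes out, no meanings anywhere.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
    }

    def chinese_room(note_passed_in):
        # Pure symbol manipulation: look the string up, copy the reply out.
        return RULE_BOOK.get(note_passed_in, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗？"))  # fluent-looking reply, zero understanding

The dictionary plays the role of Searle’s rule book and the function the person in the room: the syntax is handled perfectly, and the semantics never enters into it.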
Replies to the Chinese Room argument:
!
The systems reply: Understanding is a property
of a system, not of any individual part.
!
Defense: It is absurd to think that if one human
doesn’t understand Chinese, the conjunction of
that human and a bunch of slips of paper does. One
can imagine a human internalizing all the parts of
the system (by memory) without getting any closer
to understanding the language.
!
“Mentality” is a basic fact that the theory of mind
must understand; a theory that attributes mentality
to a thermostat is ipso facto refuted.
The robot reply: What a computer program can’t
understand, a robotic body connected to the world
with sensory and motor systems can.
!
Defense:
!
Even with sensory inputs, the computer directing
the robot’s actions according to those inputs
has no understanding of content, and therefore no real
intentional states.
!
!
The brain simulator reply: A computer that mimics
all the connections of the human brain (all of the
brain’s formal structure) can understand what a
human mind understands.
!
Defense:
!
This goes against the basic idea of strong AI: that
the computational character of thinking is
independent of the material implementation of it.
But if a computer mimics only the formal
connections, it will have the same lack of
intentionality that the Chinese room has.
The combination reply: A robot with all the
capacities of the previous objections would
understand what a mind understands.
!
Defense: Of course if a robot appeared to
duplicate a wide range of human behavior, we
would be tempted to think that it had
intentional states. But if we knew it was only
following formal instructions, then we would
know that this appearance is false.
The other minds reply: How do we know that
other people are different from Chinese
rooms?
!
Defense:
!
Cognitive science has to assume the
knowability of the mental just as physics has to
assume the knowability of the physical. “The
question isn’t how I know that other people
have mental states; the question is what I am
attributing to them when I attribute mental
states to them.”
!
The “many mansions” reply: If present
computers are incapable of real
understanding, future computers won’t be.
!
Defense:
!
There’s no a priori reason why one couldn’t
give genuine mental states to a machine, if
one could construct the right machine. But
this trivializes the question: an AI machine is
whatever satisfies the requirements for
intentionality. What we can’t do is give
mental states to a machine whose cognitive
operations are defined in a purely formal,
computational way.
Dennett: Did HAL commit murder?
!
Deep Blue is an intentional system, with beliefs and desires about
its activities and predicaments on the chess board, but in order to
expand its horizons to the wider world of which chess is a
relatively trivial part, it would have to be given vastly richer
sources of "perceptual" input--and the means of coping with this
barrage in real time. Time pressure is of course already a familiar
feature of Deep Blue's world. As it hustles through the multidimensional search tree of chess, it has to "keep one eye" on the
clock, but the problems of optimizing its use of time would
increase by orders of magnitude when it had to juggle all these
new concurrent projects (of simple perception and self-maintenance in the world, to say nothing of more devious
schemes and opportunities).
Dennett: Did HAL commit murder?
!
For this hugely expanded task of resource management, it would need
extra layers of control--above and below its chess playing software.
Below, it would need to be "innately" equipped with a set of rigid
traffic control policies embedded in its underlying operating system,
just to keep its perceptuo-locomotor projects in basic coordination.
Above, it would have to be able to pay more attention to features of
its own expanded resources, always on the lookout for inefficient
habits of thought, strange loops (Hofstadter, 1979), obsessive ruts,
oversights, and dead-ends. It would have to become a higher-order
intentional system, in other words, capable of framing beliefs about its
own beliefs, desires about its desires, beliefs about its fears about its
thoughts about its hopes, . . .
Higher-order intentionality is a necessary condition for moral
responsibility.