In a now-famous research paper published in 1950, Computing Machinery and
Intelligence, the mathematician Alan Turing, whose theoretical work laid the
foundations of the modern computer, asked whether a suitably programmed
computer could think like a human.
An academic discipline, Artificial Intelligence (AI), came into being to
answer that question.
This lecture outlines the history of AI, its current state, and future
prospects. It is best read after reviewing Lecture 8: Language and
Computation.
The discussion is in three main parts:
• The aim of AI research
• History of AI research
• Issues in AI
1. The Aim of AI research
As its name indicates, the aim of AI is to design and physically implement
machines whose intelligence is indistinguishable from that of biological
humans.
When setting out to do something, it's advisable to be clear about what that
something is.
AI sets out to design and implement intelligent machines, so it's
reasonable at the outset to be clear about what intelligence is.
1. The Aim of AI research
We all have commonsense intuitions about intelligence based on
experience: that most people are roughly equally intelligent, that some
people are more intelligent than most, and that some are less intelligent or
even stupid.
When asked to say what we mean by 'intelligence', though, we generally
flounder; there's definitely something called 'intelligence' and some people
have more of it than others, but we can't say precisely what it is.
1. The Aim of AI research
After much philosophical and psychological debate over many centuries,
moreover, there's no agreed definition of 'intelligence'.
In 1994 a group of 52 academics involved in intelligence-related research
published a statement on what they considered it to be, and it is quoted
here both because it represents a consensus of more or less current
academic opinion on the subject, and because it shows how imprecise that
consensus is even among professionals.
Intelligence, according to the 52, is
...a very general mental capability that, among other things, involves the ability to reason,
plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn
from experience. It is not merely book learning, a narrow academic skill, or test-taking
smarts. Rather, it reflects a broader and deeper capability for comprehending our
surroundings—"catching on," "making sense" of things, or "figuring out" what to do.
1. The Aim of AI research
IQ tests which purport to measure this elusive human characteristic exist,
and these are often criticized for endowing an ill-defined concept with a
spurious scientific legitimacy.
Special-purpose tests focused on specific human abilities such as
mathematics or verbal articulacy are better because the things being
measured are well-defined, and it turns out that there is a positive
correlation between such special-purpose tests and the much-criticized
tests of general intelligence, which lends some support to our intuitions
about intelligence.
In general, though, 'intelligence' remains an ill-defined concept.
1. The Aim of AI research
A less controversial and more productive way of understanding what AI
sets out to do is to replace 'intelligence' with 'cognition' in its title, that is,
to call the discipline 'Artificial Cognition' (AC).
Needless to say, there's little prospect of that happening because
'Artificial Intelligence' is historically well entrenched.
Why would that help? 'Cognition' is a general term for the collection of
human mental capabilities such as sensory interpretation of the world
(by, for example, vision and audition), memory, reasoning, learning, and
language.
1. The Aim of AI research
These aspects of mind are readily definable: we know and can say
explicitly what vision is, we know and can say explicitly what language
is, and so on.
Artificial Intelligence, or rather Artificial Cognition, would then be the
discipline whose remit is to design and implement machines which
emulate human cognitive capabilities.
In practice, that's exactly what AI does.
2. History of AI research
People have always been fascinated by the possibility of creating
intelligent machines, as the 'History of Artificial Intelligence Research' link
at the end of the lecture shows.
In the past this fascination has been articulated in the form of myths,
fiction, philosophical speculation, and the construction of automata, that is,
mechanical models of creatures which, in a very limited way, emulated
their living counterparts.
One example is the silver swan at Bowes Museum in Barnard Castle,
County Durham, which can be seen at this link.
2. History of AI research
AI research in the modern scientific sense begins with Alan Turing's
abstract model of the computer (the Turing machine) and his proposal for a
program of research into whether or not this type of computer might be
programmed to emulate human intelligence.
This proposal set the agenda for the early development of AI. Specifically:
• The artificially intelligent machines that AI aimed to design and
implement would be Turing computers, and
• The design for an artificially intelligent Turing computer would be an
algorithm, that is, a computer program.
2. History of AI research
AI became a scientific discipline at a conference held at Dartmouth
College, USA, in 1956, and many of those attending, such as Marvin
Minsky and John McCarthy, became important figures in the field.
The possibilities discussed there caused great excitement, both in the
scientific community and, not long thereafter, in the general public,
which was gradually becoming aware of what at the time were thought
of as 'thinking machines', that is, computers.
Ground-breaking research was done in areas such as reasoning and
natural language, and as work progressed optimistic claims were made
for AI.
2. History of AI research
Wikipedia has collected a few examples of such claims made by prominent
scientists in the field:
• 1958, H. A. Simon and Allen Newell: "Within ten years a digital computer
will be the world's chess champion" and "Within ten years a digital
computer will discover and prove an important new mathematical theorem."
• 1965, H. A. Simon: "Machines will be capable, within twenty years, of
doing any work a man can do."
• 1967, Marvin Minsky: "Within a generation ... the problem of creating
'artificial intelligence' will substantially be solved."
• 1970, Marvin Minsky: "In from three to eight years we will have a machine
with the general intelligence of an average human being."
2. History of AI research
The popular media featured speculations on topics such as whether
switching off an artificially intelligent computer would be murder, and robots
with human-level cognitive abilities became the staple of the science fiction
literary genre and of movies and television; a few of the better films are:
Film | Description | Clip
Forbidden Planet | http://en.wikipedia.org/wiki/Forbidden_Planet | http://www.youtube.com/watch?v=ukOil-Lo92Y
2001: A Space Odyssey | http://en.wikipedia.org/wiki/2001:_A_Space_Odyssey | http://www.youtube.com/watch?v=HwBmPiOmEGQ&feature=related
Dark Star | http://en.wikipedia.org/wiki/Dark_Star_%28film%29 | http://www.youtube.com/watch?v=qjGRySVyTDk
Star Trek | http://en.wikipedia.org/wiki/Star_trek | http://www.youtube.com/watch?v=z56m_roQFzc&feature=related
Blade Runner | http://en.wikipedia.org/wiki/Blade_runner | http://www.youtube.com/watch?v=YPuRvOLWsWM&feature=related
2. History of AI research
Alas, it was not to be.
Despite substantial governmental and commercial research funding,
progress was slow and ambitious projects failed to live up to expectations.
By the end of the 1970s the nature of the problem of emulating human
cognitive capabilities had become much clearer, and most AI researchers
had realized that initial expectations had been hopelessly over-optimistic.
Emulating human cognitive capability had turned out to be a very difficult
problem indeed.
Because of poor existing results and daunting prospects for future
success, research funding was scaled back, and, from about 1980, the AI
community looked for alternative ways to proceed.
2. History of AI research
The second phase of AI research since about 1980 is characterized by the
following developments:
2. History of AI research
i. The field has fragmented.
AI began as a relatively small, coherent research community focused on a
specific goal, and for two decades it remained pretty much that way.
Post-c.1980 there has been increasing specialization in the sense that
researchers now tend to work on specific cognitive functions such as
vision, speech, and language.
There have been few if any recent attempts to build machines with general
human-level cognition.
The aim, rather, is to better understand individual cognitive functions and
to design and implement computational systems which emulate them,
leaving integration into a single system as a future goal.
2. History of AI research
ii. There has been an increasing emphasis on learning as an alternative to
explicit design.
In the first phase of AI the emphasis was on explicit design of cognitive
algorithms: the designer specified the algorithms and the data structures
that they manipulated and implemented them on computers by writing
computer programs.
Some researchers felt that the lack of success using this approach was
due to the difficulty of the problem: understanding cognitive functions and
designing algorithms to implement them computationally was simply too
hard for humans to do explicitly.
Instead, they began to develop algorithms which would allow computers to
learn human cognitive behaviour by interaction with the real-world
environment.
The inspiration for this came from two directions.
2. History of AI research
• Neuroscience
The brain is the organ that implements our cognitive functions: sensory
processing, reasoning, memory, language, and that sense of self-awareness
which we call consciousness.
By analogy with a computer, if cognition is human software, then the brain is its
hardware.
An earlier lecture described the structure of the brain and how that structure
changes in response to sensory input via learning.
Taking brain structure and the brain's learning mechanism as a guide, AI
researchers have developed neurally-inspired computational models called artificial
neural networks (ANN) that learn cognitive functions from examples presented to
them as input.
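To make the idea concrete, here is a minimal sketch, not drawn from the lecture, of a single neurally inspired unit that learns from repeated presentation of examples rather than from an explicitly programmed rule; the task (logical AND), the learning rate, and the use of NumPy are illustrative assumptions.

    # A single artificial neuron trained with the classic perceptron rule.
    # The task (logical AND) and the learning rate are illustrative choices.
    import numpy as np

    examples = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # input patterns
    targets  = np.array([0, 0, 0, 1])                      # desired outputs (AND)

    weights, bias, rate = np.zeros(2), 0.0, 1.0

    for _ in range(25):                        # repeated presentation of the examples
        for x, t in zip(examples, targets):
            y = int(weights @ x + bias > 0)    # the unit "fires" if its weighted sum exceeds 0
            weights += rate * (t - y) * x      # strengthen or weaken connections on error
            bias    += rate * (t - y)

    print([int(weights @ x + bias > 0) for x in examples])  # -> [0, 0, 0, 1]

After training, the unit's behaviour is fixed entirely by the learned weights; nothing in the code states the AND rule explicitly.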
2. History of AI research
• Neuroscience
A famous example is the past-tense learning model.
A series of (present tense / past tense) pairs like (jump / jumped), (kick /
kicked) and so on was presented to an ANN; after learning a sufficient
number of examples, the network supplied the correct past tense for
present-tense forms it had not previously seen.
In other words, it had learned a small part of the cognitive function of
language without needing to be explicitly programmed.
This idea has been extended to other aspects of language as well as to
other cognitive functions.
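The following sketch is not the original connectionist model, but it gives the flavour under simplifying assumptions: a small network (scikit-learn's MLPClassifier is assumed to be available) is shown (present tense / past tense) pairs, learns from the verb's final letter which ending rule each pair follows, and then applies what it has learned to verbs it has not seen. The verb list, features, and rule labels are all illustrative.

    # A toy illustration of learning past-tense formation from examples.
    from sklearn.neural_network import MLPClassifier

    train = [("jump", "jumped"), ("kick", "kicked"), ("walk", "walked"),
             ("play", "played"), ("call", "called"),
             ("love", "loved"), ("bake", "baked"), ("smile", "smiled"), ("move", "moved")]

    def features(verb):
        # Represent a verb by a one-hot encoding of its final letter.
        return [1.0 if verb[-1] == c else 0.0 for c in "abcdefghijklmnopqrstuvwxyz"]

    def rule(present, past):
        # Label each training pair by the ending rule that produced it.
        return "add_d" if past == present + "d" else "add_ed"

    X = [features(p) for p, _ in train]
    y = [rule(p, q) for p, q in train]

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    net.fit(X, y)

    for verb in ["talk", "race"]:               # present-tense forms not in the training set
        r = net.predict([features(verb)])[0]
        print(verb, "->", verb + ("d" if r == "add_d" else "ed"))   # typically: talked, raced

As in the past-tense model, the regularity is never written into the program; it is extracted from the examples.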
2. History of AI research
• Embodied cognition
Until the late 1980s cognitive science had been concerned pretty much
exclusively with the structure and mechanisms of the mind, as one would
expect.
By the late 1980s, however, it was increasingly argued that, to understand
cognition, one has to take account of human interaction with the real-world
environment to which the human responds, in which he or she acts, and
from which he or she learns.
This innovation is known as 'situated cognition' because it studies
cognition within its real-world context or situation, or 'embodied cognition'
because it studies cognition as the control mechanism for the body's
interaction with the environment.
3. Issues in AI
Many important issues have still to be resolved before AI can have a
realistic hope of achieving its goal, some theoretical and some technical,
and there is no way we can review all or even most of these here.
This section therefore focuses on what is regarded by many if not most AI
researchers as the most fundamental issue of all: meaning.
3. Issues in AI
Turing's 1950 paper Computing Machinery and Intelligence opens with the
words: 'I propose to consider the question, "Can machines think?" '.
He recognized that 'think' is an ill-defined word, like 'intelligence', and that
any argument for artificial intelligence based on it would generate
interminable discussion about what thinking is, so he proposed instead an
'imitation game'.
This game would be set in two rooms, A and B, separated by a wall without
windows or doors. In room A would be a human, and in B would be
either a computer or another human; the human in room A would know it
was one or the other, but not which at any given time.
The human in room A can communicate with the computer / human in B
by writing and receiving messages on a computer just as we communicate
using email or instant messaging today.
3. Issues in AI
The game consists of the human in A having a series of text-based
conversations with the computer / human in B; sometimes this would be
with the computer and sometimes with a human, but the human in A would
not be told which.
If the human in A cannot reliably tell when s/he is conversing with the
computer and when with the human in B, then, on Turing's view, 'if a
machine acts as intelligently as a human being, then it is as intelligent as
a human being'.
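As a minimal sketch of the protocol only, the game can be written out as follows; the canned replies and the random guessing strategy are purely illustrative assumptions, not a model of either party. An identification accuracy near chance is the condition under which, on this criterion, the machine passes.

    # A bare-bones rendering of the imitation game protocol.
    # Both reply functions are deliberately identical placeholders, so the
    # interrogator can do no better than guessing.
    import random

    def human_reply(message):
        return "I'd rather not say."

    def machine_reply(message):
        return "I'd rather not say."

    def run_trial():
        hidden = random.choice(["human", "machine"])      # who is in room B this time
        reply = human_reply if hidden == "human" else machine_reply
        for question in ["Do you enjoy poetry?", "What is 7 times 8?"]:
            _ = reply(question)                           # text-only exchange through the wall
        guess = random.choice(["human", "machine"])       # A cannot tell the replies apart
        return guess == hidden

    trials = 1000
    accuracy = sum(run_trial() for _ in range(trials)) / trials
    print(f"identification accuracy over {trials} trials: {accuracy:.2f}")  # near 0.5 (chance)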
3. Issues in AI
The essential question that the above Turing Test raises is this: Is a
computational simulation of human intelligence equivalent to biological
human intelligence?
That question has generated extensive philosophical discussion; for an
overview see the 'Philosophy of Artificial Intelligence' link at the end of the
lecture.
The discussion is sometimes enlightening, sometimes confusing, and
occasionally confused, but a highly influential answer has been and
continues to be that given by John Searle, who maintains that the
computational simulation of intelligence is definitely not equivalent to
biological human intelligence.
Searle's argument is based on the Chinese Room thought experiment,
which has already been discussed in an earlier lecture.
3. Issues in AI
Imagine an auditorium full of Chinese speakers, and on the stage a box
the size of a typical room in a house.
The box has only two openings, on opposite walls. One opening is a slot
into which slips of paper containing Mandarin Chinese text may be
inserted, and the other is a slot from which slips of paper containing the
Cantonese Chinese translation of the input slips emerge.
In other words, the box is a Mandarin-to-Cantonese translation device.
Members of the audience are invited to input slips in Mandarin and, when
the output slip emerges, to judge whether or not the translation is accurate.
3. Issues in AI
Let's say that, after many trials, the audience concludes that the
translations have been correct in every case.
One could then say that the device has simulated a human cognitive
function: the ability to translate from one natural language dialect to
another.
The room is clearly a version of the Turing Test.
3. Issues in AI
The question Searle asks, though, is whether the translation device is
intelligent in the same sense that a human translator is intelligent.
To answer this, he allows us to look inside the box.
It contains the necessary paper slips and a pen to produce the output slips,
a shelf on which sit a Mandarin-Cantonese dictionary and some books on
Chinese grammar, and John Searle himself.
When a slip arrives at the input slot, Searle takes it, and using the
grammar books and dictionary, writes the translation onto a slip and
outputs it.
3. Issues in AI
But here is the important point: Searle knows no Chinese.
All he does is follow rules for transforming one string of symbols into
another string of symbols.
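A minimal sketch of what such pure rule-following amounts to is given below; the rule table uses abstract placeholder tokens rather than real Mandarin or Cantonese, which is precisely the point: nothing in the procedure depends on what the symbols mean.

    # The "translator" does nothing but match an input symbol string against
    # a table of rules and emit the corresponding output symbol string.
    RULES = {
        ("S1", "S2"): ("T7", "T3"),
        ("S4",):      ("T1",),
        ("S1", "S5"): ("T7", "T9"),
    }

    def translate(input_slip):
        tokens = tuple(input_slip.split())
        return " ".join(RULES[tokens])      # pure lookup; no meanings are consulted

    print(translate("S1 S2"))               # -> "T7 T3"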
From this he concludes that a simulation of a human cognitive function is
not equivalent to the human cognitive function it is simulating, and that the
device is consequently not intelligent in the same sense that a human
translator is intelligent.
Why?
3. Issues in AI
The room is clearly a Turing Machine, a computer that moves physical
symbols around according to an algorithm which allows it to simulate a
human cognitive function.
But there is a crucial difference between the computer and the members of
the audience: the humans understand natural language symbol strings in
the sense that they attribute meaning to them, whereas the computer does
not.
Moreover, given that computers as Turing defined them have no capacity
for including meaning in their design, they can never be intelligent in the
sense that a human is intelligent.
3. Issues in AI
Searle's Chinese Room has generated a huge amount of discussion,
which is summarized by the 'Chinese Room' link at the end of the lecture.
The debate goes on, much of it concentrating on what exactly the elusive
human capacity for meaning might be, whether or not it is possible in
principle to incorporate it into the capability of computers, and, if so, how to
do it.
One approach, described here, is to address the problem of symbol
grounding introduced in Lecture 6: The Computational Theory of Mind.
3. Issues in AI
Recall that symbol grounding with reference to language involves
associating linguistic symbols, that is, words and word strings, with bodily
experience of the world; the association is what gives the linguistic
symbols their meaning.
At a simple level, the word 'cat' means the sum total of one's individual
experience of cats in the real world, and 'love' the emotional experience of
closeness to another human.
How can such experience of the world be incorporated into computational
AI systems?
3. Issues in AI
Section 2 above has already given the answer: provide sensory input
mechanisms as part of the computational architecture of AI systems, and
replace the Turing Machine architecture of traditional AI systems with an
artificial neural network architecture so that the system can learn from its
sensory input and thereby acquire the association between symbols and
their referents, that is, meaning.
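A heavily simplified sketch of the idea, under assumptions spelled out in the comments (sensory experience is reduced to tiny hand-made feature vectors, and 'grounding' to an averaged association between those vectors and word symbols), might look like this:

    # Symbol grounding reduced to its bare bones. The feature dimensions,
    # the episodes, and the two words are illustrative assumptions; a real
    # system would learn from cameras, microphones, and far richer input.
    import numpy as np

    # Illustrative sensory episodes: [furriness, purring, warmth-of-feeling]
    experiences = {
        "cat":  np.array([[0.9, 0.8, 0.2], [0.8, 0.9, 0.3]]),
        "love": np.array([[0.1, 0.0, 0.9], [0.2, 0.1, 0.8]]),
    }

    # Grounding: each word symbol is associated with the average pattern of
    # sensory experience that accompanied it.
    grounding = {word: eps.mean(axis=0) for word, eps in experiences.items()}

    def name_experience(sensory_vector):
        # Interpret a new experience by finding the closest grounded symbol.
        return min(grounding, key=lambda w: np.linalg.norm(grounding[w] - sensory_vector))

    print(name_experience(np.array([0.85, 0.75, 0.25])))   # -> "cat"

Here a word's 'meaning' is nothing more than the pattern of experience associated with it, which is the intuition behind the symbol grounding approach.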
This symbol grounding approach to AI is a complex topic, and is further
developed in one of the seminars associated with this lecture.
Where next?
Emergent behaviour in complex nonlinear systems:
http://www.youtube.com/watch?v=gdQgoNitl1g
http://www.youtube.com/watch?v=S5NRNG1r_jI
http://www.youtube.com/watch?v=Lk6QU94xAb8