
Strong vs. Weak AI
ECE 847:
Digital Image Processing
Stan Birchfield
Clemson University
The coming takeover
Common theme:
- Robots become intelligent
- Robots become independent
- Robots get out of control
- Robots must be subdued
Why all the fuss? Need we fear?
The central question
• This is not just for entertainment
• It has far-reaching implications
• Two camps:
– strong AI: There is no fundamental difference
between man and machine
– weak AI: Only people can think, machines cannot
• Central question:
Is there a fundamental difference between
man and machine, or is it only a difference in
computing power?
The question restated
• Stated another way,
– Can computers think?
• What does it mean to think?
In favor of “Strong AI”
• Strong AI argument #1: Look at what
machines can do
clean
(Roomba vacuum cleaner)
play soccer
(RoboCup)
They play music
trumpet
(Toyota’s trumpet-playing robot 2008)
organ
(Ichiro Kato’s WABOT-2
reads music and
plays the organ, 1984)
flute
(Atsuo Takanishi’s flute-playing robot)
conductor
(Honda’s ASIMO robot conducts the
Detroit Symphony Orchestra, 2008)
They even compose music
MySong: http://research.microsoft.com/~dan/mysong/
… and have emotions
Rity “sobot” (software robot)
(from Kim Jong-Hwan at the Korea Advanced
Institute of Science and Technology, KAIST)
In favor of “Strong AI”
• Strong AI argument #2: Look at what
people said machines would never do
Hubert Dreyfus, Berkeley philosopher:
No computer will ever beat me at chess
1967: Richard Greenblatt's MacHack
program beat him
Then Dreyfus:
Well, no computer will beat a nationally
ranked player
But it did
Then Dreyfus:
Well, no computer will beat a world
champion
1997: Deep Blue beat Garry Kasparov
Kasparov vs. Deep Blue
The clincher
• Many people are fond of saying,
“They will never make a machine to replace
the human mind --- it does many things
which no machine could ever do.”
• J. von Neumann gave a talk in Princeton (1948)
– Question from audience:
“But of course, a mere machine can't really think,
can it?”
– Answer:
“You insist that there is something a machine
cannot do. If you will tell me precisely what it is
that a machine cannot do, then I can always make
a machine which will do just that!”
[from E. T. Jaynes, Probability Theory: The Logic of Science]
“Weak AI” responses
• Hubert Dreyfus, Berkeley philosopher:
Nonformal aspects of thinking cannot be reduced to
mechanized processes
• John Searle, Berkeley philosopher:
Chinese room experiment – blindly translating is not
the same as thinking
• Thomas Ray, Oklahoma zoologist:
carbon medium and silicon medium are
fundamentally different
• Roger Penrose, British physicist and mathematician:
Consciousness arises from mysterious quantum
effects
• David Chalmers, UC Santa Cruz philosopher:
The basis of consciousness may be a mysterious new
type of property that has not yet been observed
One thing in common: All appeal to materialistic (even if mysterious) explanations
An alternative view
• Thesis:
– “Strong AI” is fundamentally wrong
• There is a fundamental difference
• Machines can never be equivalent to humans
in all respects
– “Weak AI” arguments are – well – weak
because they all assume materialism in
their foundation
– The dilemma is solved by recognizing the
role of the spirit (or soul)
The “Strong AI” model
[Diagram] Computer: input → deterministic algorithm (running on silicon) → output
[Diagram] Human: input → deterministic algorithm (running on carbon) → output
The proposed model
[Diagram] Computer: input → deterministic algorithm (running on silicon) → output
[Diagram] Human: input → immaterial spirit + deterministic algorithm (running on carbon) → output
Turing machine
• Recall the Turing machine:
• This simple abstract device (Universal Turing
machine) can simulate the behavior of any
digital computer – past, present, or future!
Benjamin Schumacher, http://physics.kenyon.edu/coolphys/thrmcmp/newcomp.htm
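To make the "simple abstract device" concrete, here is a minimal sketch of a Turing machine simulator in C. The machine itself (a table of two rules that appends a 1 to a unary number) is invented for illustration and is not from the slides.

/* Minimal Turing machine simulator (illustrative sketch, not from the slides).
 * The example machine appends one '1' to a unary number, showing how little
 * machinery the device needs: a tape, a head, a state, and a rule table. */
#include <stdio.h>
#include <string.h>

#define TAPE_LEN 16
#define BLANK    '_'

/* one transition: in `state` reading `read`, write `write`,
 * move the head by `move` (+1 right, -1 left), go to `next` */
struct rule { int state; char read, write; int move, next; };

int main(void)
{
    char tape[TAPE_LEN + 1];
    struct rule rules[] = {
        { 0, '1',   '1', +1, 0 },   /* scan right over the 1s      */
        { 0, BLANK, '1',  0, 1 },   /* write one more 1, then halt */
    };
    int state = 0, head = 0, halt = 1, nrules = 2, i;

    memset(tape, BLANK, TAPE_LEN);
    tape[TAPE_LEN] = '\0';
    memcpy(tape, "111", 3);         /* input: the unary number 3 */

    while (state != halt) {
        for (i = 0; i < nrules; i++) {
            if (rules[i].state == state && rules[i].read == tape[head]) {
                tape[head] = rules[i].write;
                head      += rules[i].move;
                state      = rules[i].next;
                break;
            }
        }
    }
    printf("tape after halting: %s\n", tape);   /* prints 1111____________ */
    return 0;
}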
The proposed model
[Diagram] Computer: deterministic algorithm (running on silicon)
[Diagram] Human: deterministic algorithm (running on carbon), plus a contingency mechanism and immaterial decision maker
Let’s zoom in
[Diagram] Computer: binary logic circuit (image: http://gs.fanshawec.ca/tlc/math270/images/2_7_Bi2.jpg)
[Diagram] Human: outcome 0 or 1 chosen by a contingency mechanism –
a decision made by the immaterial spirit affects the physical outcome
How can this model be tested?
[Diagram] input 0010001010101… → device → output 010111010001…
• Kolmogorov complexity of output string s
is the length of the shortest program that
outputs s
• Example: K( 22/7 ) < K(π)
• Define: Complexity of device is the maximum
Kolmogorov complexity that it can output,
when no input is given
• For Turing machines, K(output) ≤ K(input) + C(device)
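Kolmogorov complexity itself is uncomputable, but the size of a losslessly compressed string gives a crude upper bound. Below is a hedged sketch of the K(22/7) < K(π) intuition, assuming zlib is available (link with -lz); the 100-digit strings and buffer sizes are chosen only for illustration.

/* Sketch: K(s) is uncomputable, but the size of a losslessly compressed
 * copy of s is an upper-bound proxy.  The repeating digits of 22/7
 * squeeze down far more than the digits of pi. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

static unsigned long compressed_size(const char *s)
{
    unsigned char buf[1024];
    unsigned long len = sizeof(buf);
    compress(buf, &len, (const unsigned char *)s, strlen(s));
    return len;
}

int main(void)
{
    /* 100 decimal digits of each number */
    const char *pi = "314159265358979323846264338327950288419716939937510"
                     "5820974944592307816406286208998628034825342117067";
    char approx[101] = "3";                 /* 22/7 = 3.142857142857... */
    int i;
    for (i = 1; i < 100; i++)
        approx[i] = "142857"[(i - 1) % 6];
    approx[100] = '\0';

    printf("compressed(22/7 digits) = %lu bytes\n", compressed_size(approx));
    printf("compressed(pi digits)   = %lu bytes\n", compressed_size(pi));
    return 0;
}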
Conservation of information
[Diagram] input image → lossless compression algorithm (e.g., LZW) → output image
number of bits ( output ) < number of bits ( input )
complexity ( output ) = complexity ( input )
Process is reversible
(Note: This complexity is entropy, not Kolmogorov complexity)
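A minimal sketch of the reversibility claim, again assuming zlib (link with -lz): the compressed buffer holds fewer bits, yet decompression recovers the input exactly, so no information is lost. The ramp "image" is fabricated test data.

/* Sketch: lossless compression shrinks the byte count but is exactly
 * reversible, so the information content is preserved. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    unsigned char image[1024], packed[2048], restored[1024];
    unsigned long packed_len = sizeof(packed), restored_len = sizeof(restored);
    int i;

    /* fake "image": a smooth ramp, typical of easily compressible data */
    for (i = 0; i < 1024; i++)
        image[i] = (unsigned char)(i / 8);

    compress(packed, &packed_len, image, sizeof(image));
    uncompress(restored, &restored_len, packed, packed_len);

    printf("bits in : %lu\n", (unsigned long)(8 * sizeof(image)));
    printf("bits out: %lu\n", 8 * packed_len);
    printf("round trip identical: %s\n",
           memcmp(image, restored, sizeof(image)) == 0 ? "yes" : "no");
    return 0;
}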
Conservation of information
[Diagram] input image → lossy compression algorithm (e.g., JPEG) → output image
number of bits ( output ) < number of bits ( input )
complexity ( output ) < complexity ( input )
Process is NOT reversible
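JPEG itself requires a codec library, so this sketch stands in for its lossy stage with simple coarse quantization; the point carries over: fewer bits out, but the exact original values cannot be recovered.

/* Sketch: a lossy step modeled as coarse quantization (a stand-in for the
 * quantization stage inside a codec like JPEG).  The coded image needs
 * fewer bits, but decoding can only guess at the original values. */
#include <stdio.h>

int main(void)
{
    unsigned char pixel[8] = { 17, 30, 31, 90, 91, 200, 201, 255 };
    unsigned char coded[8], decoded[8];
    int i, lost = 0;

    for (i = 0; i < 8; i++) {
        coded[i]   = pixel[i] / 32;        /* 8 bits -> 3 bits per pixel */
        decoded[i] = coded[i] * 32 + 16;   /* best guess on decode       */
        if (decoded[i] != pixel[i])
            lost = 1;
    }

    printf("bits per pixel: 8 in, 3 out\n");
    printf("exactly reversible: %s\n", lost ? "no" : "yes");
    return 0;
}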
Conservation of information
[Diagram] input image → downsample → output image
number of bits ( output ) < number of bits ( input )
complexity ( output ) < complexity ( input )
Process is NOT reversible
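The same irreversibility shows up in downsampling, sketched here on a short 1-D signal with made-up values: two different inputs average to the same output, so the step cannot be undone.

/* Sketch: 2-to-1 downsampling by averaging.  Two different inputs map to
 * the same output, so the operation cannot be inverted exactly. */
#include <stdio.h>

int main(void)
{
    int a[4] = { 10, 20, 30, 40 };
    int b[4] = { 15, 15, 35, 35 };   /* a different signal ...           */
    int da[2], db[2], i;

    for (i = 0; i < 2; i++) {
        da[i] = (a[2*i] + a[2*i+1]) / 2;
        db[i] = (b[2*i] + b[2*i+1]) / 2;
    }
    /* ... but the same downsampled result: information was discarded */
    printf("downsampled a: %d %d\n", da[0], da[1]);
    printf("downsampled b: %d %d\n", db[0], db[1]);
    return 0;
}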
Conservation of information
• Furby (1998)
– speaks Furbish off-the-shelf
– learns English over time
• How does it do this?
– Pre-programmed to speak English
– Program causes more English to be used
over time
– Nothing is learned
– No new information
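A hedged sketch of the Furby point above (the phrase lists and schedule are invented for illustration, not Hasbro's actual firmware): both vocabularies are pre-programmed, and a counter simply shifts the odds toward English, so nothing new is learned.

/* Sketch of the "no new information" point: both phrase lists are in the
 * program from day one; an interaction counter merely shifts which list
 * gets spoken.  (Made-up phrases and schedule, not the real Furby.) */
#include <stdio.h>
#include <stdlib.h>

static const char *furbish[] = { "ba-boo", "la-kee", "no-bah" };
static const char *english[] = { "hello",  "play",   "hungry" };

static const char *speak(int interactions)
{
    int pct = interactions < 90 ? interactions : 90;  /* odds of English */
    int idx = rand() % 3;
    return (rand() % 100 < pct) ? english[idx] : furbish[idx];
}

int main(void)
{
    int n;
    for (n = 0; n <= 100; n += 25)
        printf("after %3d interactions: \"%s\"\n", n, speak(n));
    return 0;
}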
Information generation
• We do not expect computers to generate new information
• Rather, they only process existing information
• This limitation is NOT dependent on the speed / computational
power of the computer
“You have illegal files
on your computer!”
“No officer. You see, I just bought
this new processor, and it’s so
powerful that it decided to create those
files.”
http://digitalclonesrus.com/assets/images/happy_man_at_computer.jpg
http://www.clipartof.com/images/clipart/xsmall2/4205_motorcycle_policeman_filling_out_a_traffic_citationticket_form.jpg
Information generation
• We DO expect people to generate new information:
– The basic requirement for a PhD is a contribution to human
knowledge
– Intellectual property assumes that knowledge is created by the
inventors
– Plagiarism is detected when one person’s work is similar to
another’s
– There is a distinction between original work and derivative work
• Example:
– In 2005, students at MIT (Stribling et al.) wrote a computer program
to generate research papers
– The automatically generated paper was actually accepted for
publication by the World Multi-Conference on Systemics,
Cybernetics and Informatics (WMSCI)
– Why was this such a scandal?
– Why did people complain that the conference organizers had not
reviewed submissions thoroughly rather than conclude that
computers had now reached human intelligence?
Generating information
[Diagram] input 0010001010101… → device → output 010111010001…
If complexity ( output ) > complexity ( input ),
then the complexity
must arise from the device itself
(cf. Noam Chomsky’s black box for studying children’s innate ability to learn language)
But the human brain is not complex enough. Back-of-the-envelope calculation:
• Library of Congress has approx. 20 TB of written information
• Human genome contains 3 billion DNA rungs for a total of 6 billion bits of data
20,000,000,000,000 bytes >> 1,000,000,000 bytes
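A rough unit check of the comparison above (assuming 1 TB = 10^12 bytes and 2 bits per DNA base pair):
20 TB = 2 × 10^13 bytes
6 × 10^9 bits ÷ 8 bits/byte ≈ 7.5 × 10^8 bytes ≈ 10^9 bytes
So the written record exceeds the genome’s raw storage by a factor of more than 20,000.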
Free will
• Contingency mechanism enables human to make free
decisions
• This is necessary for
– moral responsibility
– ethical standards
– laws of justice (e.g., was the act intentional?)
– self-improvement (Covey’s gap between stimulus and response)
• In historic Christian theology, people are defined as “rational
creatures”, which implies
– free will
– immaterial, immortal soul
“So God created man in his own image, in the image of God created
he him; male and female created he them.” Gen. 1:27
• Without free will, we are either
– deterministic, or
– random
Either way leads to irrationality
Consciousness
• Common view is that consciousness arises from
materialistic processes in body
• Why? Not because of evidence, but because of prior
philosophical commitment to naturalism
• Kurzweil proposes to produce exact replica of brain
– Then he will be automatically transported to the copy
– Why should we think that, even if our brain could be copied,
our consciousness should go with it?
– What if the brain is copied multiple times? Will we have
multiple consciousnesses?
– This is a vain attempt at immortality (Salvation by computer
upload)
– Note that all proponents of this idea predict that the
technology will conveniently be in place by the time they are
70 years old – see Pattie Maes, "Why Immortality is a Dead
Idea", 1993
Creativity
• Consider song as point in high-dimensional
space
• Interpolating between songs may be possible
(blending)
• Creating new songs in local neighborhood
may be possible
• Claim: Making meaningful macro-jumps is
not possible
• Even if it were possible, who would be the
judge of quality? Computer or human?
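A minimal sketch of the "blending" idea (the three features and their values are invented; real systems use far richer representations): linear interpolation stays on the segment joining the two songs, which is the sense in which it explores only a local neighborhood rather than making a macro-jump.

/* Sketch: treat each song as a point in feature space (made-up 3-D
 * features: tempo, brightness, energy) and interpolate.  The blend lies
 * between the two inputs -- a local move, not a jump to something new. */
#include <stdio.h>

#define DIM 3

int main(void)
{
    double song_a[DIM] = { 120.0, 0.30, 0.80 };   /* invented feature values */
    double song_b[DIM] = {  80.0, 0.70, 0.20 };
    double blend[DIM];
    double t = 0.5;                               /* 0 = song A, 1 = song B  */
    int i;

    for (i = 0; i < DIM; i++)
        blend[i] = (1.0 - t) * song_a[i] + t * song_b[i];

    printf("blend: tempo %.0f, brightness %.2f, energy %.2f\n",
           blend[0], blend[1], blend[2]);
    return 0;
}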
What’s wrong with the Turing test?
• Turing test:
– one computer, one person, one judge
– All communication via terminal
– Goal: Judge tries to tell which is computer and which is
person
• Turing test can never be used to tell whether there is
a fundamental difference between computer and
human
• Reason: Judge is required to be a human
• In other words,
– Suppose computer = human
– Then human judge can be replaced by computer judge
– But now test does not make any sense
More…
• Chess revisited:
– Sure, computers can play chess, but can
they invent a new game to replace chess?
– Can they invent new rules?
• Artificial life started with promise, then
fizzled out as hopes were not realized
• CAPTCHAs: reverse Turing tests
• Church-Turing thesis
• Complex specified information
One final thought
• Similarity between computers
and animals:
– Animals act by instinct
– Animals do not have free will
• Learn the lesson of Grizzly Man
(Timothy Treadwell):
– He thought bears were his friends
– He thought they were
misunderstood
– He ignored warnings about
getting too close
– They killed him
Is computer vision possible?
• Distinction between
– information processing: the information is changed from one
form to another, or is lost
• algorithms change information
– information generation: the information is created
• natural processes cannot create information
• there is no algorithm to create information
• information generation requires a contingency mechanism
• supernatural or metanatural process -> spirit or soul
• Will a computer ever be able to enjoy an aesthetically
pleasing painting?
if (painting_is_pretty)
{
    printf("I love this painting");
}