Can a toaster feel love?
PH 407H
by Annamaria Tadlock
Could we ever create a robot with human feelings? At first this seems like an absurd question, and many people will respond with “of course not”-- how can something artificial ever be “alive” or “feel”? If a computer is built that responds in a human-like fashion, it is only because it has been programmed to do so by its creator, not because it is experiencing those feelings or thoughts on its own.
Science fiction provides a large number of examples of sentient robotic beings
of various forms. Examples from popular media include Star Trek's Data, a sentient android with a human body who seems to lack human emotion, and the Star Wars robots C-3PO and R2-D2, which despite their outward machine-like appearance have their own personalities and quirks. The Ender's Game series by Orson Scott Card has an interesting character named Jane, a conscious, bodiless being that arose from the immense artificial network set up to allow intergalactic communication.
Perhaps one of the most emotional stories of artificial beings is the story of Andrew,
the main character in the movie Bicentennial Man, who is an android servant. Both
Andrew and his owners begin to realize that he is able to not only understand but also
feel human emotions, and throughout the story Andrew becomes more human—
learning and growing as a person, falling in love, as well as choosing to undergo
procedures that make him physically more human, including giving up his immortality.
Not all fictional robot characters are friendly, however. The computer HAL from 2001:
A Space Odyssey, the Terminator robots, and the all-powerful machines that control
the world in the Matrix are examples of frightening robotic creations. Perhaps these
stories show both our hopes and our fears about artificial intelligence technology.
But will these stories always exist just in the realm of fantasy? Or will artificial brains one day be just as good as, or even better than, 'real' brains? To explore this subject, we must first ask: what does it mean to be human? And then: can a machine ever develop those qualities?
The definition of what it means to be human has changed throughout history
and is heavily influenced by the culture of the time. We take it for granted today
that people are equal, but this has not always been the case. Many Greek
philosophers, for example, considered men to be at a higher, more human status than
women-- some did not consider women fully human. Plato wrote, “It is only males
who are created directly by the gods and are given souls. Those who live rightly
return to the stars, but those who are ‘cowards or [lead unrighteous lives]’ may with reason be supposed to have changed into the nature of women in the second generation”. Aristotle also thought that men were superior to women, and this
opinion stemmed from his ideas that women were passive in reproduction and that
men were the source of all of an offspring's traits.
At one point, slavery and racism were justified because of the belief that
whites were superior humans. In the book “Negroes and Negro Slavery”, written in 1861, Dr. John Van Evrie argues, with a twisted combination of science and religion,
that God created the races as biologically distinct, with whites being superior, and
that it would violate both nature and God's will to give equal rights to all people. He
claims, “The normal condition of the negro necessarily involves the protection as
well as subordination of the inferior race. The two things are in fact inseparable, as
in the case of parents and children, or the relations of husband and wife, or indeed
any condition of things resting on the basis of natural law."
Looking back, we cannot even imagine how such sexist and racist viewpoints
could exist. We do not consider someone less human based on their sex, biology,
culture, or any other factor; we see something in all humans that makes them
deserving of respect and equal rights.
But what is the thing that makes us all 'human'? Why are we special when
compared to other forms of life? The answer may depend on who you ask. Ask a biologist, a preacher, and an anthropologist, and each will give you a different answer, one that focuses on a different aspect of what it means to be human.
The biologist might say humans are unique because of their genetic makeup: our evolutionary history selected for the genes and traits that enabled us to survive in prehistoric times. An anthropologist might instead point to culture, language, and tool use. A preacher would most likely emphasize that it is our soul, and our ability to understand morality, that separates us from animals.
It turns out that there is no one quality that is agreed upon as being uniquely
human. However, there are a number of qualities that, together, make up a large
portion of our humanness. Almost all of these qualities deal with aspects of the
human mind. Emotions, self-awareness, theory of mind, consciousness, and
intelligence are all important. Other qualities, such as empathy, morality, and our abilities to make logical decisions, use tools, be creative, and make art, also emerge from the mind.
It is true that our biology is different from other species on earth. We have
already seen how racism can be justified by using biology to define mankind. Homo
sapiens are the only living species of human today, but it is conceivable that other
human species could have evolved as well. If that were the case, or if we were to
make contact with an alien species of human, we would likely accept them as
“human” if they shared similar intelligence and behavior. Even today, when people
are born with a different number of chromosomes, or are physically different from what we consider “normal”, we do not consider them any less human. Biology alone
seems to be a poor test of humanness.
Interestingly, any 'human' quality, taken by itself, can either be found in other
species or be absent from some humans. For example, archeologists in the 19th century
often described humans as tool-using animals, but we now know that other animals
such as birds and primates use tools. Empathy, emotions, self-awareness, language,
and culture were once thought to be purely human but scientific testing has revealed
that some animals have these qualities. Theory of mind is considered a particularly
important quality of humans that is necessary for being able to feel empathy. This is
the ability to think about what another person is thinking about, and emerges around
the age of two or three years in humans. Being able to experience empathy and to lie
are results of theory of mind. This was thought to be an exclusively human quality
until recent studies showed that many social animals will attempt to deceive others by changing their behavior-- moving a food stash, for example, when they know another animal has seen them bury it in the first place.
You can find humans that vary in these abilities or sometimes lack them.
Different people may have different degrees of artistic ability, creativity, and
intelligence. People with Asperger's syndrome don't understand or express human
emotions well and have problems with social interaction. For that reason, they are
sometimes seen as rude or insensitive-- but never as inhuman. Cultures and languages
differ in different human groups. For a long time, creativity and art were considered
completely human qualities-- even though not all humans are creative or artistically
inclined. However, there are now computer programs that can compose original symphonies.
Humanity obviously lies in the mind, not in the body. But what is the mind
itself? Is it a purely physical property that emerges from the billions of neural
connections in our brain? Or is there a component of the mind that is beyond the
physical realm-- a spiritual essence beyond our material bodies? There are two main
philosophical views on the subject.
Monism is the belief that a human is a purely physical being-- our emotions,
thoughts, religious experiences, morality, even our culture, are ultimately the result
of physical interactions in the brain, and someday will be explained by science. There
are two main subcategories of monism-- reductive and non-reductive. Reductive monism states that there is no immaterial or spiritual element to a person. Non-reductive monism still maintains that humans are physical, but says that an immaterial or spiritual element can emerge as a result of a person's interactions in society, culture, and their relationship with God. While reductive monism is
incompatible with religion, non-reductive monism can be compatible with some forms
of Christianity.
Dualism is the more popular belief, accepted by most religions: that a person is composed of both a physical and an immaterial element. The
immaterial part is sometimes called the soul, spirit, or mind. This is the consciousness
or “self” of a person that survives death. Some dualists believe the soul must be
integrated with the body to function, and after death the soul sleeps or is unconscious
until it is later placed into a body. Others believe the soul can exist on its own-- it is
our “self” that survives and remains alive beyond death. Some say that animals have
souls, and some believe souls are a strictly human attribute.
There are many different beliefs when it comes to souls or spirits in Christian
theology. The most fundamentalist belief is “soul creationism”, the idea that God
creates a soul for each person at conception, at birth, or at some point in between. There is also the belief of traducianism: that God created a soul
for the first humans (Adam and Eve), which can then propagate new souls for each
new offspring. The third view is emergence, that the soul is something that develops
as a body develops into a person; rather than being a mystical thing, it is an
observable property of complex systems; in the case of humans, a property of the
interaction of billions of neurons.
There are three types of artificial intelligence research. Applied AI aims to
produce “smart” systems to be used commercially, for example, systems that can
provide medical diagnoses or trade stocks. The goal is to create machines that are better than humans at specific tasks. The next type is cognitive AI: systems that test theories about how the human mind works. Simulations are designed to help
understand how a human recognizes faces or solves problems. The final type of AI
research is strong AI, which has the goal of ultimately creating a machine that is
indistinguishable from a human being.
Monism presents no problem for strong AI. It may be years, decades, or even
centuries before we understand the human mind enough to create one, but there is
no reason why it cannot be done. Dualism, however, can be a problem for strong AI
because if there is a part of the human mind or a soul that cannot ever be created,
then we can never create an artificial human. We can create a machine that mimics a
human in intelligence, emotions, and behavior, but like Pinocchio, it will not be
“real”.
Some people see the idea of strong AI as a challenge to their religious beliefs. Thomas Georges, author of “Digital Soul: Intelligent Machines and Human Values”,
believes the main concern from religious groups is not that we cannot do it-- but that
we should not. He says this is an important distinction: “The questions we want to ask
challenge religious traditions, as well as New Age thinking. Both assert that the
human mind—soul, spirit, what have you—lies beyond the limits of scientific
examination. If you ask why it can’t be studied scientifically, they say it just can’t! There are some things you just can’t analyze!... Whenever someone doesn’t want you
poking into something, it’s generally because the answers would threaten cherished
beliefs or entrenched power structures. Which is fine. That’s an honest reason. But
there’s a big difference between saying something can’t be studied and something
shouldn’t be studied. We will proceed with the belief that ignorance doesn’t solve
any problem, that mysticism explains nothing, and that there is nothing that can’t be
studied. We won’t find all the answers, but this will not stop us from raising the
questions.”
The main idea behind strong AI is that if you can break a task down into details
that can be accurately described, then you can teach a machine to do it. If abstract
ideas like beauty, love, and joy can be reduced into logical steps, then they can be
expressed in a machine. The idea of taking complex feelings and emotions and
expressing them in binary terms may seem absurd at first. Georges explains that
everything you see on your computer-- photos, music, movies, websites, books-- doesn't really exist except as binary code, which is then rendered on your screen or played through your speakers. He claims there is nothing that cannot be encoded by bits, including
human qualities. “Every sensation, everything we see, hear, say, write, even taste,
smell, and touch, can all be reduced to a collection of ones and zeros. So can all the
works of Shakespeare, the music of Mozart, and every movie ever made. With the
completion of the Human Genome Project, we can now represent the sequence of
nucleotides in our DNA-- the instructions for making a human being-- as a symphony
of ones and zeros!”
He goes on to explain that this seems like reductionism, but it is not. Breaking
everything down into parts is simply the first step to understanding. The next step is
understanding how all of those parts function together to create something as
complex as a human being.
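To make the ones-and-zeros point concrete, here is a minimal Python sketch (purely illustrative; the sample sentence and the choice of UTF-8 encoding are mine, not Georges') showing ordinary text reduced to bits and recovered again with nothing lost:

    # Reduce a sentence to bits and back -- Georges' point in a dozen lines.
    def text_to_bits(text: str) -> str:
        """Encode a string as the binary form of its UTF-8 bytes."""
        return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

    def bits_to_text(bits: str) -> str:
        """Recover the original string from its bit representation."""
        return bytes(int(b, 2) for b in bits.split()).decode("utf-8")

    sentence = "Shall I compare thee to a summer's day?"
    encoded = text_to_bits(sentence)
    print(encoded[:26] + " ...")              # the opening bits of Shakespeare
    assert bits_to_text(encoded) == sentence  # nothing was lost in translation

The same round trip works for images, music, and DNA sequences; only the encoding conventions differ.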
Some people do find it unnerving to think of computer systems being compared
to the human mind. But Marvin Minsky, a cognitive scientist and the author of The Society of Mind, takes the comparison a step further-- he says a mind and a machine are not just similar; the human mind is a type of machine! “Are minds machines? Of
that, I’ve raised no doubt at all but have only asked, what kind of machines? And
though most people consider it degrading to be regarded as machines, I hope this
book will make them entertain, instead, the thought of how wonderful it is to be
machines with such marvelous powers.”
Even those without strong religious beliefs may find this to be a disturbing idea
that threatens our notions of human specialness. Science has revealed that humans are not at the center of the universe and were not created separately from animals; and now, it is saying we are nothing more than machines!
It is true that both computers and brains, at a fundamental level, work in a binary manner. Computers “think” in 1's and 0's. Human neurons signal using action potentials across their membranes, carried by positive and negative charges. There is an
episode of the television show Futurama in which a robot is arguing for his religious
beliefs, and says “I choose to believe what I was programmed to believe!” It's meant
to be funny but it also makes you think. If a computer is a machine that only “thinks”
the way it is programmed to, what about us? If we are machines, does it imply that
we have no free will-- and only believe what we're biologically programmed to?
It turns out that a computer is not limited by its program, or its programmer. There is a tendency to think that if a computer is 'smart', it's only because the programmer has taught it to be so, and it's limited by what its programmer knows. This
simply isn't true. Every day, machines can perform tasks or answer questions that
their programmers are unable to. Even a machine as simple as a calculator can give
you an answer that its programmer could not. Today, AI systems are being designed to
learn, just as humans learn.
Researchers at Carnegie Mellon University have created a computer system that
is designed to learn language in a way similar to humans. The Never-Ending Language
Learning System, or NELL for short, was primed with some basic knowledge of words
by its creators in the same way a parent might teach a child their first words. Then, in
January 2010, it was set loose on the internet to read and learn what it could on its
own. It is attempting to read the Internet and classify words as things-- for example,
it knows J.K. Rowling is an author, and wrote the Harry Potter books. This is just one
example, but it shows that a computer system can know how to learn, and it is not
merely limited by what it is programmed to know. Humans, similarly, are not just
“programmed” to have certain opinions or behave in a certain way-- they are able to
modify their “programs” by learning.
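To give a feel for how such a system might work, here is a toy sketch in the spirit of NELL (this is not NELL's actual code; the seed patterns and example sentences are invented for illustration). The program starts from a few human-supplied textual patterns and then grows its own knowledge base from whatever text it reads:

    import re

    # Seed patterns supplied by the "parent": each maps a textual template to a
    # category for the first entity and a relation linking it to the second.
    SEED_PATTERNS = {
        r"([A-Z][\w.]*(?: [A-Z][\w.]*)*) wrote (?:the )?([A-Z][\w']*(?: [A-Z][\w']*)*)":
            ("author", "wrote"),
        r"([A-Z][\w.]*) is a city in ([A-Z]\w*)": ("city", "locatedIn"),
    }

    knowledge_base = []  # facts accumulate as (entity, category, relation, value)

    def read_text(text: str) -> None:
        """Scan raw text against the seed patterns and record candidate facts."""
        for pattern, (category, relation) in SEED_PATTERNS.items():
            for entity, value in re.findall(pattern, text):
                knowledge_base.append((entity, category, relation, value))

    read_text("J.K. Rowling wrote the Harry Potter books. Portland is a city in Oregon.")
    print(knowledge_base)
    # [('J.K. Rowling', 'author', 'wrote', 'Harry Potter'),
    #  ('Portland', 'city', 'locatedIn', 'Oregon')]

Nothing in the resulting knowledge base was typed in by the programmer; the system read it. The real NELL adds statistical confidence scores and learns new extraction patterns on its own, but the principle is the same.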
But how good can a computer get at imitating language? Can it ever “think”
well enough to hold a conversation? Alan Turing, a mathematician and computer scientist who worked on creating the first computers, proposed a thought experiment in 1950 that would answer that question. If a human in one room were told to talk, via typed messages, to either a human or a computer in another room, could they tell the difference? If not, then the computer is said to have passed the
Turing test. In 1991, the first actual Turing Tests were held, at an annual event called
the Loebner Prize. While the Turing Test itself has never been won, prizes are
awarded for the “most human” computer programs every year. The current winner is a program called ALICE, and a past winner is an online “chatbot” called Jabberwacky. Both are available online for the public to chat with.
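Programs like these work largely by pattern matching. Here is a minimal sketch of the idea (the rules are invented stand-ins, not ALICE's real AIML rule base): match the user's words against templates and reflect fragments of their own input back at them:

    import re
    import random

    # A few hand-written rules: a pattern to match, and response templates that
    # may reuse whatever the pattern captured from the user's words.
    RULES = [
        (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"\bare you (.+)", ["Would it matter to you if I were {0}?"]),
        (r"\bbecause (.+)", ["Is that the real reason?"]),
        (r".*", ["Tell me more.", "I see. Go on."]),  # fallback keeps the chat going
    ]

    def respond(user_input: str) -> str:
        """Return a canned-but-adaptive reply to one line of conversation."""
        text = user_input.lower().rstrip(".!?")
        for pattern, templates in RULES:
            match = re.search(pattern, text)
            if match:
                return random.choice(templates).format(*match.groups())

    print(respond("I feel lonely sometimes"))  # e.g. "Why do you feel lonely sometimes?"
    print(respond("Are you conscious?"))       # "Would it matter to you if I were conscious?"

A few dozen rules like these can sustain a surprisingly human-sounding exchange.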
While the Turing test may be interesting, it still only tests one aspect of
humanity. Language and conversation are important, but conversational skill doesn't mean that ALICE or Jabberwacky is self-aware or deserving of human rights. Many people will say a
computer is great at doing one thing-- beating humans at chess, diagnosing medical
problems, or regulating the temperature in your home-- but it can never do some
things.
A common objection to strong AI is, “Yes, but a computer can never
___________.” That blank could be any number of things-- feel love, enjoy poetry,
worship God, reproduce itself, be creative, be self-aware, etc. Thomas Georges says
that this argument is weak because the reasoning is essentially, “I haven't seen a
computer that can do this, therefore, no computer ever could.” That view may even
give us some comfort-- to know that no matter how smart a computer is, it can never
be 'human'. Garry Kasparov, after losing to the Deep Blue computer, said he was
“rather gleeful that despite its win, it did not enjoy winning or gain any satisfaction
from it.” A computer can never feel joy. Or can it?
The number of things a computer cannot do has been rapidly shrinking. In 1965, Gordon Moore predicted that computer processing power would double roughly every eighteen months. This is known as Moore's Law, and it has held true throughout the history of modern computing. This exponential increase in computing power means that by 2020 or 2030, machines should have a storage capacity larger than that of the human brain.
However, even the most powerful computer cannot “think” without software. It must
have software to be able to learn rules and create thoughts.
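The arithmetic behind such projections is easy to check. A short sketch (using the 18-month doubling time quoted above; actual doubling times vary by metric):

    # Doubling every 18 months means capacity multiplies by 2^(years / 1.5).
    def growth_factor(years: float, doubling_time_years: float = 1.5) -> float:
        """How many times over capacity grows in the given span."""
        return 2 ** (years / doubling_time_years)

    for span in (10, 20, 30):
        print(f"{span} years -> x{growth_factor(span):,.0f}")
    # 10 years -> x102
    # 20 years -> x10,321
    # 30 years -> x1,048,576

A million-fold increase in thirty years is why predictions that sound fanciful today keep being overtaken.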
Even complex ideas such as morality can be learned as a set of rules. Humans learn what is right and wrong by observing what others in their culture do-- the few “feral” children who are raised by animals or grow up in severe isolation have a hard time learning or understanding morality at all. Many religious people believe
“good” and “evil” really exist, and that morality exists as separate from a culture,
some saying they are set rules derived from God. Others believe that morality is
simply what's considered good and bad by the majority. In Iran, a person can be
stoned to death for adultery, while in Singapore, it is an offense to possess chewing
gum.
The most accepted modern view of morality is that some basic moral codes, such as caring for young, are genetically hardwired, whereas others are learned from parents and culture, so that morality depends on what is acceptable at a certain place and time.
Regardless of where morality comes from or if it exists outside of culture, it
can be learned as a set of rules. Some go one step further and say, not only can an
artificial system learn morality, but it will be able to surpass human morality because
a machine will be free of our ugly biological history. We use violence to solve
conflicts, we deplete natural resources, we have little concern for other species and
the environment, we believe things without evidence, we act out of fear, jealousy,
and revenge. All of these negative aspects are wired into us from our evolutionary
history; we are programmed to compete and survive, regardless of the consequences.
Even altruism itself, according to the scientific explanation, is a way to help us pass
on our genes. A moral computer could be built free of our innate flaws.
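As a deliberately crude sketch of the “morality as a set of rules” idea (the rules below are invented placeholders, not a claim about any real system), learned norms can be stored and applied mechanically, and swapped out as cultures change:

    # Norms hardwired "at the factory" versus norms learned from a culture.
    INNATE_RULES = {"harm the helpless": "forbidden", "care for young": "required"}
    CULTURAL_RULES = {
        "iran": {"adultery": "forbidden"},
        "singapore": {"chewing gum": "forbidden"},
    }

    def judge(action: str, culture: str) -> str:
        """Apply the innate rules first, then whatever the local culture teaches."""
        if action in INNATE_RULES:
            return INNATE_RULES[action]
        return CULTURAL_RULES.get(culture, {}).get(action, "permitted")

    print(judge("chewing gum", "singapore"))  # forbidden
    print(judge("chewing gum", "iran"))       # permitted -- same act, different norms
    print(judge("care for young", "iran"))    # required -- hardwired everywhere

A machine built this way would follow its rules consistently, free of the fear, jealousy, and revenge described above.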
Computers have already come a long way since their invention. Even AI
researchers at one point thought it would be impossible for a computer to ever beat
a human at chess. The same was said for creativity, but there are programs that are
now able to compose original symphonies. If computers are unable to do some things, this is not their fault, it's ours-- we just haven't taught them how to yet.
It may only be a matter of time before computers can outperform humans in every
aspect. So where does that leave us?
The consequences of strong AI are far-reaching. We already feel the effects of
technology in our lives; compared to 50 years ago, people today spend more time with
technology and less time interacting with other humans. We have more “Facebook friends” but fewer real friends. Technology has brought us increased productivity, but
at a price. When machines become better than humans at diagnosing disease or
making judgments, will we have a need for doctors and judges-- or human workers at
all? When every aspect of humanity can be explained and replicated, will religion
become obsolete? If our minds work just like machines, would immortality be
achievable by downloading our consciousness into a machine?
The claim that we can create an artificial human has religious significance, but as with any scientific discovery, religion can view this as a threat to established ideas, or
as a tool to further understand ourselves. The reaction to strong AI is as varied as the
reaction to any other controversial scientific advancement has been. Some welcome
it, many are wary of it, and a few will always vehemently oppose it. Fundamentalists
reject the belief completely and say it cannot be done. People who find it unnerving to think that humans are descended from animals are just as disturbed by the thought of us being a type of machine. Fundamentalist dualists should, ironically, be supporting AI research: if, as they believe, God created in humans alone some innate quality that can never be reproduced, then strong AI research should prove this by failing to produce a human consciousness.
Some will claim these ideas prove religion, or God, to be fictional. Certainly, if we could recreate a human, that would show there is nothing immaterial or divine about a human being. Georges says of religion, “Our reliance on myth is slowly being displaced by verifiable facts.”
But the ideas of strong AI do not necessarily conflict with religion. It depends
on how religious scriptures are interpreted. Russell Bjork, a professor of computer science at Gordon College, a Christian school, claims strong AI does not conflict
with Christianity. He believes in the idea of emergence, which claims the human
“soul” is an emergent property of the physical body and a result of the interactions of
our neurons. He says the fear of humans being considered “just machines” is an
irrational one, because human worth is not conditional upon our uniqueness. If an
animal, a computer, or an alien can do the same things we can-- so what? Human
worth is about our relationships and our purpose, not our biology, he claims. And if another species of human were to arise, even if it were artificial, it would not make us any less “special”. If we are not the “only” humans, that does not mean we are less special, any more than a parent having a second child makes the first less special. If we are accused of “playing God”-- it is only because we were created in his image.
And if we succeed in creating humans-- what next? Are there any limits on
intelligence? If we develop machines that surpass our own abilities, could they then in
turn create something even more intelligent than themselves? Georges says the result of the creation of “superintelligence” could be a runaway intelligence explosion-- similar to the biological Cambrian explosion, or even the universe's Big Bang. This idea is known as the “technological singularity”. At that point, when intelligence booms toward infinity, Georges says, “it would make as much sense to speak of people using computers as tools as it would to say that cows use humans as tools.” Such an intelligent being would appear to us as a God-- an inconceivable, omnipotent, all-knowing consciousness.
Even as AI helps us to understand ourselves, it creates far more questions than
answers and forces us to look at ourselves in a different way. We have made
tremendous progress as a species in what is merely the blink of an eye in the
geological timescale, but we have a long way to go. The questions that make us the
most uncomfortable are the ones that are most likely to hold the key to our future
growth. Social, technological, and moral progress cannot occur if we continue to think
and act in the same way. Artificial intelligence research may challenge our current
ways of thinking, but we will emerge with a better understanding of ourselves and
what it really means to be human.
Resources:
Bjerg, Greg. “Feral Children.” Damn Interesting. 15 May, 2006.
<http://www.damninteresting.com/feral-children>
Bjork, Russell. “Artificial Intelligence and the Soul.” Perspectives on Science and Christian Faith. FindArticles.com. 06 Dec, 2010. <http://findarticles.com/p/articles/mi_7049/is_2_60/ai_n28529975/>
Copeland, Jack. “What is Artificial Intelligence?” AlanTuring.net. May 2000. <http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI02.html>
Dally, Joanna, Nathan Emery, and Nicola Clayton. “Avian Theory of Mind and Counter Espionage by Food-Caching Western Scrub-Jays (Aphelocoma californica).” European Journal of Developmental Psychology. Vol. 7, Issue 1. January 2010: pp. 17-37.
Georges, Thomas. Digital Soul: Intelligent Machines and Human Values. Boulder: Westview Press, 2003.
Lueck, Phil, PhD. “What Does It Mean to Be Human?” C.S. Lewis College. <http://www.cslewis.org/programs/oxbridge/2008/workshops/Lueck_What_It_Means_to_Be_Human.pdf>
McKerrow, Phillip. “A Christian Perspective on Intelligent Robots”. University of
Wollongong. July 2006.
<http://www.uow.edu.au/~phillip/rolab/ChristianPerspectiveSmal.pdf >
Minsky, Marvin. The Society of Mind. New York: Simon and Schuster, 1985.
Mitchell, Tom, et al. “NELL: Never-Ending Language Learning.” Carnegie Mellon University. January 2010. <http://rtw.ml.cmu.edu/rtw/>
Van Evrie, John H., MD. Negroes and Negro Slavery. New York: Van Evrie, Horton & Co., 1861.
Wijngaards, John. “Greek Philosophy on the Inferiority of Women”. Womenpriests.org.
<http://www.womenpriests.org/traditio/infe_gre.asp >