Animal and Machine Consciousness

University Studies 15A: Consciousness I
Animal and Machine Consciousness
First things first: back to our basic question.
What is consciousness? What are we talking about?
Remember John Locke?
“Consciousness is the perception of what passes in a man's own mind.”
This seems like a good, simple, intuitive definition. However, what did he mean
by it?
Sight is the perception of what comes before our eyes.
Hearing is the perception of what passes before our ears.
Consciousness is the perception of what passes in one’s mind.
So, by this definition, how does visual consciousness differ from vision?
However, people grew unhappy with this definition.
“Consciousness is the perception of what passes in a man's own mind.”
Who is doing the perceiving?
This image of watching what comes before the mind leads us to the
homunculus:
The effort to think of consciousness as something other than an
inner theater has led to the popularity of Ned Block’s proposal
for two distinct types of consciousness.
1. Access Consciousness: the sort of awareness of thoughts and
sensations that we can report.
2. Phenomenal Consciousness: the qualitative experience.
As an aside, if you google “access consciousness,” you may learn
that:
“Access Consciousness™ is about creating oneness, which
includes everything without judgment.”
Apparently popular culture has appropriated the term in a rather
different manner from what Block intended. However, back to
Block.
Access Consciousness refers to mental states that are
“(1) … poised to be used as a premise in reasoning,
(2) poised for control of action, and
(3) poised for rational control of speech.”
That is, it’s a rainy Sunday afternoon. You’ve been studying in your room.
You’ve been aware, in the back of your mind, that it is raining.
Your roommate announces, “I’m bored; let’s go out and do something.”
Now you look out the window, and the torrential rain enters your access
consciousness:
You think, “If I go out, I’ll get drenched.” (“rain” -> reasoning)
You grab your (water-resistant) coat. (“rain” -> action)
You say, “What’s a little rain?” (“rain” -> speech)
Access consciousness is that limited-access focal attention that Baars modeled
with his Global Workspace Theory.
Not everything that you are phenomenally aware of enters access consciousness,
only that which has the greatest saliency.
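As a very rough illustration of that idea (my own sketch, not Baars's actual model; the contents and salience values below are invented), one can picture a salience competition whose winner is broadcast to the reasoning, action, and speech systems:

```python
# Toy Global Workspace sketch (illustrative only: the contents and salience
# values are invented). The most salient content wins access to the
# "workspace" and is broadcast to the systems named in Block's criteria.
contents = {"rain": 0.9, "hum of the fridge": 0.2, "pressure of the chair": 0.3}

# Winner-take-all: only the most salient content enters access consciousness.
winner = max(contents, key=contents.get)

for system in ("reasoning", "action", "speech"):
    print(f"broadcast '{winner}' -> {system}")  # "rain" reaches all three systems
```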
What then are the features of Access Consciousness?
1. Awareness
2. Attentiveness
3. Specific focus
4. Connection to possibility of analysis, action, and speech:
   semantic systems
   decision systems
   language
5. Ownership:
   Embeddedness
   Specificity of this time, this place, this mind
Phenomenal Consciousness:
When we are running in the rain, we are aware that we are running, but the
phenomenal consciousness is all that is involved in what it feels like to be
running in the rain.
We encountered “phenomena” earlier in the term as “that which appears
before us” as possible objects of experience.
Phenomenal consciousness is how we “take in” experience.
Philosophers usually think in terms of the qualia of experience: the private,
subjectively experienced qualities of a cloudless blue sky on a summer
afternoon or drinking coffee on a cold, drizzly morning: what was that coffee
like?
As we discussed at the beginning of the term, qualia are very hard to convey
because they go to the heart of the private, subjective character of
consciousness.
So, instead, we largely have focused on the general idea of phenomenal
consciousness being “what it is like to be…”
The phrase became famous when Thomas Nagel posed the question of whether
we could understand “what it is like to be a bat.”
I stressed that this is not the metaphysical question of what it would be like to
have the soul of a bat. It is about what sort of consciousness a creature that
inhabited a bat-body with a bat-brain would have.
What then are the features of Phenomenal Consciousness?
1. Global Awareness
2. “Dynamic, multisensory feature binding:”
   perceptual systems
   semantic systems
   decision systems
3. Ownership:
   Embeddedness
   Specificity of this time, this place, this mind, this body
If Phenomenal Consciousness is one’s ongoing subjective engagement with
the world, what do we mean by “subjective”?
We use the term all the time. What does it mean?
“Of, or pertaining to the subject of experience rather than the object”
Are subjective judgments:
1. Self-generated and an end in themselves?
2. Causally opaque?
3. Open to discussion, contextualization, comparison, and reflection?
If we wish to claim a privilege for subjective judgments, what is that
privilege and what is its basis?
Animal Consciousness
If we think of the question of “bat consciousness” as the subjectively engaged
world (which includes the self) as experienced by a bat with its particular body,
senses, and brain, and, perhaps, mind, then we are ready to turn to today’s
topic, animal and machine consciousness.
Once again I follow the issues as Blackmore sets them out. Once again,
remember that Blackmore tries to provoke questions more than answer them.
She begins by asking whether human consciousness reflects an evolutionary
continuity or a big break.
That is, can we expect that animals with brains similar to ours will experience
consciousness in a way that is similar to how we do? Or was there a major
evolutionary break such that our consciousness, like our language ability, is
unique?
The question of language is actually quite relevant to that of consciousness.
A central question is why the primates do not show a continuity in language
development, with some species using transitional forms of communication,
but instead a large gap between us and the great apes.
One argument has been co-evolution: that the sorts of changes in the body
(control over the mouth and larynx) and the changes in neocortical
organization needed for language ability evolved together and pushed one
another, leaving the other hominids behind.
The changes in the neocortex have to do, in part, with the arbitrariness of the
relation between sound and meaning, and in part, with the way words take
their meaning from their place in a large system of words/referents.
The thought is that this way of making connections required a great
expansion of the high-order multimodal association areas of the posterior
parietal cortex, precisely the areas that are perhaps the key regions for
making phenomenal consciousness possible.
In any case, the question remains, “How do we assess animal consciousness
to see if it resembles ours?”
What can behavior tell us? If my cat looks guilty (slinking and shifty-eyed), is
he feeling guilty? If your dog is looking doubtful and hesitant, is it?
There is no good way to answer these questions or to respond to the criticism
that we are just applying human attributes of mind without justification.
So, once again, how can one test for animal consciousness?
One thought was, “Do animals have a sense of self, since a self is necessary
for consciousness?”
Hence the “mirror self-recognition” (MSR) test: can an animal realize that
what it sees in the mirror is its own reflection, since this would require
drawing on some sort of inner self-representation?
Scientists have been cheerfully inventing ways to subject a wide range of
animals to the test:
Chimpanzees were the first: an ethologist gave young chimpanzees mirrors to
play with. Soon they were examining parts of their bodies they otherwise
could not see and using the mirror to pick their teeth.
The real test, though, was what would happen if the chimpanzees saw red
paint on the face in the mirror: would they realize it was their own face and
try to examine it by touch? It turned out, yes, they would.
Variations of this experiment have
been successfully repeated with
dolphins, elephants, and magpies.
However, what does the test tell us?
These species can create forms of internal self-representation.
If they are conscious, can they represent the fact of consciousness itself and
use it in their engagement with the world?
More particularly, can they conceive of “other minds,” of other creatures who
also have inner states shaped by individuated experience?
Once again, chimpanzees proved an excellent test group. Could a chimpanzee
conclude that a person wearing a blindfold could not see it (and that therefore
there was no point trying to get food from the person)? Apparently not.
Blackmore also introduces learning through “true imitation,” which requires a
theory of mind to see behind the imitated gestures. However, the results of
experiments seeking to elicit imitation have been fairly murky.
The more important question is, “Does one need a theory of mind to be
conscious?”
Can one be conscious without being aware that one is conscious (Descartes
and Locke reject this) or without a concept of consciousness that one can
apply to others?
So, the question of animal consciousness is apparently wide open.
What is at stake?
Would you be more reluctant to kill and eat animals that were
conscious?
What level of consciousness (if there are levels of consciousness)
would be the threshold to permit killing?
The pragmatic Mencius concludes, “The noble man in relation to animals is such that,
having seen them alive, he cannot bear to see them die; having heard their cries, he
cannot bear to eat their flesh. Therefore he keeps away from the slaughterhouse and
kitchen.”
Machine Consciousness
What can we mean by “machine consciousness?”
http://www.youtube.com/watch?feature=player_detailpage&v=N-o-4txOiVE#t=13s
Blackmore spends a good deal of time giving the background history of
machine intelligence.
But I believe I do not need to spend much time making the point that
although IBM’s “Deep Blue” could beat Kasparov at chess, it is merely
computationally powerful; the intelligence is in its design.
The more central question for machine consciousness is when machines
understand “meanings” rather than merely process symbols.
Meaning: Machine and Human Understanding
We are on difficult terrain here: much effort has been expended trying to
define what we mean by “understanding meaning.”
For example, what is the meaning of “water?”
Or, to rephrase, how do you understand what water means?
Qualia matter in our account of understanding meaning.
The hard question is, “Must qualia count for any decent theory of what is
required to properly understand meanings?”
The argument has been that machines process symbols but do not understand
meanings.
The usual examples of this distinction are:
1. The Turing Test: can a machine make you believe it is human after a
   five-minute interview?
2. The Chinese Room: a person in a room uses a big book to look up proper
   responses to inputs in Chinese and passes these responses to a recipient.
   The person does not know a word of Chinese, but to the recipient, the
   person appears to know Chinese. (See the sketch below.)
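To make the Chinese Room concrete, here is a minimal sketch in Python (the “rule book” entries are invented placeholders, not a real rule book): the program produces appropriate-looking replies by pure symbol lookup, with nothing in it that represents what the symbols mean.

```python
# Minimal Chinese Room sketch (illustrative; the rule-book entries are
# invented). The program matches incoming symbols against a lookup table
# and passes back the listed response; it has no representation of meaning.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "今天下雨吗？": "是的，在下雨。",  # "Is it raining today?" -> "Yes, it is raining."
}

def chinese_room(message: str) -> str:
    """Return the rule book's listed response for the incoming symbols."""
    # Pure lookup: no understanding of the question or of the answer.
    return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # looks fluent to the recipient outside the room
```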
This problem of understanding has led to a “new robotics” that stresses self-assembling embodied semantics and embodied cognition.
These are robots that use artificial neural networks to build their perceptual,
cognitive, and response systems based on a few homeostatic (self-stabilizing)
rules.
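As a rough picture of what a homeostatic (self-stabilizing) rule amounts to, here is a minimal sketch with invented variables and thresholds; real embodied-cognition robots are far more elaborate, but the self-stabilizing logic is of this general kind.

```python
# Toy homeostatic rule (illustrative; the variable, set point, and actions
# are invented). Behavior is chosen to keep an internal "energy" level
# near a set point rather than to satisfy an externally programmed goal.
SET_POINT = 0.7

def choose_action(energy: float) -> str:
    """Pick the action that reduces the deviation from the set point."""
    error = energy - SET_POINT
    if error < -0.1:
        return "seek_charger"  # energy too low: act to restore it
    if error > 0.1:
        return "explore"       # surplus energy: spend it on exploration
    return "idle"              # near the set point: no corrective action

print(choose_action(0.4))  # a depleted robot chooses to recharge -> "seek_charger"
```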
If they succeed, will these robots have phenomenal consciousness? Access
consciousness?
SkyNet, here we come.