Artificial Intelligence

Artificial Intelligence and Philosophy
Joel Luis Carbonera
Gélio José da Silva Júnior
Introduction
The interest in developing intelligent artificial entities has been part of the imagination of
mankind for a long time.
At the same time, attempts to build intelligent machines are also quite old and occurred at various
points in history, illustrating the intellectual interest in uncovering the underlying principles
of intelligence. This imagery traveled a long way, culminating in the consolidation of a scientific
discipline that investigates the issue from a contemporary scientific point of view.
Turing's 1950 paper, "Computing Machinery and Intelligence", discusses the possibility of
thinking machines, explicitly putting the matter before the modern scientific community. This work
crystallizes the idea that a machine can behave intelligently, provided that it is adequately
programmed to do so.
The name "Artificial Intelligence", and the formal establishment of a research area with this name,
date from a 1956 conference at Dartmouth College (Buchanan, 2005). At this conference,
research proposals were discussed that would orient the early work of this scientific discipline.
Weak Artificial Intelligence and Strong Artificial Intelligence
The philosopher John Searle distinguishes two attitudes toward AI:
• Strong AI hypothesis: states that artificial intelligence systems can actually think and have a
genuine mind. In this case, these systems would not merely simulate intelligence, but would really
be intelligent entities.
• Weak AI hypothesis: states that artificial intelligence systems can act intelligently, or act as
if they were intelligent (or as if they had minds). In this case, these systems, although they act
intelligently, would not be genuinely intelligent entities, but at most simulations of intelligent
behavior. In this view, computers can be seen as useful tools for studying the mind, allowing the
empirical testing of models of the mind.
Currently, the two views coexist in the field of Artificial Intelligence. A considerable number of
researchers adopt the weak AI hypothesis. On the other hand, there are studies that assume the
strong AI hypothesis and dedicate themselves to discussing the feasibility of artificial general
intelligence, pursuing the goal of artificial entities that genuinely think and proposing theories
of how this could concretely be done.
1. Alan Turing's nine objections
1.1:
In the article "Computing Machinery and Intelligence", Turing discusses nine possible objections to
the possibility of building an artificial intelligence.
The theological objection
This type of objection assumes that human beings are endowed with an immortal soul, that it is this
immortal soul that makes thought possible, and that God endowed men and women with this gift but
did not do the same with animals and machines. Thus, neither animals nor machines could think.
Turing feels unable to accept any part of this argument, noting that according to certain dogmatic
views not even women would have souls, which strikes him as absurd.
Turing also outlines an ironic response, stating that the argument seems to question the omnipotence
of God. If it is God who grants the ability to think by providing a being with an immortal soul,
nothing would prevent God from doing the same with machines; by building them, we would merely be
providing a dwelling that this hypothetical God could fill with a hypothetical immortal soul.
Turing points out that, historically, theological arguments have often proved unsatisfactory in the
face of the advancement of knowledge, and cites the theological objections raised against the
heliocentric theory.
1.2:
The "heads in the sand" objection
Objections of this kind have the following form: "The consequences of machines thinking would be
too dreadful, so let us hope and believe that this cannot be done."
Turing sees this argument as popular among intellectuals who, proud of their "superior
intelligence", see their position threatened by the possibility of a machine that thinks and could
be more intelligent than they are.
It should be noted that this argument is fallacious, because the fact that we do not want something
to occur does not mean that it cannot occur.
The mathematical objection
Various results in mathematical logic can be used to show that there are limitations to the powers
of discrete state machines (such as modern digital computers). The most well-known results of this
type are Gödel's incompleteness theorems, which prove that in any sufficiently powerful consistent
logical system it is possible to construct statements that can be neither proved nor disproved
within the system itself.
Turing points out that, although such limitations of machines are known, it has not been
established that the human intellect is free of similar limitations. Humans make mistakes, and this
does not shake our confidence that they are intelligent. Even if the human intellect were superior
to the machine in this respect, it is unclear whether this would mean that the machine is not
intelligent.
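A closely related limitation result, proved by Turing himself, is the halting problem. The sketch below is a minimal illustration only (the function names are hypothetical and do not come from Turing's paper): any candidate procedure that claims to decide whether programs halt can be defeated by a program built from that very procedure.

```python
# A minimal sketch of the halting-problem diagonalization (hypothetical
# illustration). Suppose some function halts(program, arg) could always
# decide whether program(arg) eventually stops.

def make_paradox(halts):
    """Build a program that behaves contrary to what the decider predicts."""
    def paradox(program):
        if halts(program, program):  # decider predicts halting...
            while True:              # ...so loop forever instead
                pass
        return "halted"              # decider predicts looping, so halt
    return paradox

# Any candidate decider is refuted on the program built from it:
naive_decider = lambda program, arg: True   # toy decider: always says "halts"
paradox = make_paradox(naive_decider)
# paradox(paradox) would loop forever, contradicting naive_decider's verdict;
# the same construction defeats every possible decider, so none can exist.
```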
1.3:
The argument from consciousness
In a passage of an oration, Jefferson said that we could only agree that machine equals brain when
a machine could write a sonnet or compose a concerto because of thoughts and emotions actually
felt, and not by the chance fall of symbols. That is, for a machine to be considered intelligent,
it would not be enough for it to write a sonnet; it should also know that it had written it.
Jefferson goes on to say that no mechanism could feel pleasure at its successes, grief when its
valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, or be
angry or depressed when it cannot get what it wants.
Turing sees this statement as assuming that consciousness is necessary for intelligence.
Turing notes that, according to this argument, the only way to be sure that a machine really
thinks would be to be the machine itself and feel oneself thinking, just as the only way to be sure
that another human being thinks would be to be that particular human being.
Turing's general position on this point is that, although it is undeniable that the phenomenon of
consciousness holds mysteries, it is not necessary to solve these mysteries in order to answer the
question of whether machines can think. This suggests that Turing sees intelligence and
consciousness as independent.
1.4:
The arguments from various disabilities
These arguments point to something a machine supposedly cannot do, and conclude that, because of
this inability, a machine will never be intelligent. Turing lists some of these alleged
disabilities: being kind, beautiful, friendly, having initiative, having a sense of humor, telling
right from wrong, making mistakes, falling in love, enjoying strawberries and cream, and so on.
He points out that, generally, no evidence is offered to support these arguments. He also comments
that people believe in these disabilities because their previous experience has been with limited,
special-purpose machines.
Lady Lovelace's objection
It is based on the notes of Ada Byron (Lady Lovelace) on Charles Babbage's Analytical Engine, in
which she states that the Analytical Engine has no pretensions to originate anything and can only
do whatever we know how to order it to perform.
However, this does not imply that it is impossible to build a machine that thinks for itself, or
that, in biological terms, it is impossible to endow a machine with conditioned reflexes that could
serve as a foundation for learning. The feasibility of this alternative is another question, one
that should be investigated.
Turing contrasts Ada Byron's position with other questions. How can we be sure that an apparently
original and creative work produced by someone was not simply the growth of a seed planted in that
person by teaching, or the effect of following well-known general principles?
1.5:
The argument from continuity in the nervous system
The brain is not a discrete state machine; its characteristics are very different from those of
this type of machine. For example, a small difference in the signal received by a neuron can
generate large differences in the output signal. Turing was aware of this fact and predicted that
it could be used as an argument against the possibility of thinking machines, on the assumption
that these characteristics of the brain are necessary for thought and could not be reproduced in
discrete state machines.
Although Turing agrees that there is this fundamental difference between brains (continuous
systems) and digital computers (discrete systems), he disagrees that it rules out a thinking
machine. Turing assumes that any system (including continuous systems) can be simulated to a
reasonable degree of accuracy by a discrete machine, provided it has enough computing power to do
so.
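As an illustration of this point (a minimal sketch with assumed parameter values, not an example from Turing's paper), a continuous system such as a leaky integrator can be approximated on a discrete machine by stepping time in small increments; shrinking the step brings the discrete behavior arbitrarily close to the continuous one.

```python
# A minimal sketch (assumed example): the continuous dynamics
# dv/dt = -v / tau + input, approximated with discrete Euler steps.
# Smaller dt yields a closer approximation to the continuous system.

def simulate_leaky_integrator(inputs, tau=0.02, dt=0.001):
    """Discrete-time approximation of a continuous leaky integrator."""
    v = 0.0
    trace = []
    for x in inputs:
        v += dt * (-v / tau + x)  # one Euler step of the continuous dynamics
        trace.append(v)
    return trace

# Constant input of 1.0 for 100 steps; v approaches the steady state tau * 1.0.
print(round(simulate_leaky_integrator([1.0] * 100)[-1], 4))
```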
1.6:
The argument from informality of behavior
This kind of argument is based on the assumption that it is not possible to determine the complete
set of rules describing what a person should do in every possible circumstance, together with the
additional assumption that there is such a set of rules determining what a machine should do in
every possible situation.
The argument holds that having one's behavior governed by a set of rules implies being a machine
and, conversely, that being a machine implies having one's behavior governed by rules. Turing notes
that it is not possible to show that such a set of rules does not exist, and therefore we cannot
say with certainty that human behavior is not governed by a set of general rules of behavior.
The argument from extrasensory perception
In 1950, extrasensory perception (telepathy, clairvoyance, precognition, psychokinesis, etc.) was a
fairly active area of research. Arguments along these lines hold that such abilities could provide
advantages in the Turing test: supposedly, an interrogator could telepathically identify which
participant in a given test is the human.
Turing discussed possible ways to circumvent this. A machine sensitive to telepathy, for example,
would nullify this advantage. Another possibility would be to isolate the participants in a
"telepathy-proof" room.
2. John Searle and the Chinese room argument
2.1
John Rogers Searle is an American philosopher who became well known for his criticism of the
direction taken by the AI project, mainly in the form of the "Chinese room argument". This
argument was presented in the article "Minds, brains and programs" in 1980. The article makes it
clear that Searle has no objection to the weak AI hypothesis, but focuses on outlining a scathing
critique of the strong AI hypothesis.
The focus of Searle's criticism is the human ability to understand (understanding). Searle strives
to show that computer programs cannot display this ability. The philosopher presents a thought
experiment that became widely known in the literature as "the Chinese room experiment":
• Within a room, there is a human being who speaks English (it could be Portuguese), but does not
speak Chinese.
• The room has one channel for the input and one for the output of written sentences.
• Inside the room, there is a book of rules written in English (or Portuguese), which tells the
human being how certain sentences in Chinese relate to other sentences in Chinese. These rules
consider only the form of the symbols.
• The subject inside the room is asked to look at each sentence in Chinese that enters the room,
find this sentence in the book, follow the book's instructions for that input to generate the
corresponding sentence, and place the corresponding sentence in the room's output.
2.2
According to Searle, from the point of view of an outside observer, the outputs of the room are
indistinguishable from the responses of a native Chinese speaker. People outside the room can call
some of the Chinese sentences entering the room "questions" and some of the Chinese sentences
leaving the room "answers", without the person inside the room knowing that questions and answers
are involved. The person inside the room handles these sentences simply by comparing the shape of
the incoming symbols with the shape of the symbols given by the book's rules, without knowing the
meaning of any of the symbols being manipulated.
With this experiment, Searle wants to show that, even if reasonable responses are generated in
natural language, indistinguishable from those a native Chinese speaker would give, there is no
genuine understanding of Chinese in the room, since only meaningless symbols are being manipulated.
As a result, even if a machine could pass the Turing test, this would not mean that it truly
understands what it is doing.
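The purely syntactic character of the room can be illustrated with a short sketch (a hypothetical example, not from Searle's article): the "rule book" is a lookup table that maps input shapes to output shapes, and nothing in the program represents the meaning of any symbol.

```python
# A toy "rule book" (hypothetical illustration): sentences are paired purely
# by their form. The placeholder strings stand in for Chinese symbols.
RULE_BOOK = {
    "SYMBOLS-A": "SYMBOLS-X",
    "SYMBOLS-B": "SYMBOLS-Y",
}

def chinese_room(sentence):
    """Return the output paired with the input's shape; no meaning involved."""
    return RULE_BOOK.get(sentence, "SYMBOLS-DEFAULT")

# Outside observers may call the input a "question" and the output an
# "answer", but the lookup itself knows nothing about questions or answers.
print(chinese_room("SYMBOLS-A"))
```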
For Searle, the human ability to understand is due to what he calls intentionality, a capacity
exhibited by living beings, by which our mental states are directed at objects and states of
affairs in the world.
Searle argues that intentionality can only manifest itself in biological organisms, because it
depends on the causal powers of the brain. He holds that machines could think only if they had
causal powers equivalent to those of the biological brain.
3. Hubert Dreyfus and the critique of the assumptions underlying the AI project
3.1:
Hubert Dreyfus is an American philosopher who, in his book "What Computers Can't Do" (1979),
identifies four problematic assumptions made by researchers in the field, which guide all of AI
research.
The biological assumption
In early research in neurology, scientists assumed that the neuron fired in "all or nothing" pulses
(i.e., it either fired or it did not). This firing pattern allowed scientists to see neurons as
logic gates similar to those of digital computers. The similarity suggested that the brain could be
seen as a manipulator of discrete symbols (zeros and ones). However, Dreyfus presents evidence that
neural firing involves analog components, such as the timing and rate of the pulses, so that what
neurons do could not be reproduced by discrete machines.
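The analogy that Dreyfus questions can be illustrated with a short sketch (an assumed example, not from his book): an all-or-nothing threshold neuron in the style of McCulloch and Pitts can be wired to behave exactly like a digital logic gate.

```python
# A McCulloch-Pitts style threshold neuron (assumed illustration): the output
# is all-or-nothing, so suitable weights make it reproduce digital logic gates.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def and_gate(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def or_gate(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

print(and_gate(1, 1), and_gate(1, 0))  # 1 0
print(or_gate(0, 1), or_gate(0, 0))    # 1 0
```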
The psychological assumption
From the philosopher's standpoint, AI researchers incorrectly assume that the mind operates on
information according to formal (or at least formalizable) rules. For him, there is a mass of
common-sense knowledge of which we are unconscious and which is hard to turn into an explicit
collection of discrete symbols. In Dreyfus's view, this knowledge is not stored in the brain as a
set of individual symbols with meanings, as AI researchers usually assume.
3.2:
The epistemological assumption
This assumption states that all knowledge can be formalized. It is called epistemological because,
in Philosophy, Epistemology is the discipline that studies knowledge.
The philosopher says that even when AI researchers agree that the psychological assumption is
false, they may still assume that it is possible for a machine to process symbols representing all
knowledge, regardless of whether human beings represent knowledge in this way or not. Dreyfus
claims that there is no justification for this assumption, since, from his point of view, a
considerable part of human knowledge is not symbolic.
The ontological assumption
This assumption states that the world consists of independent facts that can be represented by
separate symbols. It is called ontological because Ontology is the philosophical discipline that
studies the basic categories of things that exist in the world.
In the philosopher's view, AI researchers generally assume that there are no limits to formal
knowledge because, like other scientists, they assume that any phenomenon in the universe can be
described by symbols or scientific theories. Thus, it is taken for granted that everything that
exists can be described in terms of objects, object properties, object classes, relations between
objects, and so on. Dreyfus casts doubt on this assumption, stating that it is controversial.
4. Roger Penrose / John Lucas - The mathematical objections
The objection is based on mathematical proofs that certain mathematical questions are in principle
unsolvable by specific formal systems. Gödel's incompleteness theorems are the best-known example
of this type of result. In summary, these theorems show that, given a sufficiently powerful formal
system (such as one realized by a Turing machine), it is possible to construct statements within
that formal system that cannot be proved or disproved within the system itself.
John Lucas claims that Gödel's theorem shows that machines are mentally inferior to humans, because
machines, being formal systems, are bound by the incompleteness theorem (they cannot establish the
truth of certain sentences that they can nonetheless express), whereas humans do not have this
limitation.
Penrose claims that, unlike machines, human beings are able to establish the truth of such
statements by using intuition and creativity, and that these capacities arise from quantum
phenomena occurring in the brain. For Penrose, a machine cannot reproduce these quantum phenomena
and is therefore unable to think like humans.
5. Philosophical issues
• What is intelligence?
• How does intelligence work?
• Can machines think?
• Is a soul necessary for thinking?
• Should we build intelligent machines?
• Can machines have free will?
• Can machines have emotions?
• Can machines be creative?
• Is the brain a computer?
• Do machines merely display the intelligence of their programmers?
• Is the Turing test decisive for judging intelligence?
• Does intelligence require a body?
• Are hardware and software analogous to brain and mind?
• Can machines learn as humans do?
• Can machines adapt to new situations?
• Can machines be conscious?
• Is consciousness necessary for thought?
• Does Gödel's theorem prove that machines cannot think?
• Can machines understand natural language?
• Can machines make art?