Humans, Computers, and Computational Complexity
J. Winters Brock
Nathan Kaplan
Jason Thompson
PSY/ORF 322
5/9/05
The course of human knowledge would be very different if scientists were able to devise
a formal mathematical system that could provide answers to any well-defined question. It would
be even more significant if the steady increase in computing resources and efficiency led to
machines that could solve any computational problem, and solve it quickly. Human beings are
clearly limited in their mental capacity in a way that does not apply to computers. Living beings
are subject to memory constraints that make large-scale computations and deductive problems
intractable. However, the algorithmic processes of machines and formal mathematical reasoning
are subject to constraints as well. There are problems that are easy to state and understand which
have been proven unsolvable. Mathematical proofs have demonstrated the limits of
mathematical reasoning. As we develop more sophisticated computers, there are still problems that take so many steps to solve algorithmically that machines cannot feasibly solve them.
Around a century ago it was widely believed that there was a mathematical theory that
could provide answers to any question. David Hilbert proposed a “Theory of Everything… a
finite set of principles from which one could mindlessly deduce all mathematical truths by
merely tediously following the rules of symbolic mathematical logic”i. Such a theory would
demonstrate the unbounded power of mathematical thinking. Computers would be of increased
significance if such a theory existed because of their demonstrated capacity as symbol
manipulators. However, Hilbert’s dream will never be realized. Two of the most significant
theorems of mathematical logic are Gödel’s Incompleteness Theorems, proven by the Austrian
logician Kurt Gödel in 1931. These theorems show that we can never find a formal axiomatic system that can prove all mathematical truths.
He demonstrated that there exist statements within a formal axiomatic system that are
true but not provable. Gödel highlights an important distinction within human knowledge. A
proof is a syntactical object that exists within a formal axiomatic system, but truth is a semantic
notion. The technical details are difficult however the key ideas within the proof demonstrate the
power of paradox in showing the limits of formal mathematics. The key example to consider is
the statement: “This statement is unprovable”. Within a formal mathematical system, a statement can only have a proof if it is true, so any false statement is unprovable. If we assume that this statement is false, then it is unprovable, which is exactly what the statement claims, so it would be true after all. This gives us a contradiction, so we must conclude that the statement is true. But a true statement asserting its own unprovability is precisely a truth that the system cannot prove.ii
Self-referential statements often create these types of paradoxes within mathematical logic. For example, consider the set of all sets that do not contain themselves. If this set contains itself as an element, then by definition it does not contain itself; if it does not contain itself, then by definition it must contain itself. Either way we reach a contradiction. This is known as Russell’s paradox, and it has prompted much work within mathematical logic and set theory.iii One of the truly ingenious ideas within Gödel’s proof is that he is able to construct self-referential statements by using the concept of Gödel numbers. He develops a method of referring to statements that is completely contained within a formal axiomatic system. Gödel’s proof put an end to the quest for Hilbert’s Theory of Everything, but only in an abstract sense.
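To make the idea of Gödel numbering concrete, here is a minimal sketch in Python; the symbol table and encoding scheme are illustrative assumptions rather than Gödel's actual construction. Each symbol of a formula is mapped to a small integer, and the whole formula is packed into a single number using prime exponents, so that statements about formulas become statements about numbers.

    # Toy Goedel numbering (illustrative scheme, not Goedel's exact one):
    # each symbol gets a code, and a formula becomes a product of prime powers.
    SYMBOL_CODES = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6, "x": 7}

    def first_primes(n):
        """Return the first n primes by trial division."""
        found, candidate = [], 2
        while len(found) < n:
            if all(candidate % p for p in found):
                found.append(candidate)
            candidate += 1
        return found

    def goedel_number(formula):
        """Encode a formula (a string of symbols) as a single integer."""
        codes = [SYMBOL_CODES[symbol] for symbol in formula]
        number = 1
        for p, c in zip(first_primes(len(codes)), codes):
            number *= p ** c
        return number

    # The formula "0=0" becomes the single number 2**1 * 3**3 * 5**1 = 270,
    # so reasoning about formulas turns into arithmetic on their numbers.
    print(goedel_number("0=0"))   # 270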
Gödel established the limits of formal mathematical reasoning. He showed that if you
could axiomatize a formal system and program the rules into a computer, then even after an
infinite amount of time, no matter how clever the computer was, it could not prove all of the true
statements within the system. He did not, however, demonstrate the extent to which the incompleteness of mathematics is a practical problem; he exhibited only isolated instances of what is in reality a glaring weakness in mathematical and algorithmic thinking. In 1936, a
British mathematician named Alan Turing showed that there are problems that are undecidable.
This first example, Turing’s Halting Problem, is a problem of practical as well as theoretical
significance. Suppose you are trying to write a computer program. The code quickly becomes
too complicated to keep it all in your mind at once. There are too many conditions and too many
loops. You want to know whether when you run the program, it will run to completion and
produce an output, or whether it will get caught in an infinite loop and run forever without
halting. It would be a huge advantage to have an algorithm that could read your code and
determine whether it will ever halt. The problem can be stated more precisely as, “Given a
description of an algorithm and its initial input, determine whether the algorithm, when executed
on this input, ever halts (completes). The alternative is that it runs forever without halting.”iv
Turing’s result is that a general algorithm that solves this problem for all potential inputs cannot
exist. This means that the Halting Problem is undecidable.
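The flavor of Turing's argument can be sketched in a few lines of Python. Suppose, purely hypothetically, that a function halts(program, data) existed and always answered correctly; the self-referential program below (the names are illustrative assumptions, not Turing's notation) could then be fed to itself to force a contradiction.

    # Hypothetical assumption: halts(program, data) returns True if program(data)
    # eventually halts and False otherwise. Turing's theorem says no such
    # always-correct, always-terminating function can exist.

    def contrary(program):
        # Do the opposite of whatever halts() predicts about running
        # this program on its own source.
        if halts(program, program):
            while True:          # predicted to halt, so loop forever
                pass
        else:
            return "halted"      # predicted to loop forever, so halt at once

    # Asking halts(contrary, contrary) now has no consistent answer:
    # whichever verdict it returns, contrary does the opposite.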
While this problem is of practical significance, the proof is abstract. First Turing had to
establish exactly what he meant by an algorithm. In order to do this, he introduced the idea of a Universal Turing Machine. A Turing machine is an abstract machine that is not bound by limits of memory capacity, and the universal machine has the key property that it can simulate the processes of any other Turing machine. A Turing machine consists of a one-dimensional tape, which is divided into cells that contain symbols from a finite alphabet. The tape extends arbitrarily far in both the right and left directions. There is a cursor that considers one cell at a time, reads the symbol there, and can write symbols on the tape. The Turing machine also has a state register that stores the machine's current state, beginning with a designated initial state. The behavior of the Turing machine is determined by a transition function that tells the machine how to move from one state to the next. More specifically, given the current state and the symbol under the cursor, the function determines what symbol to write, which way the cursor should move, and which state to enter next. The machine also has designated halting states.
Each aspect of the Turing machine is finite, but because the tape is arbitrarily long, the universal machine can simulate an infinite number of other machines. This is an informal description of a Turing machine, but Turing formalized all of these concepts within the vocabulary of mathematical logic.v The Turing machine is different from real machines in that it is not subject to constraints on the amount of data it can handle or the amount of memory that its algorithms employ. However, since this universal machine can simulate real machines, the limits of the Turing machine correspond to weaknesses in the algorithmic thinking of computer programs.
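As a concrete illustration of the informal description above, here is a minimal Turing machine simulator in Python; the tape representation, the example machine, and the step cap are assumptions made for this sketch, not part of Turing's formalism. A transition table maps a (state, symbol) pair to the symbol to write, the direction to move, and the next state, and the machine steps until it enters a halting state.

    # Minimal Turing machine simulator (illustrative sketch).
    def run_turing_machine(transitions, tape, start_state, halt_states, max_steps=10_000):
        cells = dict(enumerate(tape))        # sparse tape: cell index -> symbol
        head, state = 0, start_state
        for _ in range(max_steps):           # a real machine may never halt; we cap steps
            if state in halt_states:
                return state, [cells[i] for i in sorted(cells)]
            symbol = cells.get(head, "_")    # "_" stands for a blank cell
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        raise RuntimeError("step limit reached without halting")

    # Example machine: flip every bit on the tape, then halt at the first blank.
    flip_bits = {
        ("scan", "0"): ("1", "R", "scan"),
        ("scan", "1"): ("0", "R", "scan"),
        ("scan", "_"): ("_", "R", "done"),
    }
    print(run_turing_machine(flip_bits, "1011", "scan", {"done"}))
    # -> ('done', ['0', '1', '0', '0', '_'])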
There are many ways to prove that Turing’s Halting problem is undecidable, most of
which employ the same techniques and ideas of the contradictions caused by paradox. The
following argument is due to Gregory Chaitin, a computer scientist and pioneer of algorithmic
information theory (AIT). This particular argument is significant because it introduces the idea
of complexity as a measure of program size, and not as a measure of how efficient it is to run.
This is the key idea of AIT. This proof relies on the contradiction brought about by a Berry
paradox, which again depends on a type of self-referentiality.vi The easiest way to express the idea of the paradox is to consider the example: “The smallest positive integer that cannot be expressed in fewer than fifty English words.” The phrase itself contains far fewer than fifty words, yet it picks out exactly the integer it claims cannot be so expressed. One objection is that it is not well defined to say that something cannot be expressed in fewer than fifty words, so the paradox introduces subtle linguistic issues; however, it is still a useful tool for incompleteness results.
In order to show that our Halting program, which we can call H, does not exist, it is not feasible to check all programs and show that none of them is a satisfactory halting algorithm. Instead we assume that H exists and show that it leads to a logical contradiction. So
if H exists, then we can create a larger program P that uses H as a subroutine. We can also allow
P to contain some notion of its size in terms of the number of bits that it takes to write P. Let us
say that P is N bits. So when we run P, it uses H to consider all possible programs of length less
than 100N and determine which of these halt. P then runs all of these programs and stores their
outputs in some finite list. Therefore, we have a list of all outputs that can be produced by
programs of length less than 100N. The output of P is the smallest positive integer that is not in
this list. However P is length N, which is clearly less than 100N. Logically, the output of P must
be in the list. This gives us a contradiction. Therefore we conclude that the program H cannot
exist, because if it did, then we would have no problem constructing P. Turing relates his
undecidability problem back to Hilbert’s Theory of Everything. He does so by explaining that if
such a theory existed which allowed you to prove whether an individual program halts or not,
then you could run through all possible proofs and determine whether any given program halts or
not. You could then construct H, which we just showed leads to a contradiction. Therefore such
a theory that allows halting proofs cannot exist. vii
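The structure of the program P can be sketched as follows; this is only an illustration of the argument above, and the helper functions, the constant 100, and the size variable are hypothetical assumptions, not Chaitin's actual code.

    # Hypothetical helpers assumed to exist: halts(p) is the supposed halting
    # decider H, run(p) returns a program's output, programs_shorter_than(k)
    # enumerates all programs of fewer than k bits, and SIZE_OF_P is this
    # program's own size in bits (Chaitin shows a program can know its size).

    def smallest_unlisted_integer():
        outputs = set()
        for p in programs_shorter_than(100 * SIZE_OF_P):
            if halts(p):                # only possible if H really exists
                outputs.add(run(p))
        n = 1
        while n in outputs:             # smallest positive integer not produced
            n += 1
        return n

    # This program is itself shorter than 100 * SIZE_OF_P bits, so its output
    # should already appear in `outputs` -- a contradiction. Hence H cannot exist.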
Turing’s Halting Problem relies on the abstract concept of the Turing Machine. An
actual machine has finite storage capacity and memory. The halting problem for finite machines
is a little different, and it can be shown that there is a general algorithm to determine whether a
program running on this limited machine will halt.viii Chaitin approaches incompleteness problems a little differently: he demonstrates an undecidable question by writing real programs on a real machine and deriving a contradiction. Again, Chaitin’s proof uses the idea of
program size as a measure of complexity, and also, his result comes from a contradiction similar
to a Berry paradox. He creates a program that achieves some result, and then shows that the
program is actually too small to achieve this output.ix He does not need the kind of self-referential statements that Gödel used in his proof.
Chaitin works within a modified version of the computer language LISP. This is a
natural choice of language because LISP is very mathematical, and is somewhat like “set theory
for computable mathematics.”x A LISP expression applies defined functions to values, and evaluating an expression yields a final value; in other words, LISP takes expressions and gives back values. In his version of LISP, Chaitin is able to code all of the rules of a formal axiomatic system. He is also able to write a finite amount of code that he can add to any program so that the program knows its own size in bits.
Chaitin defines an “elegant” program to be one for which no shorter program produces
the same outputs.xi This is a notion that is very natural in computer science or even in
mathematics. The shortest program may not be the most efficient or the easiest to understand; however, there is some inherent beauty in expressing an idea with as few lines of code as possible. A shorter program, or a shorter proof, that produces the same result as an earlier attempt can be considered a kind of improvement. It is not difficult to see that for any output there is at least one elegant program: in LISP, the value of a LISP expression is itself a LISP expression, and among the set of programs that give the same output, while it would not make sense to talk about a largest one, a smallest one must exist. For example, a formal axiomatic system coded in LISP will have some elegant program. If an output can only be generated by an expression with N bits or more, then we say that the output has complexity N. So it is natural to ask whether a given program is an “elegant” program or whether it can be replaced by a shorter program with the same output. Chaitin’s incredible result is that: “A formal
axiomatic system whose LISP complexity is N cannot prove that a LISP expression is elegant if
the expression’s size is greater than N+410. So there are a finite number of LISP expressions
that can be shown to be elegant.”xii The reason for this constant 410 is that this is the number of
characters in modified LISP that Chaitin needs to add to his program to make it aware of its own
size.
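To see why proving elegance is so difficult, consider the naive strategy sketched below; it is a Python illustration built on assumed helper functions, not Chaitin's LISP construction. Certifying that a program is elegant means ruling out every shorter program, but some shorter programs may never halt, so the obvious check runs straight into the halting problem.

    # Naive, uncomputable elegance check -- an illustration, not Chaitin's method.
    # Hypothetical helpers: size_in_bits(p), run(p) giving p's output, and
    # programs_of_size(k) enumerating every program of exactly k bits.

    def is_elegant(program):
        target = run(program)
        for k in range(size_in_bits(program)):        # every smaller size
            for shorter in programs_of_size(k):
                # The obstacle: run(shorter) may never return, and by
                # Turing's result there is no reliable way to detect that.
                if run(shorter) == target:
                    return False                      # a shorter program suffices
        return True

    # Chaitin's theorem sharpens the difficulty: a formal axiomatic system of
    # LISP complexity N can prove elegance only for expressions of size at most N + 410.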
Chaitin’s result is truly surprising. “The game in incompleteness results is to try to state
the most natural problem, and then show that you can’t do it, to shock people!”xiii His idea of an elegant program, in this case a LISP expression, is a very straightforward mathematical notion, and it is natural to ask whether a program can be improved in the sense that a shorter program generates the same output. There are clearly an infinite number
of elegant expressions waiting to be discovered, but only a finite number of them can be proven
to be elegant within any formal axiomatic system. The axiomatic system is necessary because it
is what tells us what rules we can use to construct a proof. “And you can’t prove that a LISP
expression is elegant if it’s more than 410 characters bigger than the LISP implementation of the
axioms and rules of inference that you’re using to prove that LISP expressions are elegant.”xiv
Chaitin’s result shows us in a very concrete way, by programming in a real language on a real
computer, the limits of a formal axiomatic system in proving statements about algorithms.
Hilbert’s dream of an axiomatic system that can derive all mathematical truth has been shattered.
There are a number of problems that can never be solved in any satisfactory sense within a
formal system. Human beings can achieve some overall understanding of these problems that an
algorithm can never attain.
It is unknown exactly how prevalent undecidable problems are within any particular
formal axiomatic system, but they are a cause for serious concern. Turing’s Halting Problem,
Gödel’s paradoxes, and problems such as Chaitin’s elegant LISP programs are not isolated
instances, but are examples that signify a larger class of knowledge which algorithmic thinking
can never reach. However, algorithmic methods are successful in solving a vast number of other
problems. Therefore, it is logical to ask whether, as computing power increases at an impressive
rate, we are approaching a time when any computationally solvable problem can be fed into a
computer that uses some algorithm to produce the desired output within a reasonable amount of
time. This is the question that motivates the theory of Computational Complexity.xv Here we
treat complexity, not as a measure of program length, but as the total number of steps, or amount
of time that it takes an algorithm to solve a problem.
It is helpful to introduce the main ideas of this theory through an example. A Boolean
statement is a statement in propositional logic that is made up entirely of variables, parentheses,
and the logical connectors AND, OR and NOT.xvi The Boolean Satisfiability Problem asks whether, given a Boolean statement, there is some way to assign truth-values to the variables in order to make the entire statement true. The satisfiability problem can have only two answers, Yes or No, and the task is to make this decision. Decision problems are the main focus
of complexity theory, as they are approachable by algorithms and most other problem types can
be rephrased as decision problems. This is a problem that quickly becomes difficult for humans.
As we saw in the lectures on mental models and logic, people have an extremely difficult time
mentally representing complex statements in propositional logic. The “Murder on the Orient
Express” problem from lecture was far too complex and had too many variables for any student
to solve mentally.xvii There are some obvious ways to approach this problem. With N variables, there are 2^N ways to assign truth-values to these variables. One could use truth tables to check all
of the possibilities and see whether any assignment of truth-values made the statement true.
However this gives us a search space of exponential size. Even with a fast computer the number
of computations required to check an exponential number of possibilities grows very fast for
large values of N. However, this problem has a special property that makes it less hopeless to
approach algorithmically. There are a large number of truth assignments that must be checked; however, the number of steps required for a computer to check whether any individual truth
assignment gives an overall value of true is bounded by a polynomial in terms of the length of
the expression.xviii Even if this is a very large polynomial with very large coefficients, we say
that this is an effectively checkable process. This is because when the expression contains very
large numbers of variables, even a large polynomial will give lower values than an exponential
expression.
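The contrast between exponential search and polynomial checking can be made concrete with a short Python sketch; the clause representation and function names here are assumptions made for the example, not a standard library interface. A single assignment is verified in time proportional to the size of the formula, but the outer loop visits all 2^N assignments.

    from itertools import product

    # A Boolean statement written as an AND of OR-clauses; each clause is a list
    # of (variable, is_positive) literals. This example encodes
    # (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3).
    FORMULA = [[("x1", True), ("x2", False)],
               [("x2", True), ("x3", True)],
               [("x1", False), ("x3", False)]]
    VARIABLES = ["x1", "x2", "x3"]

    def satisfies(assignment, formula):
        """Polynomial-time check: does this assignment make every clause true?"""
        return all(any(assignment[var] == positive for var, positive in clause)
                   for clause in formula)

    def brute_force_sat(formula, variables):
        """Exponential-time search: try all 2^N truth assignments."""
        for values in product([True, False], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if satisfies(assignment, formula):
                return assignment        # a satisfying assignment, if one exists
        return None                      # otherwise the statement is unsatisfiable

    print(brute_force_sat(FORMULA, VARIABLES))   # {'x1': True, 'x2': True, 'x3': False}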
We can now define the two most important classes of problems in complexity theory.
We define P to be the set of all decision problems that can be solved by a deterministic Turing
machine in a number of steps bounded by a polynomial in terms of the length of the problem
expression. The class NP is the set of all decision problems that can be solved by a non-deterministic Turing machine in polynomial time. There is an important distinction to be made between these two types of abstract machines. A deterministic Turing machine has only one set of transition rules, which determines the next machine state given the current state and the symbol being read. A non-deterministic Turing machine can have several sets of transition rules. We can think of a non-deterministic Turing machine as having the ability to branch as it carries out its algorithm. If any
of the machine’s branches halt and reach a final output, then we say that the algorithm halts. We
should also note that any process that can be carried out by a deterministic machine can be
simulated by a non-deterministic machine, specifically one that follows the same transition rules
without branching. Therefore it is clear that P is a subset of NP.
Interestingly, the condition of “effectively checkable” is equivalent to saying that a
problem can be solved in polynomial time by a non-deterministic Turing machine. So Boolean
Satisfiability is our first example of a problem in NP. The next question to ask is whether we can
do any better. Is there a polynomial time algorithm on a deterministic Turing machine to solve
the Boolean Satisfiability problem? This question is essential to understanding the relationship
between computing power and our ability to solve difficult problems. We use the existence of a
polynomial time deterministic algorithm as our definition for effectively solvable. This is a
variation of the Feasibility Thesis, which states that “A natural problem has a feasible algorithm
if it has a polynomial time algorithm.”xix If we can start with nothing and find a solution within a
polynomially bounded number of steps, then on a fast enough computer we should expect an
answer within a satisfactory amount of time.
This definition is far from perfect. It ignores the size of both the exponents and
coefficients in the polynomial. It also considers only the longest possible time to get a solution,
i.e., the worst-case scenario. Perhaps we could find an algorithm that runs quickly but is correct only with high probability rather than all of the time.xx There are other non-deterministic
approaches to solving difficult problems that should be mentioned. Quantum computers and
other probabilistic machines have shown potential in solving problems that are intractable to
normal machines. Roger Penrose, a noted physicist and philosopher, believes that these new
approaches could hold the key to understanding human thinking as well as computational
complexity.xxi
Before moving into a world of non-deterministic machines, we should understand the
limits of deterministic algorithms to quickly establish mathematical certainty. NP problems,
which offer only non-deterministic polynomial time solutions, do not necessarily fit our
definition of “effectively solvable.” We should note that due to an interesting result known as
the Time Hierarchy Theorem, harder problems than NP exist, and in fact harder problems always
exist.xxii For example, a problem believed to be too hard to lie within NP is the Boolean
Satisfiability problem when we allow the quantifiers ‘for all’ and ‘there exists’ to be added to our
variables.
There is a subclass of NP problems, known as NP-Complete problems, which are of huge
theoretical significance. An NP-Complete problem is an NP problem that possesses the property
that any other NP problem can be reduced to it. We use the term reducible to mean that there
exists a deterministic polynomial time algorithm that transforms instances of our NP problem
into instances of the NP-Complete problem in such a way that the answers to the two instances
are always the same.xxiii We can think of NP-Complete problems as the hardest problems in NP,
in the sense that if there is a polynomial time algorithm for an NP-Complete problem, then there
is a polynomial time algorithm for every NP problem. If an NP-Complete problem is in P, then
P=NP. In 1971 the American-Canadian mathematician and computer scientist Stephen Cook showed that Boolean Satisfiability is NP-Complete. This is one of the most significant results in complexity theory, and since then NP-Completeness has been studied extensively and hundreds of other problems have been shown to be NP-Complete. Interestingly, a variation of Boolean Satisfiability called 3-SAT, in which the statement is written in conjunctive normal form with three literals per clause, has also been proven NP-Complete.xxiv Before
Cook’s result, it was not even known that any NP-Complete problems existed. Now there is an
extensive list of problems which hold the key to the question of whether P=NP.
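For concreteness, a 3-SAT instance looks like the small, made-up example below, shown in the same clause representation used in the earlier sketch; every clause is a disjunction of exactly three literals, and the question is whether some truth assignment satisfies all clauses at once.

    # A made-up 3-SAT instance in conjunctive normal form:
    # (x1 OR x2 OR NOT x3) AND (NOT x1 OR x3 OR x4) AND (NOT x2 OR NOT x3 OR NOT x4)
    THREE_SAT = [[("x1", True),  ("x2", True),  ("x3", False)],
                 [("x1", False), ("x3", True),  ("x4", True)],
                 [("x2", False), ("x3", False), ("x4", False)]]

    # Checking one candidate assignment takes time proportional to the formula size;
    # Cook's theorem says any NP problem can be translated, in polynomial time,
    # into a satisfiability question of exactly this form.
    candidate = {"x1": True, "x2": True, "x3": True, "x4": False}
    satisfied = all(any(candidate[var] == positive for var, positive in clause)
                    for clause in THREE_SAT)
    print(satisfied)   # True: this particular assignment satisfies every clause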
Most computer scientists and mathematicians believe that P≠NP. Attempts to find
polynomial time algorithms for NP-Complete problems have failed for the past few decades.
Theorists have found algorithms to solve the 3-SAT problem that take approximately 1.5^N steps, where N is the number of variables in the formula; however, this is a long way from a
polynomial time algorithm.xxv Attempts to prove that P≠NP have been equally frustrating. Two
main approaches have been tried. The first, diagonalization and reduction, is a variation of the
method that Alan Turing first used to prove the Halting Problem undecidable. This procedure
has had applications to complexity theory by finding lower bounds for difficult but decidable
problems. However, there is significant evidence to suggest that this approach will not lead to a
distinction between the two problem classes. The other main approach, Boolean circuit lower
bounds, highlights the practical significance of the satisfiability problem. Electronic circuits are
constructed and designed in a way similar to the construction of Boolean statements, and asking
whether a statement is satisfiable is equivalent to finding non-trivial properties of a circuit. It is
interesting to note that human beings have been effectively designing circuits for decades,
learning to cope with a problem within reasonable amounts of time that is difficult even for
complex computer programs. However, the search for a “super-polynomial lower bound on the size of any family of Boolean circuits”xxvi that solves some NP-Complete problem has not yet succeeded. Other approaches to the P vs. NP problem have been suggested, including the idea
that the question is undecidable within a formal axiomatic system. Chaitin has even suggested
that it would be reasonable to add the statement, P≠NP, as an extra axiom to our axiomatic
system.xxvii However, the majority of complexity theorists are hopeful that a proof that P≠NP will soon be found. The Clay Mathematics Institute named this question of complexity one of the most significant open problems at the turn of the millennium and is offering a $1,000,000 prize for a proof either that P=NP or that P≠NP.
Questions of computational complexity are of huge practical significance. For example,
many Internet security systems are based on the idea that it is difficult to factor large numbers, a problem that is in NP. If P=NP, then there would be serious cause for alarm, as our cryptography
systems would become far more vulnerable than we would hope. There is another interesting
consequence of a world in which P=NP. “It would transform mathematics by allowing a
computer to find a formal proof of any theorem which has a proof of reasonable length, since
formal proofs can easily be recognized in polynomial time.”xxviii The problem of finding new
proofs would be transformed into finding an algorithm to identify valid proofs within a formal
axiomatic system. On the other hand, a proof that P≠NP would not only establish that certain algorithms will never reach a certain level of efficiency, but would also cast doubt on the ability of computer-generated proofs to transform the world of mathematics.
A formal axiomatic system cannot offer humans the chance to derive all mathematical
truth. Gödel and Turing have put an end to Hilbert’s dream of a theory of everything.
Complexity theory takes a different approach. Even for decidable problems, there may not be
feasible deterministic algorithms that produce answers. Perhaps the ‘elegant’ programs that
Chaitin defines are just not elegant or efficient enough to solve many of our problems. A
deterministic algorithm may not be enough to provide the answers that we need. Quantum
computers offer one potential solution to finding efficiency where traditional computers fail, but
they are a special case of a larger question. There is a large body of evidence to suggest that
human beings are not physical representations of Turing machines. We are not completely
algorithmic in our thinking. There is something that we possess that an algorithm on a computer
does not. In many cases this is an obvious disadvantage; however, it could be possible that
human cognition has advantages to offer to the world of computational efficiency.
Searle’s Chinese room thought experiment
Searle’s Chinese room thought experiment is a philosophical argument that allows us to
recognize a fundamental difference in the reasoning capabilities of a human versus that of a
computer.
“Imagine that you carry out the steps in a program for answering questions in a language
you do not understand. I do not understand Chinese, so I imagine that I am locked in a
room with a lot of boxes of Chinese symbols (the database), I get small bunches of
Chinese symbols passed to me (questions in Chinese), and I look up in a rule book (the
program) what I am supposed to do. I perform certain operations on the symbols in
accordance with the rules (that is, I carry out the steps in the program) and give back
small bunches of symbols (answers to the questions) to those outside the room. I am the
computer implementing a program for answering questions in Chinese, but all the same I
do not understand a word of Chinese” (Ftrain.com).
Hypothetically, in this experiment the agent in the room becomes so adept with
syntactical agreement and verbal exchanges that his fluency is indistinguishable from that of a
fluent Chinese speaker. However, a crucial difference still remains. The agent is only making
pattern associations on the basis of syntax. So although he is able to shuffle and match characters according to the rulebook, this process is not equivalent to the syntactic and semantic understanding that a fluent Chinese speaker possesses. This implies that no matter what degree of seemingly intelligent output a computer may produce, the fundamental difference between how it and a human reason lies in comprehension. Our human ability to
understand the semantics of our reasoning processes will always separate our cognition from that
of computers. This inherently human phenomenon has been labeled the “intentional mental state,” a notion that includes characteristically human experiences like intentions, beliefs, desires, and goals. For example, when a computer generates a proof, it is producing a syntactical object.
However, as we saw in the “All Frenchmen Are Gourmets” problem from lecture, semantic context and understanding can provide measurable benefits in human reasoning.
What are the implications of this? First, this proposition makes headway into the
mechanistic debate on how to define the process of human reasoning. Are we Computationalist,
meaning are all human activities reducible to algorithms and could they therefore be implemented by a computer? (Torrence, 1998) Certainly not: as the Chinese room thought
experiment shows, there is inherently more to human reasoning than merely algorithmic
processing. Moreover, “intentional mental states” carry over into real-world counterarguments to the Computationalist perspective with uniquely human phenomena like inconsistent preferences or loss aversion, as in the Asian disease problem, where a human given the same situation will favor different choices as a function of context (Kahneman & Tversky, 1984).
Could humans be anti-computationalist? On this view, human reasoning is not computational at all and cannot be modeled computationally. This position is also incorrect. Computational thought is used every day in a diverse array of circumstances. In fact, many scientists would agree that general human reasoning has a preponderance of algorithmic analysis (Torrence,
1998).
Perhaps the most adequate account of human reasoning is “pluralism,” wherein much of reasoning is computational, but some is not. Pluralism allows us to maintain that much of human reasoning is computational while parts of it remain noncomputational, for example intentional mental states (Torrence, 1998).
As the beginning of this section suggests, our pluralistic account of human reasoning leads us to ask whether the differences in human reasoning have something to offer to the world of computational efficiency. That is, do the noncomputational aspects of human cognition provide a superior method that allows humans to transcend the boundaries of computational analysis within which computers are constrained? To answer this question, we look at a task that is seemingly trivial for humans but engenders great difficulty in computers: language processing.
Humans learn to process language with apparently trivial ease: virtually all children attain a foundational mastery of their language in a mere four or five years (Smith et al. 2003). However, the natural proficiency with which humans learn to process language belies the intrinsic complexity of the task. It is through analyzing the difficulties of modeling language processing on computers that we can truly come to appreciate this advantage, and we must further acknowledge a uniquely human proficiency in this task that is exclusive to the human pluralistic model.
The difficulties of language processing arise from feature agreement and lexical ambiguity. Feature agreement is widespread in human language, for example in subject/verb agreement and in adjective agreement as well. Lexical and structural ambiguities are also common: we have multiple words that function as homonyms, so that, for example, “block” might take on the role of a verb or a noun. In Barton et al. (1987), the authors use a computational linguistic model to account for both agreement and ambiguity. Given the features of agreement and the prevalence of lexical ambiguity, they prove that these two in combination can produce computational intractability: the recognition problem for the linguistic model is NP-Complete, and therefore NP-Hard.2
In summary, in “natural language, lexical elements may be required to agree (or disagree) on such features as person, number, gender, case, count, category, reference, thematic role, tense, and abstractness; in SAT, agreement ensures the consistency of truth-assignments to variables” (Barton et al. 1987, p. 100). Lexical ambiguity poses a challenge to the two-valued logic of Boolean Satisfiability: for example, the word “can” is able to function as a noun, verb, or modal, and representing language elements as Boolean variables is troublesome when there are more than two possibilities. “Thus, the linguistic mechanism for agreement and ambiguity are exactly those needed to solve satisfiability” (Barton et al. 1987, p. 100).
2 NP-Hard problems are those at least as hard as every problem in NP; an NP-Hard problem need not itself belong to NP, and some NP-Hard problems (such as the Halting Problem) cannot be solved by any algorithm at all.
The performance limitations of our linguistic model arise in cases of excessive lexical and structural ambiguity, as in example one, and of elaborate agreement processes, as in statement two.
1) Buffalo buffalo buffalo buffalo buffalo.
2) John owned and then sold hundreds of late model cars to us that he waxed all the time.
In the first case, one sequence of many possible interpretations uses the grammatical
categories of buffalo in the following sequence: adjective, subject, transitive verb, adjective, and
direct object. In the second example, the constituent coordination of words is unclear (Barton et al. 1987, p. 100).
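To make the connection to satisfiability concrete, here is a toy Python sketch; the words, the two-way category choice, and the constraints are invented for illustration and are not Barton et al.'s grammar. Each ambiguous word gets a Boolean variable recording how it is read, agreement requirements become clauses, and finding an acceptable reading is exactly finding a satisfying assignment.

    from itertools import product

    # Invented example: True means "read this word as a verb", False "as a noun".
    AMBIGUOUS_WORDS = ["block", "buffalo"]

    # Toy agreement constraints for a made-up sentence, in the same clause shape
    # as a SAT formula: at least one word must serve as the verb, and the two
    # words cannot both be verbs.
    CONSTRAINTS = [[("block", True), ("buffalo", True)],
                   [("block", False), ("buffalo", False)]]

    def acceptable_readings():
        readings = []
        for values in product([True, False], repeat=len(AMBIGUOUS_WORDS)):
            reading = dict(zip(AMBIGUOUS_WORDS, values))
            if all(any(reading[w] == v for w, v in clause) for clause in CONSTRAINTS):
                readings.append(reading)
        return readings

    # With many ambiguous words the space of readings again grows as 2^N,
    # which is why recognition in such a model inherits the hardness of SAT.
    print(acceptable_readings())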
How can humans handle language processing so proficiently, a task which in many cases is computationally intractable for a machine? The implication we draw is that the non-computational aspects of human reasoning are responsible for this advantage. If human internal mental states provide a method for human reasoning to overcome computational barriers, then
one may ask: are the intentions, beliefs, desires, and goals that are inherent in the passages we read and the conversations we have what allow us to discern the feature agreement and lexical ambiguity that is present? We believe this could be the case, although more research into the subject would be necessary to draw that conclusion.
The search for the Theory of Everything from which all mathematical truth can be
derived will never be a success. There are important practical problems that can never be solved by algorithms. There are still more problems of practical importance, such as Boolean Satisfiability, that may never be solved efficiently by algorithms. However, humans possess some reasoning ability that
these deterministic algorithms lack. These special cognitive processes that are connected with
semantic understanding allow us to excel where algorithms fail; they quite possibly hold the key
to success in solving a whole new class of problems that are intractable for deterministic
algorithms.
Bibliography
1. Barton, E., Berwick, R., & Ristad, E. (1987). Computational Complexity and Natural Language. Cambridge: MIT Press.
2. Chaitin, Gregory. “Elegant LISP Programs.” [Online] Available http://www.cs.auckland.ac.nz/CDMTCS/chaitin/lisp.html
3. Chaitin, Gregory. “Irreducible Complexity in Pure Mathematics.” [Online] Available http://www.umcs.maine.edu/~chaitin/xxx.pdf
4. Chaitin, Gregory. “The Berry Paradox.” [Online] Available http://www.cs.auckland.ac.nz/CDMTCS/chaitin/unm2.html
5. “The Chinese Room Thought Experiment.” [Online] Available http://www.ftrain.com/ChineseRoom.html
6. Cook, Stephen. “The P versus NP Problem.” [Online] Available http://www.claymath.org/millennium/P_vs_NP/Official_Problem_Description.pdf
7. “Complexity Classes P and NP.” [Online] Available http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP
8. Denning, Peter J. “Is Thinking Computable?” American Scientist, 78, pp. 100-102. [Online] Available http://cne.gmu.edu/pjd/PUBS/AmSci-1990-2-thinking.pdf
9. Garey, M. R., & Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman and Company, p. 124.
10. “Halting Problem.” [Online] Available http://en.wikipedia.org/wiki/Halting_problem
11. Irvine, A. D. “Russell’s Paradox.” The Stanford Encyclopedia of Philosophy (Summer 2004 Edition), Edward N. Zalta (ed.). [Online] Available http://plato.stanford.edu/archives/sum2004/entries/russell-paradox/
12. Kahneman, D., & Tversky, A. (1984). Choices, Values, and Frames. American Psychologist, 39, 106-115.
13. Lecture Notes, MAT 312, Prof. Nelson, Spring 2005.
14. Lecture Notes, PSY/ORF 322, Prof. Johnson-Laird, Spring 2005.
15. “Non-deterministic Turing Machine.” [Online] Available http://en.wikipedia.org/wiki/Non-deterministic_Turing_machine
16. Papadimitriou, Christos (1994). Computational Complexity. New York: Addison Wesley Publishing Company.
17. Torrence, S. (1998). Consciousness and Computation: A Pluralist Perspective. AISB Quarterly, No. 99, Winter-Spring.
18. “Turing Machine.” [Online] Available http://en.wikipedia.org/wiki/Turing_machine
Notes
i. Gregory Chaitin, “Irreducible Complexity in Pure Mathematics”, 1.
ii. Ibid., 10.
iii. Irvine, http://plato.stanford.edu/archives/sum2004/entries/russell-paradox/.
iv. “Halting Problem.”, http://en.wikipedia.org/wiki/Halting_problem.
v. Papadimitriou, 20.
vi. Chaitin, “The Berry Paradox.”, http://www.cs.auckland.ac.nz/CDMTCS/chaitin/unm2.html.
vii. Chaitin, “Irreducible Complexity in Pure Mathematics.”, 10.
viii. “Halting Problem.”, http://en.wikipedia.org/wiki/Halting_problem.
ix. Chaitin, “Elegant LISP Programs.”, http://www.cs.auckland.ac.nz/CDMTCS/chaitin/lisp.html, 2.
x. Ibid., 2.
xi. Ibid., 1.
xii. Ibid., 8.
xiii. Ibid., 11.
xiv. Ibid., 11.
xv. Garey, 5.
xvi. Papadimitriou, 73.
xvii. Johnson-Laird.
xviii. Papadimitriou, 78.
xix. Cook, 5.
xx. “Complexity Classes P and NP.”, http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP.
xxi. Denning, 1.
xxii. Papadimitriou, 143.
xxiii. Cook, 6.
xxiv. Ibid., 7.
xxv. Ibid., 10.
xxvi. Ibid., 11.
xxvii. Chaitin, “Irreducible Complexity in Pure Mathematics”, 12.
xxviii. Ibid., 9.
This paper represents our work in accordance with university regulations.