In his American Philosophical Association presidential address John Searle argues for the
shocking conclusion that “you could not discover that the brain or anything else was
intrinsically a digital computer, although you could assign a computational interpretation
to it, as you could anything else.” (Searle 1999, p. 266) This is supposed to follow from the
claim that “computation is not discovered in the physics, it is assigned to it. . . Syntax and
symbols are observer relative.” (Searle 1999, p. 266)
If one didn’t know better, one would think one was reading the philosophical
ravings of literary theorists of the deconstructive bent,1 happy to undermine objective facts
about syntax and symbols. Alas, for Searle (unlike his English department enemy) there
is much in the world besides syntax and symbols. That is, while Searle now agrees that
the textual aspects of the world are in some sense observer relative,2 much of the world is
(for Searle, here with common sense) not a text.3 Searle does not set himself the task of
defending relativism tout court. Rather, he explicitly differentiates broad questions
concerning syntax and symbols (and in particular, the brain’s alleged computational
states) from those “settled by factual investigation,” examples of the latter being
“whether the heart is a pump, or whether green leaves do photosynthesis.” (Searle, 256)

1. For the perfect coda and refutation of deconstructionism in general, and Derrida’s sins against
philosophy in particular, see Searle (1994).
2. Though in the above op. cit. he never explains much what this means. Fuller explication would
require not only explicating Searle’s important (1997), but also engaging with the many different
senses of “observer relativity” teased apart in Crispin Wright (1994, 2003). But since we find
Searle’s arguments completely unpersuasive, we do not embark on this journey here.
3. Clever deconstructionists would protest. Many philosophers hold that symbols mediate our
knowledge of the external world. Given this, extreme relativism and nihilism about the symbolic
naturally bleeds over into relativism and nihilism about that which is symbolized. However, in
good Hegelian fashion, this might be a reductio of such a symbolic or cognitive veil of Maya. As
these issues are perennial, we put them off for another day.
Moreover, Searle’s specific arguments, such as they are, turn on results he
claims follow from computability theory. Unfortunately, though, Searle gets
his computability theory badly wrong, saying things provably inconsistent with results
given in the very text (Boolos et al., 2002) he cites. Once these confusions are cleared
up, a good case can be made that Searle’s argument is in fact no different from earlier
relativistic arguments made by Hilary Putnam during his (1981) internal realism period.
Most importantly, one can show that the most plausible reaction to
Putnam’s generalized model theoretic argument also disarms Searle’s argument.
I. SEARLE’S ARGUMENT
The first part of Searle’s argument is the following claim.
1. For any object there is some description of that object such that under that
description the object is a digital computer. (Searle, 258)
This claim goes from trivial to absurd, depending upon how one interprets it. Completely
trivially, of course we can falsely describe anything in any way we want. Slightly less
trivially, I can interpret an object’s doing nothing whatsoever as giving the answer zero
and limit myself to asking it what one minus one is. But this is a bit like ordering a
sleeping cat to stay.
Searle intends his first claim, though, to help undermine the meaningfulness of the question of
whether the brain is a digital computer. However, by itself, the claim clearly does
not entail this. One could hold with Searle that everything can be correctly interpreted as
a digital computer, while also holding that this is because the universe is such a
computer. But then the question of what program human brains characteristically
compute would, pace Searle, remain a meaningful one.
What Searle intends with his first claim is clear from consulting his second.
2. For any program there is some sufficiently complex object such that there is
some description of the object under which it is implementing the program. Thus,
for example the wall behind my back is right now implementing the Wordstar
program, because there is some pattern of molecule movements which is isomorphic
with the formal structure of Wordstar. But if the wall is implementing Wordstar
then if it is a big enough wall it is implementing any program, including any
program implemented on the brain. (Searle, 258-259)
Again, confusion lurks in the way Searle has phrased this. Clearly, for any program there
are (at least in principle) Turing machines, abaci, von Neumann computers, and
formalized theories such that under “some descriptions” (the correct ones!) those objects
are implementing the program. But, again, this gets us no closer to Searle’s strong
conclusions.
His remarks about Wordstar help to clear this up though. Searle means to say that
the only bar to correctly interpreting an object as running a program is the complexity of
that object. With these understandings, we can represent Searle’s argument as follows:
(1) Consider arbitrary object A (for example, Searle’s wall). A can be correctly
interpreted as a computer running a program.
(2) If A is sufficiently complex (for example the progressive physical states of the
molecules composing Searle’s wall), then A can be correctly interpreted as running
any program.
(3) But then there is no fact of the matter about what program A is running.
(4) Since A is arbitrary, there is no fact of the matter about what program any object
is running.
(5) Therefore, unlike questions of photosynthesis and circulation of blood,
questions concerning syntax and symbols are observer relative.
We hope that it is clear that charity has been observed. If Premises 1 and 2 are correct,
then 3 through 5 are well motivated. However, Premises 1 and 2 are just about completely mistaken.
II. THE FIRST PREMISE
While Searle blithely asserts that his two premises hold “On the standard
textbook definition of computation,” (Searle, 258) one will search in vain through standard
textbooks such as Boolos et al. (2002) for anything that would seem to confirm the first
premise.
Ironically, as regards Premise 1, standard textbooks of interest to philosophers
mainly concern themselves with getting the student to understand results about
things that are in principle not computable. If, following Church’s Thesis, one means by
“computable” that which a digital computer can in principle do, one can prove, for
example:
A. The Undecidability of First-Order Logical Truth: There is no computational
procedure that can enumerate the set of logically contingent (i.e., possibly true and
possibly false) sentences of first-order logic;
B. The Non-Enumerability of Number Theory: There is no computational
procedure that can enumerate the set of truths of number theory;
C. The Non-Enumerability of Second-Order Logical Truth: There is no
computational procedure that can enumerate the set of logically true sentences of
second-order logic;
D. The Unsolvability of the Halting Problem: There is no computational procedure
for determining of an arbitrary computer program whether that program will halt
at some point for an arbitrary input.4
Of course most of the philosophical interest in these limitation results concerns whether
anything in the universe instantiates the functions in question. If something does, then
bits of the universe are not computational (contra Searle’s premise). Recent foundational
debates in the philosophy of physics concern whether the universe itself obeys provably
non-computable functions.5 A rich tradition in the philosophy of mind wonders if the
human mind is correctly interpreted as being able to process the non-computable
functions mentioned in (A) through (D).6
While some might see the exhaustion of these traditions as a welcome result of
Searle’s argument, this will not do. Searle’s first premise is still provably wrong. Given
the way the proofs work for (A) through (D) it is always possible to imagine a possible
object that in some sense obeys the function. For example, we can enumerate the infinite
set of possible one numeral entry Turing machine programs, and provably there is a fact
of the matter concerning whether each Turing machine will halt or not if fed an arbitrary
numeral. Thus, we can think of a three place function f existing in Plato’s heaven,
consisting of a set of three-tuples <a,b,c> where a is the number of a Turing machine
program, b is the number of an entry, and c is “1” if a halts for b, and “0” if a does not
halt for b. We can imagine a god-like creature with perfect ability to tell of arbitrary
4
These claims, as well our claim about representability we go on to show, are clearly presented
and easily findable in Boolos, et. al.
5
For example, see the discussion in (Odifreddi, 1996) and (Wolfram, 2002).
6
See, for example, (Penrose 2002).
5
<a,b,c> whether it is in f or not. The unsolvability of the halting problem logically entails
that this creature we imagine is provably not a computer!7
Now there may in fact be no such god-like creatures in the universe. But this is
an empirical question. The concept of a creature that can perfectly solve halting issues is
a perfectly fine concept, and as such there is a fact of the matter about whether it is
instantiated.
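To see why such a creature provably could not be a computer, it may help to rehearse the familiar diagonal argument. The following is a minimal sketch in Python (our own illustration, not a presentation drawn from Searle or from Boolos et al.; the names halts and diagonal are purely hypothetical):

```python
# Sketch of the classic diagonal argument (our own illustration, not a
# passage from Searle or Boolos et al.). Suppose, for reductio, that some
# program computed the halting function f, returning True exactly when
# the program with source text a halts on input b.

def halts(a: str, b: str) -> bool:
    """Hypothetical perfect halting decider; provably, no program computes this."""
    raise NotImplementedError("assumed only for reductio")

def diagonal(source: str) -> None:
    """Halts on `source` exactly when `halts` says `source` does not halt on itself."""
    if halts(source, source):
        while True:   # told "it halts": loop forever instead...
            pass
    else:
        return        # ...told "it loops": halt immediately.

# If halts really computed f, then feeding diagonal its own source text
# would make it halt if and only if it does not halt -- a contradiction.
# So f, though a perfectly determinate set of triples <a, b, c> in
# Plato's heaven, is not computed by any program.
```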
III. THE SECOND PREMISE AND PUTNAM’S ARGUMENT
Proving Searle’s first premise wrong is not enough. We must understand better where he
goes wrong in his argument. Why is he so sure that textbooks on computability theory
trivially entail his two premises? Again, Searle cites no results, but we think one can
charitably reconstruct his reasoning.
One of the most important advances in logic in the previous century involved
seeing logical languages themselves as non-interpreted. For example, when determining
whether a conclusion φ follows from a set of sentences Γ, we now normally attempt to
come up with a logically possible world that makes all of the sentences in Γ true while
rendering φ false. Interestingly, in doing so we are allowed to interpret the “non-logical”
vocabulary (predicates and proper names in standard logics) in any way possible. Thus,
if showing that Goldbach’s conjecture is independent of some suitably strong
axiomatization of arithmetic involves needing to interpret the numerals as an infinite set
of turnips, and number-theoretic predicates as various judgments of taste Elvis would
make about these turnips, this is fine! The independence of Goldbach’s conjecture would
have been established.

7. Again, we here presuppose Church’s Thesis in how we state this. In this context, this is merely
terminological! If you don’t agree with Church’s Thesis, replace “computer” with “recursive
machine,” which is what is at issue in any case.
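The countermodel method just sketched can be put compactly in now-standard notation (our formulation, not a quotation from any of the texts under discussion):

```latex
% Our compact statement of the countermodel method, in standard notation:
% a conclusion follows from some premises just in case no interpretation
% makes all of the premises true while making the conclusion false.
\Gamma \vDash \varphi
  \quad\Longleftrightarrow\quad
  \text{there is no interpretation } \mathcal{M} \text{ such that }
  \mathcal{M} \vDash \gamma \text{ for every } \gamma \in \Gamma
  \text{ and } \mathcal{M} \nvDash \varphi .
```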
The story of how we got from Frege and Russell’s original understanding of logic
as an interpreted language with a fixed domain to our current understanding is
extraordinarily important, and (as far as we’re aware) not one that has been completely
told, so we are wary about citing many results that were independently discovered by
many people in the logical fervor of the previous century. It’s not entirely clear how our
current textbook notions became canonical.8
However, during this history humanity discovered that the only thing about a
domain of discourse relevant to determining whether logical consequence holds is the
number of objects in the domain. For example, in first order logic, every interpretation
can be converted to one where the domain is a set of natural numbers, and predicates are
sets of n-tuples of such numbers. This can always be done without changing the truth-value of
any sentence. Any decent meta-logic textbook proves this claim.
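Put a bit more carefully (our formulation of this textbook fact, with h a hypothetical relabelling map): push an interpretation’s extensions forward along any one-to-one map from its domain into the natural numbers, and every sentence keeps its truth value.

```latex
% Our rendering of the standard push-forward (isomorphism) lemma.
% Let \mathcal{M} have domain D, let h : D \to \mathbb{N} be one-to-one,
% and define \mathcal{M}^{h} over the image h[D] by setting, for each
% n-place predicate P (and analogously for names),
%   P^{\mathcal{M}^{h}} = \{ \langle h(d_1), \ldots, h(d_n) \rangle :
%                            \langle d_1, \ldots, d_n \rangle \in P^{\mathcal{M}} \}.
% Then, for every sentence \varphi of the language,
\mathcal{M} \vDash \varphi
  \quad\Longleftrightarrow\quad
  \mathcal{M}^{h} \vDash \varphi .
```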
In the hands of great philosophers such as Quine and Putnam, these results
seemed threatening. Quine (1969) wondered if they showed that one could be a
Pythagorean, ontologically committed merely to numbers. Putnam (1981) wondered if
they supported a neo-Kantianism according to which reality is in part humanly
constituted. As we interpret him, Searle is squarely in this tradition.
8. Hilbert and Ackermann’s textbook was almost certainly responsible for disseminating much of
our current notation and many of our current concepts. However, the question of who should get
credit for many important results during the foundational bonanza of the previous century is
extraordinarily difficult. Too many geniuses were approaching interrelated and overlapping issues
simultaneously.
So what might the results be that motivate Searle’s second premise? First, note
that it is provable that there is a formal axiomatizable theory Q (Robinson’s
Arithmetic) in first-order logic such that: (1) any computer program c can be represented
as an axiomatizable set of further axioms added to Q, giving us QC; (2) any possible
input i to c and output o of c can be represented as a further premise I and a conclusion O,
such that (3) O is logically deducible from QC together with I if and only if o is the output
that c gives for i. In this manner, any computer program is representable in Q.
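Schematically (our notation, summarizing the claim just stated, with QC the theory Q plus the axioms representing c, and with I and O the sentences representing input i and output o):

```latex
% Our schematic rendering of the representability claim described above.
Q_C \cup \{\, I \,\} \vdash O
  \quad\Longleftrightarrow\quad
  \text{program } c \text{ yields output } o \text{ when given input } i .
```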
But then, if we can do this, the problem of the non-interpretability of domains comes
to the fore. To show that c does not yield o for i, it is sufficient to come up with a model where
QC and I are made true, and O made false. But, as a result of the non-interpreted nature of
logic, as long as there are enough (and sometimes not too many) objects in the domain,
any set of objects could be used to show that the program does not yield the anticipated
output. Thus, while we might (and almost certainly would) use something that closely
resembles a Turing machine or von Neumann computer to model the theory, there is no
necessity in doing so. Anything with enough objects is a good enough domain for the
theory, and relations isomorphic to those of the original machine need only be specified
over the new domain. Of course Searle himself can’t specify the mapping from his
computer’s running Wordstar to his wall’s running Wordstar. But logic assures us
it is there.
IV. THE CASE OF PUTNAM
So as far as we can see, Searle’s argument rests upon the kinds of cardinality and
isomorphism results that follow from the non-interpreted nature of logic, the same sorts
of results that so captivated Quine and Putnam at various points in their careers. But if
this is true, then the reasons that undermine Quine and Putnam in these regards also
undermine Searle.
As is so often the case, if anything decisively undercut Putnam’s generalized
model theoretic argument, it was the growing acceptance of other positions Putnam had
defended. In particular, semantic externalism undercuts the philosophical
applicability of such isomorphism results.
In his “internal realism” phase, Putnam considers isomorphic mappings from a
domain to itself. Consider, say, a mapping that maps the predicate “dog” to the set of cats
and “cat” to the set of dogs. If compensatory mappings were made (e.g. “sleeps all the
goddamn time” to the set of dogs and “won’t shut the hell up” to the set of cats), we
could end up making all the same sentences true.
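A toy finite case makes the point vivid (our own illustration over a hypothetical two-object domain; this is not Putnam’s own formalism): apply one and the same permutation of the domain to every predicate’s extension, and the quantified sentences of the language cannot tell the two interpretations apart.

```python
# Toy illustration (ours, not Putnam's formalism) of a compensating
# permutation of predicate extensions over a two-object domain.
from itertools import product

domain = {"Fido", "Felix"}

# Intended interpretation: Fido is the dog, Felix the cat.
intended = {
    "dog":    {"Fido"},
    "cat":    {"Felix"},
    "sleepy": {"Felix"},   # "sleeps all the goddamn time"
    "noisy":  {"Fido"},    # "won't shut the hell up"
}

# Swap the two objects, and apply that swap to every extension at once.
pi = {"Fido": "Felix", "Felix": "Fido"}
permuted = {pred: {pi[d] for d in ext} for pred, ext in intended.items()}

def some(p, q, model):
    """'Some p is q': some object lies in both extensions."""
    return any(d in model[p] and d in model[q] for d in domain)

def every(p, q, model):
    """'Every p is q': p's extension is included in q's."""
    return all(d in model[q] for d in model[p])

# Every sentence of these simple quantified forms gets the same truth
# value under the intended and the permuted interpretations.
for p, q in product(intended, repeat=2):
    assert some(p, q, intended) == some(p, q, permuted)
    assert every(p, q, intended) == every(p, q, permuted)
print("The two interpretations agree on all of these sentences.")
```

If the language also contains names, the same trick works so long as the names’ referents are permuted along with the extensions.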
For Putnam such mappings were supposed to show that metaphysical realism is false. In
particular, for him they entailed that there are an infinite number of equally good, different
ways to characterize the universe, and hence that the question of absolute objectivity is
undermined.
It should be clear that this is the same argument Searle puts forward!
However, for most people today, this bit of argumentation in Putnam’s Reason, Truth,
and History has zero philosophical interest. The main reason for this is that, if (big if)
such a mapping were possible, the words in the new interpreted language would mean
something different. “Dog” in L′ would mean what “cat” means in English, just
like “Hund” in German means what “dog” means in English. Putnam’s “different
versions of reality” are in fact all the same version expressed in a different language.9
Thus, the argument would only work if we could restrict ourselves to mappings that could
arguably preserve even the coarsest notion of meaning. Otherwise the argument is
trivialized. But then one cannot avail oneself of the kinds of arbitrary permutations logic
seems to allow. This restriction is exactly the sense in which a natural language is not an
uninterpreted language.
This follows directly from Putnam’s (1975) own classic “The Meaning of
‘Meaning’.”10 For Putnam, if we call stuff that merely looks like gold (but lacks the
essential properties of gold) “gold,” then we’ve changed the meaning of what we’re
talking about. However, one needn’t be committed to the metaphysical essence of gold
to hold this! When we give something a name, we are severely limited in what other
kinds of things can receive the same appellation. Arguably, a good theory of the nature of
these limits involves conceptual analysis, developmental and psycho-linguistics,
Foucauldian study of the history of usage, and relevant empirical information. Thus, the
true theories of meaning for various terms might depart significantly from the various
“semantic externalisms” put forward by philosophers. But no one would doubt that there
are limits to such permutations based on deictic interaction with the world, our history of
usage, and causal states in us and in the rest of the universe. And this refutes Putnam’s
model theoretic argument.

9. Donald Davidson (1979) and Mark Wilson (1980) first made this point, each in the context of
working through different, but connected, strands of Quine’s philosophy.
10. We conjecture that this is why Putnam has of late been drawn to the neo-Hegelian views of
John McDowell (1996). McDowell’s views, if correct, undermine the antinomy in a very deep
way that also addresses many of Putnam’s enduring philosophical concerns. The points also
follow clearly from Kripke’s (1972) important admonitions about how not to understand
necessity.
It also refutes Searle. Contra Searle, not just any arbitrary subset of objects in the
universe can be considered the set of “zeros” or the set of “ones.” As with all
denoting language, the use of “zero” and “one” must pass double blind tests to be
sufficiently objective. Likewise, numeric inputs and outputs must be detectable in ways
that pass double blind tests. Yes, this is somewhat relative to our conceptual capacities,
but no more so than “photosynthesizing” or “circulating,” which Searle holds to be
objective terms.
Then, once the inputs and outputs of a system can be detected, for a system to
instantiate a program, the system must support all of the input-output relations of that
program. For any interesting program, this will be an infinite number of causally
supported counterfactual conditionals of the sort, “if you put in a and b, then a + b comes
out.”
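As a rough illustration (ours, with hypothetical function names), one can think of program attribution as committing us to an open-ended pattern of input-output behavior, of which any actual test can only sample a finite part:

```python
# Rough sketch (ours) of what attributing an addition program to a system
# commits us to: for any inputs a and b, putting a and b in should yield
# a + b out. A genuine adder passes spot-checks drawn from anywhere in
# that open-ended pattern; a gerrymandered reinterpretation fitted only
# to previously observed cases does not.
import random

def genuine_adder(a: int, b: int) -> int:
    # A system that really supports "if you put in a and b, a + b comes out."
    return a + b

def gerrymandered_wall(a: int, b: int) -> int:
    # Stand-in for an interpretation rigged, after the fact, to match only
    # the finitely many cases already observed (here, inputs below 10).
    table = {(x, y): x + y for x in range(10) for y in range(10)}
    return table.get((a, b), 0)

def supports_addition(system, trials: int = 100) -> bool:
    """Spot-check the input-output relation on randomly sampled inputs."""
    for _ in range(trials):
        a, b = random.randint(0, 10**6), random.randint(0, 10**6)
        if system(a, b) != a + b:
            return False
    return True

print(supports_addition(genuine_adder))       # True
print(supports_addition(gerrymandered_wall))  # almost certainly False
```

The point, of course, is not that passing a finite battery of tests constitutes running the program, but that a genuine implementation supports the whole counterfactual pattern from which such tests merely sample.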
The molecular activity in Searle’s wall does not support the same counterfactual
input/output statements as the Wordstar program! If some clever scientist could program
it to do so (by setting up a huge machine around the wall), then it actually would be
running Wordstar. But it is not.
V. CONCLUDING THOUGHTS: IDEALIZATION AND VIEWER RELATIVITY
We close by pointing out that there are two interesting things in the neighborhood of
Searle’s claims. First, programs are abstract objects, as are the idealized machines
(Turing, von Neumann, etc.) that run them. A truly universal von Neumann computer
must have an unbounded memory, for example.
So when we say that an actual, non-idealized object is running a program, there is a
sense in which the actual object is not really a von Neumann machine, but a better or
worse approximation of one. One might sophistically argue from this fact to Searle’s
conclusion, but the temptation should be rejected.
An actual machine will do a better or worse job of running a given program.
However, this is no different from actually existing triangles being better or worse
approximations to the Platonic ideal. As with the case of the necessity of double blind
tests, for most of us the problem of idealization does not entail that facts about
triangularity and existing triangles are viewer dependent and non-objective. And it
shouldn’t, so long as there are facts about better or worse realizations of the idealized
objects. But, again, all of our objective concepts are like this.
Second, the term “program” in actual use almost always refers not to Turing-machine
or von Neumann shufflings of zeros and ones. Rather, it refers to
uncompiled, higher-level functional characterizations of machine behavior. Interestingly,
two instantiations of the “same” program (say, Wordstar on a Mac and Wordstar on a
PC) can compile down into completely different von Neumann programs. “Programs” in
this broader sense are wildly disjunctive over programs in the sense of computability
theory. It is much more compelling to hold, as has been extensively argued elsewhere
(Cogburn and Silcox, 2005), that forms of viewer relativity enter into the identity
conditions of programs in this broader sense. But again, this is not Searle’s conclusion,
which concerns the syntax and semantics of programs in the narrower sense.
References:
Boolos, George, Burgess, John, and Jeffrey, Richard. 2002. Computability and Logic.
Cambridge University Press.
Cogburn, Jon and Silcox, Mark. 2005. “Computing Machinery and Emergence: The
Metaphysics and Aesthetics of Video Games.” Minds and Machines 15:1, 73-89.
Davidson, Donald. 1979. “The Inscrutability of Reference.” The Southwestern Journal of
Philosophy 10, 7-19.
Kripke, Saul. 1972. Naming and Necessity. Harvard University Press.
McDowell, John. 1996. Mind and World. Harvard University Press.
Odifreddi, Piergiorgio. 1996. Kreiseliana: About and Around Georg Kreisel. A.K. Peters.
Penrose, Roger. 2002. The Emperor’s New Mind: Concerning Computers, Minds, and
the Laws of Physics. Oxford University Press.
Putnam, Hilary. 1975. “The Meaning of ‘Meaning’.” In Mind, Language, and Reality.
Cambridge University Press.
Putnam, Hilary. 1981. Reason, Truth, and History. Cambridge University Press.
Quine, Willard. 1969. Ontological Relativity and Other Essays. Columbia University
Press.
Searle, John. 1990. “Is the Brain a Digital Computer?” Proceedings and Addresses of the
American Philosophical Association 64. Citations are from the reprint in Cooney,
Brian (ed.). 1999. The Place of Mind. Wadsworth.
Searle, John. 1994. “Literary Theory and Its Discontents.” New Literary History 25:3.
Reprinted in Patai, Daphne, and Corral, Will (eds.). 2005. Theory’s Empire: An
Anthology of Dissent. Columbia University Press.
Searle, John. 1997. The Construction of Social Reality. Free Press.
Wilson, Mark. 1980. “The Observational Uniqueness of Some Theories.” Journal of
Philosophy 77, 208-233.
Wolfram, Stephen. 2002. A New Kind of Science. Wolfram Media.
Wright, Crispin. 1994. Truth and Objectivity. Harvard University Press.
Wright, Crispin. 2003. Saving the Differences: Essays on Themes from Truth and
Objectivity. Harvard University Press.