TECHNICAL COMMUNICATION QUARTERLY, 17(2), 202–219
Copyright © 2008 Taylor & Francis Group, LLC
ISSN: 1057-2252 print / 1542-7625 online
DOI: 10.1080/10572250701878868
Analogy in Scientific Argumentation
Keith Gibson
Utah State University
Analogical reasoning has long been an important tool in the production of scientific
knowledge, yet many scientists remain hesitant to fully endorse (or even admit) its
use. As the teachers of scientific and technical writers, we have an opportunity and
responsibility to teach them to use analogy without their writing becoming “overly
inductive,” as Aristotle warned. To that end, I here offer an analysis of an example of
the effective use of analogy in Rodney Brooks’s “Intelligence Without Representation.” In this article, Brooks provides a model for incorporating these tools into an argument by building four of them into an enthymeme that clearly organizes his argument. This combination of inductive and deductive reasoning helped the article
become a very influential piece of scholarship in artificial intelligence research, and
it can help our students learn to use analogy in their own writing.
Every one who effects persuasion through proof does in fact use either enthymemes
or examples: there is no other way. (Aristotle, 1984b, p. 26)
In 1817, Charles Babbage addressed the Royal Society of London. He argued in
favor of using analogy as a tool in mathematical discovery, but his opening remarks indicate the suspicion with which many of his colleagues viewed the idea:
The employment of such an instrument [analogy] may, perhaps, create surprise in
those who have been accustomed to view this science [mathematics] as one which is
founded on the most perfect demonstration, and it may be imagined that the vagueness
and errors which analogy, when unskillfully employed, has sometimes introduced into
other sciences, would be transferred to this. (Babbage, 1817, p. 197, original emphasis)
He then explained how a number of apparently “illusory” equations in calculus
could be greatly simplified by comparing them with similar equations in algebra;
even in his endorsement, though, Babbage was cautious: he claimed that analogy
should be used “only as a guide to point out the road to discovery” (p. 197).
In an 1891 address before the American Association for the Advancement of
Science, Joseph Jastrow was more than just cautious—he argued that humans are
civilized to the degree that we can see through arguments by analogy: “It will appear that the progress from the attitude of the savage to that of the civilized man
with respect to the understanding of the natural and physical world, may, to a considerable extent, be regarded as a shifting of the position occupied by the argument
by analogy” (Jastrow, 1891, p. 118). Jastrow maintained that argument by analogy
is the reason “savages” believe in witchcraft and the general public believes in astrology. And it is the responsibility of science, he claimed, to avoid “such out-of-the-way resemblances,” instead relying on “slow and painful steps” to ensure theories are correct (p. 119).[1]

[1] Oddly enough, Jastrow’s own argument is heavily dependent on analogy. At one point he wrote the following: “Just as the drum, once the terrifying instrument of the warrior, or the rattle, once the potent implement of the medicine-man, has become the toy of children, or as the bow and arrow are maintained for sport only, so the outgrown forms of thought, the analogies, that were serious to our ancestors, now find application in riddles and puns” (Jastrow, 1891, p. 119).
More recently, Mary Hesse (1981) reported that quantum physics is reinforcing
the belief that analogies and models are not a vital part of scientific theorizing, due
in large part to the difficulty scholars have had finding a descriptive model of the
various oddities of quantum theory (p. 348). The most popular model for quantum
mechanics has been Schrödinger’s cat, in which a cat seems suspended between
life and death due to the strangeness of quantum superposition, and one of its main
lessons is that intuitive models only make quantum theory more confusing.
Despite these apprehensions (and many others like them), the history of science
indicates that analogies are at the heart of much scientific experimental work and,
as such, are difficult to avoid. W. S. Jevons (1958) noted that “[a]ll science . . .
arises from the discovery of identity,” and this discovery “is most frequently accomplished by following up hints received from analogy” (p. 629). The paradigmatic example of discovery following analogy is the progression of our understanding of atomic structure. In 1897, J. J. Thomson discovered the electron and
theorized that atoms were pieces of positively charged material with negatively
charged electrons strewn throughout. This reminded Thomson, an Englishman, of
the raisins in a bowl of plum pudding, so, to introduce the concept to his colleagues, he dubbed it the “plum pudding” model. One of the main problems with
this model is that it does not explain how the positive material can remain in equilibrium with negative electrons. As Niels Bohr pondered this issue, he was reminded of the equilibrium achieved by the solar system: Despite gravity pulling
the sun and planets together, the motion of the planets kept them from crashing into
the sun. Thus, Bohr suggested what became known as the Planetary Model, and,
unlike Thomson’s plum pudding, this name was not simply descriptive: It was rhetorical as well. Focusing on the key similarities between atoms and solar systems (relatively heavy objects surrounded by relatively light ones; an interplay of forces pulling the objects together and pushing them apart), the name of the model implicitly argued that the motions of the two systems were quite likely similar. It turned out that Bohr was wrong, but his model, due largely to the analogy on which it relied, was accepted by the scientific community for many years.
Philosophers of science have increasingly taken note of the importance of analogy
in scientific thought and sought to explain its role. Rudolf Carnap (1963), among others, cited analogy as a contributing factor in the confirmation of a scientific theory:
“the probability that an object b has a certain property, is increased by the information
that one or more other objects, which are similar to b in other respects, have this property” (p. 225). N. R. Campbell (1920) went even further, arguing that
analogies are not “aids” to the establishment of theories; they are an utterly essential part
of theories, without which theories would be completely valueless and unworthy of the
name. It is often suggested that the analogy leads to the formulation of the theory, but
once the theory is formulated the analogy has served its purpose and may be removed or
forgotten. Such a suggestion is absolutely false and perniciously misleading. (p. 129)
Bohr’s planetary model of the atom suggests that Campbell is correct: The comparison of the atomic structure to the solar system not only helped Bohr come up
with the idea but also argued for the theory’s fitness. The presence of a physical
similarity was vitally important to the theory’s acceptance, and Bohr would have
found it much more difficult (if not impossible) to convince his colleagues without
that analogy.
The importance of analogy in the history of science and scientific communication makes it important for scientists and engineers to learn to use it in their work
and in their writing. As the teachers of young scientific and technical communicators, we have an opportunity to teach them appropriate uses of analogy in their
writing, and, if done well, we may be able to counteract the uneasiness with which
so many scientists view it. To that end, in this article, I will offer a brief history of
analogy (and inductive reasoning generally) in scientific discourse and a more extended analysis of the use of this reasoning in an influential piece of scientific communication: Rodney Brooks’s (1999) “Intelligence Without Representation.” With
this discussion, I hope to show that Brooks’s rhetorical success is due at least partly
to his sophisticated use of analogy—specifically, Brooks used four analogies to
build a single enthymeme on which his argument rested—and that our students
will benefit from including these rhetorical devices in their repertoires.
CONSIDERATIONS OF INDUCTIVE LOGIC
The commonly held notions of deduction and induction—that the former is reasoning from general principles to specific applications and the latter is reasoning
from specific instances to general principles—stem from Aristotle’s Topics and
Prior and Posterior Analytics. But a close examination of Aristotle’s writings indicates that these definitions are incomplete. In the Topics, he writes that there are
only two “species . . . of dialectical argument. . . . induction and deduction,” and
that “induction is a passage from particulars to universals” (Aristotle, 1984a, pp.
174–175). In the Posterior Analytics, on the other hand, he writes that every deduction takes place via syllogism (Aristotle, 1984a, p. 133)—that people arrive at specific insights from general principles. It does not, however, require a great deal of
imagination to think of lines of reasoning, or arguments, that fit neither of these
models. The most obvious example is reasoning from a specific case to another
specific case. Aristotle himself provides examples of such reasoning in the Rhetoric; he describes an argument against selecting public officials by lot because of its
similarity to the ridiculous notion of selecting athletes by lot. This argument is not
aimed at establishing a general principle that casting lots is never a good idea;
rather it is an argument from one specific to another. Aristotle identifies this type of
reasoning as induction, illustrating the inadequacy of his earlier definitions.
How, then, shall induction and deduction be defined? Modern philosophers of
science have tackled this problem at great length, and several have come to a conclusion that remains true to Aristotle’s notion that all reasoning is either deductive
or inductive. Arthur Pap (1962) argues that the difference between them lies not in
the direction of the causal arrow between specifics and universals but rather in the
amount of certainty one may have in the eventual conclusion:
In contemporary logic and philosophy of science “deductive inference” is used in the sense of necessary (demonstrative) inference: the conclusion . . . is claimed to follow with logical necessity from the premises. . . . An inductive inference . . . is an inference whose conclusion is not claimed to follow necessarily but only with some degree of probability. (p. 141)
Peter Caws (1965) suggests a similar distinction and adds that induction, in making a logical leap, “runs risks that deduction does not. We shall find these risks at
the root of scientific theory” (p. 194).[2] Indeed, induction, going “beyond the given facts” in Caws’s words, is necessary for scientific progress. Another way to imagine the difference between deduction and induction is in Kuhnian terms, relating directly to scientific advancement: Deduction is used in periods of normal science, when the paradigm is established and scientists are solving problems with established theories; induction, on the other hand, is used either before or between paradigms, when new general principles are being devised.

[2] Caws delineated these risks this way: “Inductive inferences are problematic in three ways. The first problem is to account for their happening at all, and this may be called the psychological problem of induction. What induces us to go beyond the facts, and to go beyond them in these particular ways? The second is to describe the logical relationship between protocol sentences and the generalizations and hypotheses to which they lead; this may be called the logical problem of induction. . . . The third is to justify our confidence in inductive inferences; this goes beyond the purely logical scope of the second problem, and raises the most profound philosophical questions, the answers to which are still in dispute. It may be called the metaphysical problem of induction” (Caws, 1965, pp. 195–196).
Despite this important role in the production of scientific knowledge, inductive
reasoning is not even dignified with clear definitions for its constituent parts. Analogy, metaphor, model, example, metonymy, synecdoche: All have been lumped
under the general heading Induction, and the problem is further complicated by the
rather flexible definitions of the terms. In his discussion of rhetorical figures, Richard Lanham (1991) points out that “[e]veryone from Quintilian onward . . . has
complained about the imprecision and proliferation, regretting the absence of a
clear, brief, and definitive set of terms” (p. 79). Kenneth Burke (1969) noted, in
“The Four Master Tropes,” that these various modes of induction “shade into one
another. Give a man but one of them, tell him to exploit its possibilities, and if he is
thorough in doing so, he will come upon the other three” (p. 503). There is not even
any consensus on the relationship the various concepts have with the others. Some
see analogy as the most general: Leatherdale (1974) wrote that “analogy is a more
fundamental and simple concept than metaphor or model” (p. 1), and Elizabeth
Sewell (1964) claimed that “[i]n metaphor the mind sees and expresses an analogy” (p. 42). Gentner and Jeziorski (1993) saw things the other way around—“We
view metaphor as a rather broad category, encompassing analogy. . . . analogy is a
special case of metaphor” (p. 452)—a stance supported by Lakoff and Johnson’s
(1980) Metaphors We Live By, and more recently, Baake’s (2003) excellent Metaphor and Knowledge.
It is vital, then, for me to define my usage as clearly as possible, and for that, I
turn to John Stuart Mill and Edward P. J. Corbett. In A System of Logic, Mill (1891)
described analogical reasoning as the general case in which “[t]wo things resemble
each other in one or more respects; a certain proposition is true of one, therefore it
is true of the other” (p. 365). Corbett (1971), likewise, positions analogy as the
most general form of induction; he writes that “[s]imilarity is the basic principle
behind all inductive argument and all analogy” (p. 116), and he labels metaphor
and simile “analogical tropes” (p. 480). Following their lead, I use analogy as the
name for comparing specific characteristics of otherwise distinct conceptualizations in service of a further comparison. “If Reggie Jackson is a Hall of Famer, then
so is Rafael Palmeiro” points out that, because Jackson and Palmeiro have similar
career home run totals, they ought to be similar in their postcareer recognition as
well. Example and metaphor are, then, particular instances of analogy;[3] they use slightly different strategies to achieve their inductive goals, but insofar as they are building upon similar traits to argue for other similarities, they remain analogical tropes. An example, for instance, tends to use concepts that are more similar than a typical analogy, but the method of reasoning is the same. Another of Aristotle’s (1984b) examples of an example in the Rhetoric is as follows: “We must prepare for war against the king of Persia and not let him subdue Egypt. For Darius of old did not cross the Aegean until he had seized Egypt; but once he had seized it, he did cross. . . . If therefore the present king seizes Egypt, he also will cross, and therefore we must not let him” (p. 133). The reasoning process is clearly analogical—if the initial circumstances become similar (aggressor kings conquering Egypt), the following circumstances will be similar as well (said kings attacking Athens). This example could be turned into a metaphor—“the present king is a new Darius”—and it would include the same rhetorical conclusion.

[3] It is worth noting that the use of example here is not the everyday use, as in a member of a group that represents the whole, as in “my daughter is an example of a five-year-old.” Aristotelian examples are inductive and rhetorical in nature, in a way that these more common examples are not. Thus, when Penrose and Katz (2004) noted that “analogy is not example,” they were referring to the more common usage of the term (p. 196, emphasis in original).
If analogy, metaphor, and example are so similar in the inductive reasoning they
employ, an argument could be made that any of them could serve as the general
case, with the other two as instances of the first. I prefer analogy as the general
form for two reasons. The first is rather pragmatic: analogy, for our students,
sounds a bit more technical than the others, a bit less like something that would be
used in a poem (Kenneth Burke, in fact, described analogies as the scientific equivalent of the poetic metaphor). This is, of course, for good reason: The word comes
from the ancient Greek analogia, used by Pythagoras to indicate mathematical
proportion. The Pythagoreans found analogies to be extremely useful, for from
them one could determine “the unknown through the known relation between it
and something known without consulting experience” (Höffding, 1905, p. 203): If
I know that 3 and 5 have the same relationship as 12 and an unknown number, it
takes a fairly simple calculation to determine that the value of the unknown is 20.
In these mathematical analogies, there are no extraneous properties of the numbers
to confuse things; the relative values are all they have.
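Written in modern notation (the Pythagoreans, of course, did not use this symbolism), the proportion and its solution are simply:

$$\frac{3}{5} = \frac{12}{x} \quad\Longrightarrow\quad x = \frac{5 \times 12}{3} = 20$$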
The second advantage of analogy is more theoretical but equally valuable: It focuses the reader’s attention on the second half of the trope. In the mathematical expression described above, the unknown number clearly stands out, and the purpose
of the analogy is obvious—to make the unknown known. Likewise, the linguistic
analogy between Reggie Jackson and Rafael Palmeiro, with its specific mention of
the second similarity being advocated, makes plain its rhetorical nature. Some examples and metaphors are rhetorically transparent as well, but many are not.
“Rafael Palmeiro is the Reggie Jackson of the 1990s” could mean many things; it is
subtle enough that some readers may miss its rhetorical point altogether. Thus, in
my usage here (and with my students), analogy is not only a specific trope but also,
in the tradition of Mill and Corbett, a type of inductive reasoning built around comparison of similarities, including metaphor and example.
But, as usually happens when we transfer a mathematical concept to a linguistic
realm, analogies become much more complicated. The audience could disagree
208
GIBSON
that choosing athletes is anything like choosing politicians because, for instance,
political ability is so difficult to determine in advance; thus, the methods for selecting them need not be the same. Aristotle (1984b) recognized this inexactness, and
he recommended that analogies, when possible, be used as “subsequent supplementary evidence”; otherwise, the argument will have “an inductive air, which
only rarely suits the conditions for speech-making” (p. 134).
Modern rhetoricians have also treated analogy and example extensively. One of
the most memorable concepts from Kenneth Burke’s (1984) Permanence and
Change, for instance, is that of perspective by incongruity: “a constant juxtaposition of incongruous words, attaching to some name a qualifying epithet which had
heretofore gone with a different order of names” (p. 90). Burke describes historians like Nietzsche and Spengler purposely juxtaposing concepts that are quite different to achieve a particular effect, but he also claims that it is this odd perspective
that gives all inductive tools much of their explanatory power; when a writer juxtaposes two unlike concepts, the reader is forced to try to find some connection between them and in so doing is led to a new understanding of the issue at hand.
And while it is true that any analogy places two unlike concepts in concert, most
of them emphasize the similarities of the concepts rather than the differences. In
The Meaning of Meaning (a book Burke knew well), C. K. Ogden and I. A. Richards (1946) explain that “the use of metaphor involves the same kind of contexts as
abstract thought, the important point being that the members shall only possess the
relevant feature in common, and that irrelevant or accidental features shall cancel
one another” (p. 214). This claim may be optimistic, especially when considering
the ambiguities inherent in any linguistic exchange, but, even if it were true that
analogies could focus on the “relevant features” initially, Max Black has suggested
those features are bound to change. His interaction theory of metaphor suggests
that any comparison necessarily affects both items: “Because interactive metaphors bring together a system of implications, other features previously associated
with only one subject in the metaphor are brought to bear on the other” (Stepan,
1986, p. 270). Thus, analogies can never work exactly as planned; unintended consequences are always sneaking in.
In recent years, rhetoric of science scholars have been studying some of these
unintended consequences in scientific discourse. Heather Brodie Graves and
Roger Graves (1998) argue that the analogies scientists and engineers employ can
substantially affect how their readers perceive reality. Citing the use of DOA and
infant-mortality in an engineer’s handbook, Graves and Graves (1998) claim that
though “the pairing with human experience is perhaps intended to inject some
black humor or wry irony into the discussion of defective electronic devices . . . the
pairing also mechanizes (and, in subtle ways, devalues) human life” (p. 391).
These extra interpretations are the result of what Charles Bazerman (1988) calls
the “web of approximate meanings” that accompany the use of metaphor or analogy (p. 37). Because rhetors use inductive devices when their readers are not fully
familiar with a concept, they must combine the analogies with “other underconstrained terms and contextual clues” to most effectively convey their intent
(p. 37). But this “web” is not an exact definition; it is an approximate description,
and there is no way to confidently predict that one’s readers will obtain the meaning desired by the writer. Joseph Little has worked to explain some of this unpredictability, arguing, in this issue, that rhetors are not entirely free to use analogies
as they choose (Little, 2008). Instead, they are required to adhere to the “correspondences” the audience associates with a particular analogy.
This complexity in the use and reception of analogy makes it especially important for young scientific communicators to learn to use these figures appropriately
and effectively, and an extended example may help us teach these uses. For the example, I turn to a branch of science that has been tied closely to analogy throughout
its history—artificial intelligence (AI). Long before any attempts at construction
of machine intelligence began, arguments from analogy were debated by philosophers as potential ways to gain knowledge of other minds. One commonly discussed comparison was that of determining what was going on inside a house by
what it looked like outside: If there are several cars outside and loud music can be
heard, anyone can assume a party is going on inside. Likewise, if a woman is
slouched over and tears are coming from her eyes, it is very likely that she is feeling grief. In 1951, J. F. Thomson argued that this simple analogical method was too
behavioristic, relying too heavily on one set of data. People occasionally, for instance, cry for joy, and it is often difficult to tell the difference by merely observing
the outward indicators. Thomson claimed that a second type of analogy is necessary: not just comparing their outward actions with our own but also comparing
their situation with similar situations we have experienced. Thus, if that woman is
slouched over and crying and is holding the obituary page of the newspaper, it is
fairly safe to assume she is feeling grief, because most people, in similar situations
and acting in similar ways, have felt grief (see Meiland, 1966, and Castañeda,
1962, for other early discussions on this topic).
The widely debated Turing Test is another argument by analogy. Briefly describing a parlor game in which players try to determine the gender of other players
based solely on their answers to questions, British mathematician Alan Turing
claims that a similar game with a machine contestant could indicate whether the
machine possessed humanlike intelligence (Turing, 1990, pp. 40–43). Turing’s argument worked—the majority of the AI community now accepts the Turing Test
(or something like it) as the yardstick for machine intelligence—and many AI researchers have copied his argument strategy as well.
Clearly, then, analogy and metaphor are of vital importance to scientific rhetoric, and our students will likely find themselves dealing with others’ analogies or employing their own. We will serve them better if we provide them more sophisticated methods for dealing with these inductive arguments. One such strategy is to combine induction and deduction to capitalize on the strengths of each, a tactic used by Rodney Brooks (1999) in “Intelligence Without Representation.”
Thus, in the remainder of this article, I will offer a close analysis of Brooks’s use of
analogy there: I will examine his argument according to Aristotle’s strictures on
the use of examples, and I will offer at least part of the reason his writings have
been so well received in the academic community and, later, in the AI community.
RODNEY BROOKS AND THE AI COMMUNITY
Rodney Brooks is currently the head of the AI lab at MIT, still one of the top AI research institutions in the country. And in some ways, Brooks seemed destined to
become an AI insider. Born and raised in Australia, Brooks came to America to
pursue a PhD in computer science at Stanford. His doctoral dissertation was a theoretical formulation of a three-dimensional vision system, but, after moving to Carnegie Mellon to take a research position, he tried to implement his vision system in a
working machine and found it did not work. In fact, it did not come close. His program was unable to handle the enormous complexities that accompany ordinary
vision; it could not represent, for instance, objects that quickly changed their
shape, speed, and/or size. Without this ability, his program had little real-world application, and he soon abandoned the ideas he had spent his entire academic career
researching.
Disillusioned with the idea of trying to program a machine with humanlike intelligence (known as the “top-down” approach), Brooks began investigating the
possibility of working at the problem from the other direction: build a machine and
allow it to learn or evolve that intelligence (the “bottom-up” approach). Brooks has
been building robots ever since, throughout a career that has led him from Carnegie
Mellon to MIT, back to Stanford, and then back to MIT (allowing him just enough
time to found two robot-building companies along the way). His current home as
the director of the 230-person MIT AI Laboratory affords him the prestige and
funding to tackle his latest project—building a humanoid robot named Cog.
Brooks’s current success belies his initial struggles, when he wrote papers and built robots wherever he could find someone to pay him. The intellectual climate in
the AI community was not particularly friendly toward robotics essays, and he had
a difficult time finding an audience for his work. Most of his papers were not
welcome in the peer-reviewed journals—“Intelligence Without Representation,”
for instance, was snubbed for years.[4] Despite this lack of official recognition, Brooks’s work began to attract some adherents; he writes that “underground versions [of ‘Intelligence Without Representation’] were widely referenced, and made it on required reading lists for Artificial Intelligence graduate programs” (Brooks, 1999, p. 79).

[4] As Brooks wrote in an introduction to the article in Cambrian Intelligence: “It was turned down by many conferences and many journals. The reviews were withering: ‘There is no scientific content here; little in the way of reasoned argument, as opposed to petulant assertions and non-sequiturs, and ample evidence of ignorance of the literature on these questions’” (p. 79).
The grassroots support for Brooks and his robots eventually became too strong to ignore; the community accepted him and his work, and he has become the catalyst for an entire generation of AI researchers. Robotics is still far from the dominant
paradigm in AI research—most of the institutional support and academic energy
still go to more conventional efforts—but Brooks’s work (along with the work of
others who came before him, such as Hans Moravec and Douglas Hofstadter) has
made it much easier for scholars with similar interests to find avenues for their
research.
THE FOUR ANALOGIES OF “INTELLIGENCE WITHOUT REPRESENTATION”
Brooks’s ideas caught the attention of the community by combining the deductive
and inductive strategies mentioned in Aristotle’s Rhetoric. Aristotle instructed
speakers to argue deductively (via the enthymeme) when possible and simply use
examples as “subsequent supplementary evidence” (1984b, p. 135). If this was impossible, arguments could be based on examples, but it required many examples to
replace a single enthymeme. Brooks combined both strategies to impressive effect:
He employed four analogies, each one contributing an aspect of the overall
enthymeme that presented his argument.
In analyzing Brooks’s analogies, I will consider a number of factors. Aristotle
(1984b) indicated that there are two types of examples: “one consisting in the
mention of actual past facts, the other in the invention of facts by the speaker” (p.
133). And while he recognized that inventing facts is easier than actually finding
historical parallels, he claimed that “it is more valuable for the . . . speaker to supply them by quoting what has actually happened, since in most respects the future will be like the past has been” (Aristotle, 1984b, p. 134). Chaim Perelman
(1982) argued similarly, though he put more force on the value of true examples:
“It is important that the chosen example be incontestable, since it is the reality of
that which is called forth that forms the basis of the conclusion” (p. 107). Corbett
(1971) indicates several traits upon which examples can be judged, among them
the “impressiveness and pertinence” of the example; the “ethical appeal” of the
speaker; “the force of the ‘emotional appeals’; [and] the emotional climate of the
times” (p. 83). I will treat Brooks’s four analogies individually, according to the
above considerations; I will then examine how the analogies work to create the
overarching enthymeme that makes “Intelligence Without Representation” persuasive.
Analogy 1: Human Evolution
“We already have an existence proof of the possibility of intelligent entities: human beings” (Brooks, 1999, p. 81). With that, Brooks introduces his first analogy:
human evolution. The quest for AI researchers is to create some sort of intelligent
entity, and Brooks decides that it would be “instructive” to review the history of the
creation of the only existing intelligences (that we know of). And for Brooks, the
important parts of the story of human evolution are a pair of numbers: 3.5 billion
and 450 million. These numbers, Brooks explains, are the numbers of years ago
single-celled animals and insects first appeared, respectively. Put another way (the way Brooks prefers), nature required over 3 billion years of evolution to get from single-cell organisms to insects but only 450 million years to get from insects to human intelligence (Brooks, 1999, p. 81). The conclusion to be drawn from this is
clear: The general aspects of intelligence (vision, motion, etc.) are much more difficult engineering problems than specialized intelligence (proving math theorems,
playing chess, etc). The successes that are claimed when a computer is programmed to integrate differential equations, then, are not substantial victories in
the grand scheme of seeking intelligent behavior.
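The arithmetic behind Brooks’s reframing is worth stating explicitly:

$$\underbrace{3.5\times 10^{9} - 4.5\times 10^{8}}_{\text{single cells to insects}} \approx 3.05\times 10^{9}\ \text{years}, \qquad \underbrace{4.5\times 10^{8}}_{\text{insects to human intelligence}}\ \text{years}$$

On this reading, evolution spent roughly seven times as long producing insect-level competence as it spent getting from insects to human intelligence.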
What are we to make of this example? Brooks’s audience, as he indicated, was
composed primarily of graduate students studying AI and those reading AI journals—in other words, experts in the field. For the vast majority of these readers, the
truth of human evolution is beyond question, so this example has the added benefit
of a historical parallel. It is also a pertinent example; as Brooks mentioned, human
evolution is the only process we know of that has led to intelligence. Brooks’s
ethos is aided by the use of this example as well; though at the time he wrote the article, he was far from a revered figure in the AI community, invoking a connection
with evolution would have brought to the minds of his readers, even if only subconsciously, the overwhelming credibility of Charles Darwin. In addition, the emotional climate of the times likely helped his cause. Many AI researchers were becoming frustrated with the repeated failures of traditional AI to accomplish what
seemed like simple tasks. And though Brooks did not offer a specific solution to
these problems with this example, he did suggest a concrete reason why the small
stuff was giving them such fits: It was not the small stuff.
Analogy 2: Artificial Flight
In the next section of the essay, Brooks relates what he calls “a parable.” In this parable, scientists from the 1890s who are working on artificial flight (AF) are transported to the 1980s where they spend a few hours aboard a Boeing 747, wandering
around the passenger cabin. They are then transported back to the 1890s, and with
renewed determination (because they now know for sure that AF is possible), they
set about their task: “They make great progress in designing pitched seats, double
pane windows, and know if they can only figure out those weird ‘plastics’ they will
have the grail within their grasp” (Brooks, 1999, p. 82). Later in the essay, Brooks
continues the parable, describing the specialization the AF researchers believe is
required: They split up into different teams to work on different parts of the flying machine with little communication between groups. They spend significant
amounts of time on the outer details, like side mirrors and foot pedals, with almost
no effort in the area of aerodynamics because “[s]ome of them had intensely questioned their fellow passengers on this subject and not one of the modern flyers had
known a thing about it. Clearly the AF researchers had previously been wasting
their time in its pursuit” (Brooks, 1999, p. 85).
Again, with no specific explanation, Brooks’s parable is clear. Contemporary
AI researchers, specifically those of the top-down variety, are spending their time
on the bells and whistles that accompany intelligence. And just as pitched seats and
double-pane windows do not a flying machine make, the ability to play a good
game of chess does not necessarily indicate intelligence. The fellow passengers on
the flight are our fellow human beings. Most airline passengers do not understand
very well how planes fly, and most humans do not understand very well how their
minds work, but using these facts as arguments that aerodynamics and neurology
are then unimportant subjects is specious indeed. There is even a jab at connectionists, fellow bottom-uppers though they are, in the parable when he states
that a few of them “caught a glimpse of an engine with its cover off and they are
preoccupied with inspirations from that experience” (Brooks, 1999, p. 82).
This analogy clearly does not have the advantage of being true, as the last one
did, but there were certainly some very persuasive aspects of it nonetheless.
Corbett claimed that examples need to be impressive and pertinent, and this parable was both. The story is told in such vivid detail—it described the features of the
plane these researchers were reproducing, such as “a rather nice setup of louvers in
the windows so that the driver can get fresh air without getting the full blast of the
wind in his face” (Brooks, 1999, p. 85)—that it is quite memorable and impressive.
And the freedom of fiction allows Brooks to make it very pertinent. He digs at every feature of current AI research that bothers him: the top-downers who think specialized intelligence is the key; the various research teams who work on various
aspects of intelligence without ever communicating among themselves; the connectionists who insist that if they could just get the wiring down, they would have
it; and the general public (himself included) who are shockingly little help when it
comes to explaining their own mental processes.
Brooks also employs effective emotional appeals with this fable. It is a very enjoyable read, and the audience cannot help but chuckle at several points along the
way. He describes the AF researchers building the cockpit “with a seat above a
glass floor so that [the pilot] can see the ground below so he will know where to
land” (Brooks, 1999, p. 85). His description sounds as if it were a script for Abbott
and Costello Build a Plane rather than a thinly veiled description of a serious academic pursuit. But those involved in AI research are also those most likely to see
the parallels: Even if top-down researchers sincerely believe that theirs is the right
way to achieve intelligence, our lack of knowledge about the brain is almost appalling, and it remains entirely possible that this analogy is accurate.
Analogy 3: CHAIR
The third example requires a bit of background. When Brooks wrote his article,
most of the AI successes were top-down programs performing a specified task in a
contained environment. The programmers ensured these contained environments
through representation—any item with which the program would need to interact
was described (represented) in the language of the program. In particular, the features of the item relevant to the program were specified: A ball, for instance, could
be thrown or caught, so the program would have two commands spelled out by the
programmer, one for each function.
Brooks’s third analogy is a sample representation of a chair; in computer language familiar to Brooks’s audience (but clear enough to be understood by a nonspecialist reader as well), he gives the following two-part representation: “(CAN
(SIT-ON PERSON CHAIR)) ; (CAN (STAND-ON PERSON CHAIR))” (Brooks,
1999, p. 83). The machine given this input would then, whenever encountering a
chair, anticipate these two features. But, as Brooks immediately points out, “there
is much more to the concept of a chair. Chairs have some flat (maybe) sitting place,
with perhaps a back support. They have a range of possible sizes, requirements on
strength, and a range of possibilities in shape. . . . In particular, the concept of what
is a chair is hard to characterize simply” (p. 83, added emphasis), yet simple characterizations are all most computer languages offer.
Again, the lesson of the example is straightforward, but to drive it home, Brooks provides another illustration: Imagine a photo of a hungry person sitting on
a chair, spotting a banana hanging just out of reach. Anyone seeing this photograph
would immediately know how to remedy the situation, but for most intelligent programs, the machines must be provided with descriptions, like the one above, for
PERSON, CHAIR, and BANANA. The devil, however, is in these details; the ability to look at a photo, process the images, and abstract out the relevant information
“is the essence of intelligence and the hard part of the problems being solved”
(p. 84).
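A minimal sketch may make this brittleness concrete. The dictionary and function below are my own illustration in Python, not Brooks’s code; they simply restate his CHAIR predicates as a lookup table to show that every “intelligent” response must be anticipated by the programmer:

    # Hand-coded knowledge in the style of (CAN (SIT-ON PERSON CHAIR)):
    # the machine "knows" only the affordances spelled out in advance.
    AFFORDANCES = {
        "CHAIR": {"SIT-ON", "STAND-ON"},
        "BALL": {"THROW", "CATCH"},
    }

    def can(action: str, obj: str) -> bool:
        """True only if the programmer anticipated this action/object pair."""
        return action in AFFORDANCES.get(obj, set())

    print(can("SIT-ON", "CHAIR"))     # True: anticipated by the programmer
    print(can("STAND-ON", "CHAIR"))   # True: anticipated by the programmer
    print(can("STAND-ON-TO-REACH-BANANA", "CHAIR"))  # False: never encoded

The point of the sketch is Brooks’s point: whatever abstraction is needed to decide that a chair affords reaching a banana happens in the programmer’s head, not in the machine.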
Like the second analogy, this example is not actually true: The reader is not told
of any program that uses the specific representation given for CHAIR. But, unlike
the parable of artificial flight, this example is just like actual representations, and it
very well could be true. Thus, Brooks, while taking advantage of the ease of a
created example, loses very little of the power of a true example in the process. At
the same time, Brooks’s example, by not being an actual representation, does not
meet Perelman’s standard of “incontestable”; a two-sentence representation of a
chair is very simple, and the argument could be made that Brooks is setting up a
straw man with an unrealistic illustration.
Fortunately for Brooks, the usefulness of his example does not stand or fall with
its verity; in fact, a more complex representation would almost certainly demonstrate his
point even more effectively. Brooks is showing that the actual work—the “essence
of intelligence”—is being done by the programmers, and a lengthier representation process would simply highlight this fact more forcefully.
Analogy 4: Creature
In Brooks’s fourth example, he finally gets around to doing what all researchers
must eventually do—provide some inartistic proof. As a final piece of evidence,
Brooks discusses his latest (at the time) robot, called Creature (a machine he calls
his “favourite example”), which embodies all the principles in the article. There
are two important aspects of Creature, on which Brooks spends nearly all his time.
First is Creature’s residence: the real world. Throughout the article, Brooks expresses his frustrations with intelligent programs that cannot possibly function
outside their highly specialized environments, and it is important that Brooks’s final example not be subject to those constraints, even if that means it cannot play
chess very well. Second is the “incremental layer of intelligence which operates in
parallel to the first system” (Brooks, 1999, p. 88). There will be no central command unit that processes everything and that, if compromised, takes down the
whole system. Brooks then simply describes his creation, lauding its strengths and
mentioning its weaknesses. It is “a mobile robot, which avoids hitting things. It
senses objects in its immediate vicinity and moves away from them, halting if it
senses something in its path” (Brooks, 1999, p. 88).
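The control style Brooks describes can be sketched briefly. The following Python fragment is my illustration, not Brooks’s implementation (his robots were programmed quite differently); it shows only the architectural idea of independent layers running without a central model, with sensor names and thresholds invented for the example:

    import random

    def avoid_layer(sonar_cm):
        """Base competence: halt for obstacles dead ahead, steer away from near ones."""
        if sonar_cm["front"] < 20:
            return ("halt", 0)
        if sonar_cm["left"] < 40:
            return ("turn", 15)    # degrees, away from the obstacle
        if sonar_cm["right"] < 40:
            return ("turn", -15)
        return None                # nothing to avoid; let other layers act

    def wander_layer(sonar_cm):
        """Incremental layer: drift randomly when nothing suppresses it."""
        return ("turn", random.uniform(-10, 10))

    def step(sonar_cm):
        # No central command unit: the avoidance layer's output, when it
        # produces one, simply suppresses the wander layer's.
        return avoid_layer(sonar_cm) or wander_layer(sonar_cm)

    print(step({"front": 15, "left": 100, "right": 100}))  # ('halt', 0)

If the wander layer were removed entirely, the avoidance layer would keep functioning, which is the property Brooks contrasts with systems whose central processor, if compromised, takes the whole machine down.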
Here, Brooks returns to the advantages of an incontestable example: Creature is
a real machine that performs as Brooks describes, living proof that Brooks’s
ideas lead to something interesting. Creature also is impressive; the AI community,
along with the general public, has been anxiously awaiting robots as, perhaps, the
ultimate form of intelligent machinery. Creature was far from the finished product,
but it was certainly a step in the right direction. This anticipation contributes to the
emotional appeals attached to this example as well. Creature is first and foremost a success for Brooks and his coworkers, but, to some degree, the research community as
a whole can take pride in this achievement. Even those ideologically opposed to
learning machines have got to be intrigued with the possibilities.
THE ENTHYMEME
If these four examples were just illustrative analogies, Brooks’s essay would have
been interesting and perhaps slightly informative. After all, the four analogies appeal to technical (CHAIR) and nontechnical (artificial flight) readers, they contain
artistic (artificial flight) and inartistic (Creature) proofs, and they employ paradigmatic (human evolution) and experimental (Creature) science. The order in which
Brooks presents them is notable as well: The first is built on an almost universally
accepted scientific principle; the second on a metaphor for AI common to researchers in the field; the third on a strictly bottom-up interpretation of representation languages; and the fourth on his own work. The progression from widely accepted to nearly unheard of was almost certainly intentional, and, by starting the
reader on familiar ground, Brooks probably increased the likelihood that his last
two examples would seem reasonable.
But what makes “Intelligence Without Representation” a particularly impressive rhetorical work is the role these four examples play in the overall argument.
Brooks’s article, despite featuring four prominent inductive examples, is a deductive argument built around an overarching, three-premise enthymeme as shown in
the table. Each example Brooks provides contributes a specific aspect to the
enthymeme. As is often the case, the major premise is left mostly unaddressed;
Brooks assumes his readers will agree that the best way to solve a complex problem is by tackling the hardest part first. His first example, then—human evolution—speaks to his first minor premise, that the general features of intelligence are
the hardest to duplicate. The contrast in the amount of time each took to evolve argues forcefully for Brooks’s contention without Brooks ever saying the actual
words.
The next two examples work together to form the second minor premise. The
humorous AF parable very pointedly suggests that most (if not all) AI researchers
have been spending too much time studying the “double pane windows” of the
mind. And Brooks’s description of the CHAIR representation argues for his belief
that the most useful aspects of intelligence have been sidestepped via computer
languages.
TABLE 1
The Enthymeme in Rodney Brooks’s “Intelligence Without Representation”

Major premise: Solving complex problems is best accomplished by addressing the hardest parts of the problem first.
Minor premise: The hard part of AI is general intelligence (motion, vision, etc.).
Minor premise: Traditional, top-down AI has instead focused on the easy part, i.e., specialized intelligence (theorem proving, game playing, etc.).
Conclusion: The fastest route to AI is to build learning machines (especially those with independent layers of control and that can be tested in the real world) with general intelligence capabilities and allow them to gain the specialized intelligence they will need.
The final example presents the case (if only partially complete) for Brooks’s
conclusion. His Creature is an actual instantiation of the principles he espouses in
the article, but the behavior the Creature exhibits is surprisingly complex given the
relatively simple architecture involved. Tying this last example back to the first,
Brooks (1999) claims the Creature operates with “simple insect level intelligence”
(p. 98), certainly a far cry from human intelligence, but, using the evolutionary
time scale, perhaps not as far as his readers might have previously thought.
CONCLUSION
Rodney Brooks has become an icon of artificial intelligence, one of the most visible figures in the field; he was even featured in a full-length documentary, Fast,
Cheap, and Out of Control, the title of which was taken from one of his papers.
Much of his success is undoubtedly due to his vision and seemingly boundless energy: In addition to directing the MIT AI Lab, he supervises graduate students, publishes books and articles, and has founded two private companies.
But his success within the AI community is also due in large part to his ability to
argue persuasively. Several of his works, both article and book length, are considered foundational in the field, items that no one serious about the field can ignore.
“Intelligence Without Representation” is one of his most important, and the analysis I have offered here can help explain his rhetorical success. In “Intelligence,” he
employs numerous examples and analogies, but, rather than give his arguments the
“inductive air” Aristotle (1984b) warned about (p. 133), Brooks uses these analogies to establish an enthymeme. This combination of induction and deduction affords Brooks the advantages of both, and the result is an effective essay, one that
has convinced many of its readers to support, and often join, his programs.
Rhetoricians and philosophers of science have long argued that analogy is an
important part of scientific discovery and progression. Scientists have, just as long,
resisted this notion, believing that a reliance on analogy detracted from the reality
of their work. Brooks provides a middle ground in this disagreement, and he provides technical writing teachers an avenue for the discussion of a tool that will be
important for our students to master. He shows that persuasion need not, as Aristotle argued, be by enthymeme or example alone—it may be by both.
This lesson can be a valuable one for our students, and I argue that our students
will be better scientific communicators if we can teach them to think specifically
about the uses of inductive reasoning. Analogy is clearly an important tool in scientific thought, and Brooks shows how important it can be in scientific writing; we
can, therefore, do our students a great service by including discussions of analogy,
example, and metaphor in our classes. This approach will, especially for our science and engineering students, provide them with another valuable tool with
which to handle the various rhetorical situations they will encounter.
REFERENCES
Aristotle. (1984a). The complete works of Aristotle (J. Barnes, Ed., Vols. 1–2). Princeton, NJ: Princeton
University Press.
Aristotle. (1984b). The rhetoric and the poetics of Aristotle (W. R. Roberts & I. Bywater, Trans.). New
York: The Modern Library.
Baake, K. (2003). Metaphor and knowledge: The challenges of writing science. Albany, NY: State University of New York Press.
Babbage, C. (1817). Observations on the analogy which subsists between the calculus of functions and
other branches of analysis. Philosophical Transactions of the Royal Society of London, 107,
197–216.
Bazerman, C. (1988). Shaping written knowledge: The genre and activity of the experimental article in
science. Madison, WI: University of Wisconsin Press.
Brooks, R. A. (1999). Intelligence without representation. In Cambrian intelligence: The early story of the new AI (chap. 5). Cambridge, MA: MIT Press.
Burke, K. (1969). A grammar of motives. Berkeley, CA: University of California Press.
Burke, K. (1984). Permanence and change: An anatomy of purpose (3rd ed.). Berkeley, CA: University
of California Press.
Campbell, N. R. (1920). Physics, the elements. Cambridge, England: Cambridge University Press.
Carnap, R. (1963). Variety, analogy, and periodicity in inductive logic. Philosophy of Science, 30,
222–227.
Castañeda, H. (1962). Criteria, analogy, and knowledge of other minds. The Journal of Philosophy, 59,
533–546.
Caws, P. (1965). The philosophy of science: A systematic account. Princeton, NJ: D. Van Nostrand
Company, Inc.
Corbett, E. P. J. (1971). Classical rhetoric for the modern student (2nd ed.). New York: Oxford University Press.
Gentner, D., & Jeziorski, M. (1993). The shift from metaphor to analogy in Western science. In A.
Ortony (Ed.), Metaphor and thought (2nd ed., pp. 447–480). Cambridge, England: Cambridge University Press.
Graves, H. B., & Graves, R. (1998). Masters, slaves, and infant mortality: Language challenges for
technical editing. Technical Communication Quarterly, 7, 389–413.
Hesse, M. B. (1981). The function of analogies in science. In R. D. Tweney, M. E. Doherty, & C. R.
Mynatt (Eds.), On scientific thinking (pp. 345–348). New York: Columbia University Press.
Höffding, H. (1905). On analogy and its philosophical importance. Mind, 14, 199–209.
Jastrow, J. (1891). The natural history of analogy. Science, 18, 118–119.
Jevons, W. S. (1958). The principles of science: A treatise on logic and scientific method. New York:
Dover Publications.
Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: University of Chicago Press.
Lanham, R. (1991). A handlist of rhetorical terms (2nd ed.). Berkeley, CA: University of California
Press.
Leatherdale, W. H. (1974). The role of analogy, model, and metaphor in science. Amsterdam:
North-Holland Publishing Company.
Little, J. (2008). The role of analogy in George Gamow’s derivation of drop energy. Technical Communication Quarterly, 17, 220–238.
Meiland, J. W. (1966). Analogy, verification and other minds. Mind, 75, 564–568.
Mill, J. S. (1891). A system of logic ratiocinative and inductive. London: Longmans, Green, and Co.
Ogden, C. K., & Richards, I. A. (1946). The meaning of meaning (8th ed.). New York: Harcourt Brace
Jovanovich.
Pap, A. (1962). An introduction to the philosophy of science. New York: Free Press of Glencoe.
Penrose, A. M., & Katz, S. B. (2004). Writing in the sciences (2nd ed.). New York: Pearson, Longman.
Perelman, C. (1982). The realm of rhetoric (W. Kluback, Trans.). Notre Dame, IN: University of Notre
Dame Press.
Sewell, E. (1964). The human metaphor. Notre Dame, IN: University of Notre Dame Press.
Stepan, N. L. (1986). Race and gender: The role of analogy in science. Isis, 77, 261–277.
Thomson, J. F. (1951). The argument from analogy and our knowledge of other minds. Mind, 60,
336–350.
Turing, A. M. (1990). Computing machinery and intelligence. In M. A. Boden (Ed.), The philosophy of
artificial intelligence (pp. 40–66). Oxford, England: Oxford University Press.
Keith Gibson is an assistant professor of technical and professional communication at
Utah State University. He has spent the last several years researching the rhetorical strategies of scientific discourse communities, with a special emphasis on artificial intelligence researchers and the classical rhetoric tools they employ.