Academic Readings - Shoreline Community College

Academic Readings
for
English 80 & English 90/Study-Skills 100
1. Who Are You and What Are You Doing Here?
2. Did the Decline of Sampling Cause the Decline of Political
Hip Hop?
3. The Killing Machines: How to think about drones
4. How do you read "-3"?
5. Murder by Craigslist
6. The Case for Working with Your Hands
7. Fenimore Cooper's Literary Offenses
8. ON THE USES OF A LIBERAL EDUCATION: As a Weapon
in the Hands of the Restless Poor
9. How Did Economists Get It So Wrong?
10. The Evolution of Life on Earth
Who Are You and What Are You Doing Here?
A message in a bottle to the incoming class.
Welcome and congratulations: Getting to the first day of college is a major
achievement. You’re to be commended, and not just you, but the parents,
grandparents, uncles, and aunts who helped get you here.
It’s been said that raising a child effectively takes a village: Well, as you may
have noticed, our American village is not in very good shape. We’ve got guns,
drugs, two wars, fanatical religions, a slime-based popular culture, and some
politicians who—a little restraint here—aren’t what they might be. To merely
survive in this American village and to win a place in the entering class has
taken a lot of grit on your part. So, yes, congratulations to all.
You now may think that you’ve about got it made. Amidst the impressive
college buildings, in company with a high-powered faculty, surrounded by the
best of your generation, all you need is to keep doing what you’ve done before:
Work hard, get good grades, listen to your teachers, get along with the people
around you, and you’ll emerge in four years as an educated young man or
woman. Ready for life.
Do not believe it. It is not true. If you want to get a real education in America
you’re going to have to fight—and I don’t mean just fight against the drugs
and the violence and against the slime-based culture that is still going to
surround you. I mean something a little more disturbing. To get an
education, you’re probably going to have to fight against the institution that
you find yourself in—no matter how prestigious it may be. (In fact, the more
prestigious the school, the more you’ll probably have to push.) You can get a
terrific education in America now—there are astonishing opportunities at
almost every college—but the education will not be presented to you wrapped
and bowed. To get it, you’ll need to struggle and strive, to be strong, and
occasionally even to piss off some admirable people.
I came to college with few resources, but one of them was an understanding,
however crude, of how I might use my opportunities there. This I began to
develop because of my father, who had never been to college—in fact, he’d
barely gotten out of high school. One night after dinner, he and I were sitting
in our kitchen at 58 Clewley Road in Medford, Massachusetts, hatching plans
about the rest of my life. I was about to go off to college, a feat no one in my
family had accomplished in living memory. “I think I might want to be pre-law,” I told my father. I had no idea what being pre-law was. My father
compressed his brow and blew twin streams of smoke, dragon-like, from his
magnificent nose. “Do you want to be a lawyer?” he asked. My father had
some experience with lawyers, and with policemen, too; he was not well-disposed toward either. “I’m not really sure,” I told him, “but lawyers make
pretty good money, right?”
My father detonated. (That was not uncommon. My father detonated a lot.)
He told me that I was going to go to college only once, and that while I was
there I had better study what I wanted. He said that when rich kids went to
school, they majored in the subjects that interested them, and that my
younger brother Philip and I were as good as any rich kids. (We were rich
kids minus the money.) Wasn’t I interested in literature? I confessed that I
was. Then I had better study literature, unless I had inside information to
the effect that reincarnation wasn’t just hype, and I’d be able to attend
college thirty or forty times. If I had such info, pre-law would be fine, and
maybe even a tour through invertebrate biology could also be tossed in. But
until I had the reincarnation stuff from a solid source, I better get to work
and pick out some English classes from the course catalog. “How about the
science requirements?”
“Take ’em later,” he said, “you never know.”
My father, Wright Aukenhead Edmundson, Malden High School Class of
1948 (by a hair), knew the score. What he told me that evening at the
Clewley Road kitchen table was true in itself, and it also contains the germ of
an idea about what a university education should be. But apparently almost
everyone else—students, teachers, and trustees and parents—sees the matter
much differently. They have it wrong.
Education has one salient enemy in present-day America, and that enemy is
education—university education in particular. To almost everyone, university
education is a means to an end. For students, that end is a good job. Students
want the credentials that will help them get ahead. They want the certificate
that will give them access to Wall Street, or entrance into law or medical or
business school. And how can we blame them? America values power and
money, big players with big bucks. When we raise our children, we tell them
in multiple ways that what we want most for them is success—material
success. To be poor in America is to be a failure—it’s to be without decent
health care, without basic necessities, often without dignity. Then there are
those back-breaking student loans—people leave school as servants,
indentured to pay massive bills, so that first job better be a good one.
Students come to college with the goal of a diploma in mind—what happens
in between, especially in classrooms, is often of no deep and determining
interest to them.
In college, life is elsewhere. Life is at parties, at clubs, in music, with friends,
in sports. Life is what celebrities have. The idea that the courses you take
should be the primary objective of going to college is tacitly considered
absurd. In terms of their work, students live in the future and not the
present; they live with their prospects for success. If universities stopped
issuing credentials, half of the clients would be gone by tomorrow morning,
with the remainder following fast behind.
The faculty, too, is often absent: Their real lives are also elsewhere. Like
most of their students, they aim to get on. The work they are compelled to do
to advance—get tenure, promotion, raises, outside offers—is, broadly
speaking, scholarly work. No matter what anyone says, this work has precious
little to do with the fundamentals of teaching. The proof is that virtually no
undergraduate students can read and understand their professors’ scholarly
publications. The public senses this disparity and so thinks of the professors’
work as being silly or beside the point. Some of it is. But the public also
senses that because professors don’t pay full-bore attention to teaching they
don’t have to work very hard—they’ve created a massive feather bed for
themselves and called it a university.
This is radically false. Ambitious professors, the ones who, like their
students, want to get ahead in America, work furiously. Scholarship, even if
pretentious and almost unreadable, is nonetheless labor-intense. One can
slave for a year or two on a single article for publication in this or that
refereed journal. These essays are honest: Their footnotes reflect real
reading, real assimilation, and real dedication. Shoddy work—in which the
author cheats, cuts corners, copies from others—is quickly detected. The
people who do this work have highly developed intellectual powers, and they
push themselves hard to reach a certain standard: That the results have
almost no practical relevance to the students, the public, or even, frequently,
to other scholars is a central element in the tragicomedy that is often
academia.
The students and the professors have made a deal: Neither of them has to
throw himself heart and soul into what happens in the classroom. The
students write their abstract, over-intellectualized essays; the professors
grade the students for their capacity to be abstract and over-intellectual—
and often genuinely smart. For their essays can be brilliant, in a chilly way;
they can also be clipped off the Internet, and often are. Whatever the case, no
one wants to invest too much in them—for life is elsewhere. The professor
saves his energies for the profession, while the student saves his for friends,
social life, volunteer work, making connections, and getting in position to
clasp hands on the true grail, the first job.
No one in this picture is evil; no one is criminally irresponsible. It’s just that
smart people are prone to look into matters to see how they might go about
buttering their toast. Then they butter their toast.
As for the administrators, their relation to the students often seems based
not on love but fear. Administrators fear bad publicity, scandal, and
dissatisfaction on the part of their customers. More than anything else,
though, they fear lawsuits. Throwing a student out of college, for this or that
piece of bad behavior, is very difficult, almost impossible. The student will
sue your eyes out. One kid I knew (and rather liked) threatened on his blog to
mince his dear and esteemed professor (me) with a samurai sword for the
crime of having taught a boring class. (The class was a little boring—I had a
damned cold—but the punishment seemed a bit severe.) The dean of students
laughed lightly when I suggested that this behavior might be grounds for
sending the student on a brief vacation. I was, you might say, discomfited,
and showed up to class for a while with my cellphone jiggered to dial 911
with one touch.
Still, this was small potatoes. Colleges are even leery of disciplining guys who
have committed sexual assault, or assault plain and simple. Instead of being
punished, these guys frequently stay around, strolling the quad and swilling
the libations, an affront (and sometimes a terror) to their victims.
You’ll find that cheating is common as well. As far as I can discern, the
student ethos goes like this: If the professor is so lazy that he gives the same
test every year, it’s okay to go ahead and take advantage—you’ve both got
better things to do. The Internet is amok with services selling term papers
and those services exist, capitalism being what it is, because people purchase
the papers—lots of them. Fraternity files bulge with old tests from a variety
of courses.
Periodically the public gets exercised about this situation, and there are
articles in the national news. But then interest dwindles and matters go back
to normal.
One of the reasons professors sometimes look the other way when they sense
cheating is that it sends them into a world of sorrow. A friend of mine had the
temerity to detect cheating on the part of a kid who was the nephew of a well-placed official in an Arab government complexly aligned with the U.S. Black
limousines pulled up in front of his office and disgorged decorously suited
negotiators. Did my pal fold? Nope, he’s not the type. But he did not enjoy the
process.
What colleges generally want are well-rounded students, civic leaders, people
who know what the system demands, how to keep matters light, not push too
hard for an education or anything else; people who get their credentials and
leave the professors alone to do their brilliant work, so they may rise and
enhance the rankings of the university. Such students leave and become
donors and so, in their own turn, contribute immeasurably to the university’s
standing. They’ve done a fine job skating on surfaces in high school—the best
way to get an across-the-board outstanding record—and now they’re on
campus to cut a few more figure eights.
In a culture where the major and determining values are monetary, what else
could you do? How else would you live if not by getting all you can, succeeding
all you can, making all you can?
The idea that a university education really should have no substantial
content, should not be about what John Keats was disposed to call Soul-making, is one that you might think professors and university presidents
would be discreet about. Not so. This view informed an address that Richard
Brodhead gave to the senior class at Yale before he departed to become
president of Duke. Brodhead, an impressive, articulate man, seems to take as
his educational touchstone the Duke of Wellington’s precept that the Battle of
Waterloo was won on the playing fields of Eton. Brodhead suggests that the
content of the courses isn’t really what matters. In five years (or five months,
or minutes), the student is likely to have forgotten how to do the problem sets
and will only hazily recollect what happens in the ninth book of Paradise
Lost. The legacy of their college years will be a legacy of difficulties overcome.
When they face equally arduous tasks later in life, students will tap their old
resources of determination, and they’ll win.
All right, there’s nothing wrong with this as far as it goes—after all, the
student who writes a brilliant forty-page thesis in a hard week has learned
more than a little about her inner resources. Maybe it will give her needed
confidence in the future. But doesn’t the content of the courses matter at all?
On the evidence of this talk, no. Trying to figure out whether the stuff you’re
reading is true or false and being open to having your life changed is a
fraught, controversial activity. Doing so requires energy from the professor—
which is better spent on other matters. This kind of perspective-altering
teaching and learning can cause the things which administrators fear above
all else: trouble, arguments, bad press, etc. After the kid-samurai episode, the
chair of my department not unsympathetically suggested that this was the
sort of incident that could happen when you brought a certain intensity to
teaching. At the time I found his remark a tad detached, but maybe he was
right.
So, if you want an education, the odds aren’t with you: The professors are off
doing what they call their own work; the other students, who’ve doped out the
way the place runs, are busy leaving the professors alone and getting
themselves in position for bright and shining futures; the student-services
people are trying to keep everyone content, offering plenty of entertainment
and building another state-of-the-art workout facility every few months. The
development office is already scanning you for future donations. The primary
function of Yale University, it’s recently been said, is to create prosperous
alumni so as to enrich Yale University.
So why make trouble? Why not just go along? Let the profs roam free in the
realms of pure thought, let yourselves party in the realms of impure pleasure,
and let the student-services gang assert fewer prohibitions and newer
delights for you. You’ll get a good job, you’ll have plenty of friends, you’ll have
a driveway of your own.
You’ll also, if my father and I are right, be truly and righteously screwed. The
reason for this is simple. The quest at the center of a liberal-arts education is
not a luxury quest; it’s a necessity quest. If you do not undertake it, you risk
leading a life of desperation—maybe quiet, maybe, in time, very loud—and I
am not exaggerating. For you risk trying to be someone other than who you
are, which, in the long run, is killing.
By the time you come to college, you will have been told who you are
numberless times. Your parents and friends, your teachers, your counselors,
your priests and rabbis and ministers and imams have all had their say.
They’ve let you know how they size you up, and they’ve let you know what
they think you should value. They’ve given you a sharp and protracted taste
of what they feel is good and bad, right and wrong. Much is on their side.
They have confronted you with scriptures—holy books that, whatever their
actual provenance, have given people what they feel to be wisdom for
thousands of years. They’ve given you family traditions—you’ve learned the
ways of your tribe and your community. And, too, you’ve been tested, probed,
looked at up and down and through. The coach knows what your athletic
prospects are, the guidance office has a sheaf of test scores that relegate you
to this or that ability quadrant, and your teachers have got you pegged. You
are, as Foucault might say, the intersection of many evaluative and
potentially determining discourses: you boy, you girl, have been made.
And—contra Foucault—that’s not so bad. Embedded in all of the major
religions are profound truths. Schopenhauer, who despised belief in
transcendent things, nonetheless thought Christianity to be of inexpressible
worth. He couldn’t believe in the divinity of Jesus, or in the afterlife, but to
Schopenhauer, a deep pessimist, a religion that had as its central emblem the
figure of a man being tortured on a cross couldn’t be entirely misleading. To
the Christian, Schopenhauer said, pain was at the center of the
understanding of life, and that was just as it should be.
One does not need to be as harsh as Schopenhauer to understand the use of
religion, even if one does not believe in an otherworldly god. And all of those
teachers and counselors and friends—and the prognosticating uncles, the
dithering aunts, the fathers and mothers with their hopes for your
fulfillment—or their fulfillment in you—should not necessarily be cast aside
or ignored. Families have their wisdom. The question “Who do they think you
are at home?” is never an idle one.
The major conservative thinkers have always been very serious about what
goes by the name of common sense. Edmund Burke saw common sense as a
loosely made, but often profound, collective work, in which humanity has
deposited its hard-earned wisdom—the precipitate of joy and tears—over
time. You have been raised in proximity to common sense, if you’ve been
raised at all, and common sense is something to respect, though not quite—
peace unto the formidable Burke—to revere.
You may be all that the good people who raised you say you are; you may
want all they have shown you is worth wanting; you may be someone who is
truly your father’s son or your mother’s daughter. But then again, you may
not be.
For the power that is in you, as Emerson suggested, may be new in nature.
You may not be the person that your parents take you to be. And—this
thought is both more exciting and more dangerous—you may not be the
person that you take yourself to be, either. You may not have read yourself
aright, and college is the place where you can find out whether you have or
not. The reason to read Blake and Dickinson and Freud and Dickens is not to
become more cultivated, or more articulate, or to be someone who, at a
cocktail party, is never embarrassed (or who can embarrass others). The best
reason to read them is to see if they may know you better than you know
yourself. You may find your own suppressed and rejected thoughts flowing
back to you with an “alienated majesty.” Reading the great writers, you may
have the experience that Longinus associated with the sublime: You feel that
you have actually created the text yourself. For somehow your predecessors
are more yourself than you are.
This was my own experience reading the two writers who have influenced me
the most, Sigmund Freud and Ralph Waldo Emerson. They gave words to
thoughts and feelings that I had never been able to render myself. They
shone a light onto the world, and what they saw, suddenly I saw, too. From
Emerson I learned to trust my own thoughts, to trust them even when every
voice seems to be on the other side. I need the wherewithal, as Emerson did,
to say what’s on my mind and to take the inevitable hits. Much more I
learned from the sage—about character, about loss, about joy, about writing
and its secret sources, but Emerson most centrally preaches the gospel of self-reliance
and that is what I have tried most to take from him. I continue to
hold in mind one of Emerson’s most memorable passages: “Society is a joint-stock
company, in which the members agree, for the better securing of his
bread to each shareholder, to surrender the liberty and culture of the eater.
The virtue in most request is conformity. Self-reliance is its aversion. It loves
not realities and creators, but names and customs.”
Emerson’s greatness lies not only in showing you how powerful names and
customs can be, but also in demonstrating how exhilarating it is to buck
them. When he came to Harvard to talk about religion, he shocked the
professors and students by challenging the divinity of Jesus and the truth of
his miracles. He wasn’t invited back for decades.
From Freud I found a great deal to ponder as well. I don’t mean Freud the
aspiring scientist, but the Freud who was a speculative essayist and
interpreter of the human condition like Emerson. Freud challenges nearly
every significant human ideal. He goes after religion. He says that it comes
down to the longing for the father. He goes after love. He calls it “the
overestimation of the erotic object.” He attacks our desire for charismatic
popular leaders. We’re drawn to them because we hunger for absolute
authority. He declares that dreams don’t predict the future and that there’s
nothing benevolent about them. They’re disguised fulfillments of repressed
wishes.
Freud has something challenging and provoking to say about virtually every
human aspiration. I learned that if I wanted to affirm any consequential
ideal, I had to talk my way past Freud. He was—and is—a perpetual
challenge and goad.
Never has there been a more shrewd and imaginative cartographer of the
psyche. His separation of the self into three parts, and his sense of the
fraught, anxious, but often negotiable relations among them (negotiable when
you come to the game with a Freudian knowledge), does a great deal to help
one navigate experience. (Though sometimes—and this I owe to Emerson—it
seems right to let the psyche fall into civil war, accepting barrages of anxiety
and grief for this or that good reason.)
The battle is to make such writers one’s own, to winnow them out and to find
their essential truths. We need to see where they fall short and where they
exceed the mark, and then to develop them a little, as the ideas themselves,
one comes to see, actually developed others. (Both Emerson and Freud live
out of Shakespeare—but only a giant can be truly influenced by
Shakespeare.) In reading, I continue to look for one thing—to be influenced,
to learn something new, to be thrown off my course and onto another, better
way.
My father knew that he was dissatisfied with life. He knew that none of the
descriptions people had for him quite fit. He understood that he was always
out-of-joint with life as it was. He had talent: My brother and I each got about
half the raw ability he possessed and that’s taken us through life well
enough. But what to do with that talent—there was the rub for my father. He
used to stroll through the house intoning his favorite line from Groucho
Marx’s ditty “Whatever it is, I’m against it.” (I recently asked my son, now
twenty-one, if he thought I was mistaken in teaching him this particular song
when he was six years old. “No!” he said, filling the air with an invisible
forest of exclamation points.) But what my father never managed to get was a
sense of who he might become. He never had a world of possibilities spread
before him, never made sustained contact with the best that had been
thought and said. He didn’t get to revise his understanding of himself, figure
out what he’d do best that might give the world some profit.
My father was a gruff man, but also a generous one, so that night at the
kitchen table at 58 Clewley Road he made an effort to let me have the chance
that had been denied to him by both fate and character. He gave me the
chance to see what I was all about, and if it proved to be different from him,
proved even to be something he didn’t like or entirely comprehend, then he’d
deal with it.
Right now, if you’re going to get a real education, you may have to be
aggressive and assertive.
Your professors will give you some fine books to read, and they’ll probably
help you understand them. What they won’t do, for reasons that perplex me,
is to ask you if the books contain truths you could live your lives by. When
you read Plato, you’ll probably learn about his metaphysics and his politics
and his way of conceiving the soul. But no one will ask you if his ideas are
good enough to believe in. No one will ask you, in the words of Emerson’s
disciple William James, what their “cash value” might be. No one will suggest
that you might use Plato as your bible for a week or a year or longer. No one,
in short, will ask you to use Plato to help you change your life.
That will be up to you. You must put the question of Plato to yourself. You
must ask whether reason should always rule the passions, philosophers
should always rule the state, and poets should inevitably be banished from a
just commonwealth. You have to ask yourself if wildly expressive music (rock
and rap and the rest) deranges the soul in ways that are destructive to its
health. You must inquire of yourself if balanced calm is the most desirable
human state.
Occasionally—for you will need some help in fleshing out the answers—you
may have to prod your professors to see if they take the text at hand—in this
case the divine and disturbing Plato—to be true. And you will have to be
tough if the professor mocks you for uttering a sincere question instead of
keeping matters easy for all concerned by staying detached and analytical.
(Detached analysis has a place—but, in the end, you’ve got to speak from the
heart and pose the question of truth.) You’ll be the one who pesters his
teachers. You’ll ask your history teacher about whether there is a design to
our history, whether we’re progressing or declining, or whether, in the words
of a fine recent play, The History Boys, history’s “just one fuckin’ thing after
another.” You’ll be the one who challenges your biology teacher about the
intellectual conflict between evolution and creationist thinking. You’ll not
only question the statistics teacher about what numbers can explain but
what they can’t.
Because every subject you study is a language and since you may adopt one of
these languages as your own, you’ll want to know how to speak it expertly
and also how it fails to deal with those concerns for which it has no adequate
words. You’ll be looking into the reach of every metaphor that every
discipline offers, and you’ll be trying to see around their corners.
The whole business is scary, of course. What if you arrive at college devoted
to pre-med, sure that nothing will make you and your family happier than a
life as a physician, only to discover that elementary-school teaching is where
your heart is?
You might learn that you’re not meant to be a doctor at all. Of course, given
your intellect and discipline, you can still probably be one. You can pound
your round peg through the very square hole of medical school, then go off
into the profession. And society will help you. Society has a cornucopia of
resources to encourage you in doing what society needs done but that you
don’t much like doing and are not cut out to do. To ease your grief, society
offers alcohol, television, drugs, divorce, and buying, buying, buying what you
don’t need. But all those too have their costs.
Education is about finding out what form of work for you is close to being
play—work you do so easily that it restores you as you go. Randall Jarrell
once said that if he were a rich man, he would pay money to teach poetry to
students. (I would, too, for what it’s worth.) In saying that, he (like my father)
hinted in the direction of a profound and true theory of learning.
Having found what’s best for you to do, you may be surprised how far you
rise, how prosperous, even against your own projections, you become. The
student who eschews medical school to follow his gift for teaching small
children spends his twenties in low-paying but pleasurable and soul-rewarding toil. He’s always behind on his student-loan payments; he still
lives in a house with four other guys (not all of whom got proper instructions
on how to clean a bathroom). He buys shirts from the Salvation Army, has
intermittent Internet, and vacations where he can. But lo—he has a gift for
teaching. He writes an essay about how to teach, then a book—which no one
buys. But he writes another—in part out of a feeling of injured merit,
maybe—and that one they do buy.
Money is still a problem, but in a new sense. The world wants him to write
more, lecture, travel more, and will pay him for his efforts, and he likes this a
good deal. But he also likes staying around and showing up at school and
figuring out how to get this or that little runny-nosed specimen to begin
learning how to read. These are the kinds of problems that are worth having
and if you advance, as Thoreau said, in the general direction of your dreams,
you may have them. If you advance in the direction of someone else’s
dreams—if you want to live someone else’s life rather than yours—then get a
TV for every room, buy yourself a lifetime supply of your favorite quaff, crank
up the porn channel, and groove away. But when we expend our energies in
rightful ways, Robert Frost observed, we stay whole and vigorous and we
don’t weary. “Strongly spent,” the poet says, “is synonymous with kept.”
Did the Decline of Sampling Cause the
Decline of Political Hip Hop?
By Erik Nielson
Last week, the electronic artist Clive Tanaka filed a suit against Nicki Minaj, claiming
that she and her production team cribbed significant portions of his 2011 track “Neu
Chicago” to create her 2012 super-hit, “Starships.” In doing so, he added the superstar
rapper to the growing list of hip-hop and R&B musicians to make recent headlines for
copyright disputes. There’s been well-publicized legal drama, for example, involving
Robin Thicke, Pharrell, and T.I. over their use of Marvin Gaye and Funkadelic tracks in
their chart-topping song “Blurred Lines.” Last month New Orleans musician Paul Batiste
accused a number of artists—T-Pain, Rick Ross, and DJ Khaled among them—of
illegally sampling his music. And just a few weeks ago, Young Jeezy found himself
facing a lawsuit from Leroy Hutson for the unauthorized use of Hutson’s song “Gettin’ It
On” on his 2010 mixtape, Trap or Die 2.
These kinds of lawsuits have become commonplace since the early 1990s, thanks in large
part to the 1991 U.S. District Court case Grand Upright Music, Ltd. v. Warner Bros.
Records, Inc., which ended the days of free-for-all sampling by requiring artists to clear
all samples in advance to avoid getting sued. The judge in the case opened his ruling
with “Thou shalt not steal” and went so far as to suggest that rapper Biz Markie and his
label should face criminal charges for their unauthorized use of a Gilbert O’Sullivan
sample. (They didn’t.) Similar cases followed, upholding the need to clear even the
smallest of samples.
As a result, landmark albums such as De La Soul’s Three Feet High and Rising and
Public Enemy’s It Takes a Nation of Millions to Hold Us Back, built upon a dizzying
array of samples, soon became all but impossible to produce because of the costs
involved (according to Spin the average base price to clear a single sample is $10,000).
To this day, sample-based rap remains a shadow of its former self, practiced only by hip
hop’s elite—those with the budgets to clear increasingly expensive samples or defend
lawsuits when they don’t.
Some of the consequences for rap music as a genre are clear, the most obvious being that
the sound of the music has changed. The relatively sample-free soundscapes of producers
like Timbaland or the Neptunes are a testament to that fact, as are the songs that rely on
just one or two samples rather than 20 or 30.
But might there be subtler, thematic implications of the decline in sampling?
It’s notable, for instance, that at the same time sampling was curbed by new copyright
enforcement, we also witnessed the sunset of rap’s “golden age,” a time when dropping
socially or politically engaged lyrics didn’t automatically relegate artists to “the
underground.” As someone who studies and teaches about hip hop (and who’s been
listening to the music for 25 years), I'm not sure that’s a coincidence. After all, sampling
provided an important engagement with musical and political history, a connection that
was interrupted by Grand Upright and the cases after it, coinciding with a growing
disconnect between rap music and a sense of social responsibility.
That’s not to say sampling always resulted in lyrics that educated, even during the
“golden age.” The Beastie Boys’ 1989 album Paul’s Boutique, a sampling classic,
wasn’t exactly concerned with social edification. But as Hank Shocklee, pioneering
member of Public Enemy’s production team The Bomb Squad, told me, having open
access to samples often did significantly impact artists’ lyrical content: “A lot of the
records that were being sampled were socially conscious, socially relevant records, and
that has a way of shaping the lyrics that you’re going to write in conjunction with them.”
When you take sampling out of the equation, Shocklee said, much of the social
consciousness disappears because, as he put it, “artists’ lyrical reference point only lies
within themselves.”
When that lyrical reference point can be rooted in previous compositions, the creative
possibilities become astonishing. Take the first 30 seconds of Public Enemy’s song
“Can’t Truss It,” off their 1991 album Apocalypse 91: The Enemy Strikes Black.
Lyrically, the song argues that in order to understand the present, African Americans
have to understand the past—they’ve got to “get to the roots” and grapple with the
historical legacy of slavery. To reinforce the song’s message, there’s an entire storyline
of samples underpinning the lyrics, beginning with Richard Pryor’s voice saying, “It
started in slave ships.” Then, immediately following, is a distorted sample of Alex Haley,
author of Roots (hence the connection to the song’s focus on “roots”), describing the
horrors of the Middle Passage. That clip then cuts to a sample of Malcolm X’s voice,
arguing for violent resistance, which ultimately foreshadows Chuck D’s vengeance later
in the song when he raps, “Still I plan to get my hands around the neck of the man with
the whip.” All throughout these opening moments, we hear churning helicopter blades,
providing a sonic connection to the present and a reminder of the ways in which police
and military power are still used to maintain the hierarchies that trace back to slavery.
The complex use of samples to comment on and reinforce the song’s message continues
throughout; at one point, there’s a sampled bass line from the group Slave, an obvious
connection to the lyrical content, and a subtle call to collective action with a recurring but
not immediately identifiable sample of James Brown’s “Get Up, Get Into It, Get
Involved.” Calling and responding to one another, the samples and the lyrics create
complementary, interconnected narratives that take listeners on a historical tour through
music and politics, in the process offering a reminder that rap music resides within
creative, intellectual, and communal traditions that are, in the words of Dead Prez,
“bigger than hip hop.”
In today’s hip-hop climate, dominated by the likes of Drake, Nicki Minaj, Rick Ross, and
2 Chainz, it has become difficult to find evidence that these traditions are informing
mainstream rap. Earlier this year when Young Money boss Lil Wayne was skewered for
his offensive reference to Emmett Till on “Karate Chop,” people weren’t just responding
to his callous indifference to the lynching of a young boy; they were responding to his
utter disregard for, and mockery of, the historical struggles that made his career possible.
With some exceptions, there’s little indication that top-selling rappers like Lil Wayne are
aware of their relatively small place in the social and creative history that made their
careers possible. And without a sense that they are part of something bigger, something
collective, I think it becomes all too easy for them to forego socially conscious messages
in favor of the tired, narcissistic lyrics that dominate radio rotations and Billboard charts.
"These new artists, you cannot learn anything from them," says legendary hip hop
producer Pete Rock, speaking to the frustration that I and many others have with today’s
rap music. “Not one thing. Nothing…It’s just whack how the game changed into
ignorance.” For him, this ignorance has become pervasive in large part due to a lost
connection with the past, one that sampling provided: “Subtract sampling and you get
ignorance…Cats are not open to learning about what was before them.”
Of course, there are a number of other reasons for what USC Professor Todd Boyd once
declared “The Death of Politics in Rap Music and Popular Culture,” not the least of
which was white America’s growing taste for gangsta-style narratives featuring larger-than-life outlaws and graphic depictions of sex and violence. And there are obviously
still artists who engage political themes, some of them without sampling at all.
Macklemore and Ryan Lewis’s mega-hit, The Heist, doesn’t contain a single sample, and
the album has a number of tracks that offer social and political commentary.
Conversely, there are a few contemporary rappers who do use samples extensively but
who are far from emulating the politics of Public Enemy or KRS ONE. Kanye West is an
obvious example—he samples liberally and is just as likely to revel in material excess
and misogyny (as with the Ray Charles clip ported in for "Gold Digger") as he is to
deliver a pointed social message (as with the Gil Scott-Heron performance that closes out
My Beautiful Dark Twisted Fantasy). That oscillation may stem from the fact that,
nowadays, samples often speak less to hip-hop history than they do to present-day
earnings—they have become so expensive to clear that for someone like Kanye West,
they become markers of wealth to flaunt alongside his diamonds and Maybachs. So, for
instance, when Kanye sampled Otis Redding on Watch the Throne, music critic Chris
Richards probably got it right when he said, “although West’s creation sounded cool, the
overriding message was, ‘This cost me a lot of money.’”
Thanks to a corporate and legal system that rappers don’t control, that’s one of the
underlying messages of just about any sample today. This is a significant break from the
early days of hip hop. As Shocklee explains it, “The reason why we sampled in the
beginning was that we couldn’t afford to have a guitar player come in and play on our
record. We couldn’t afford to have that horn section…or the string sections. We were
like scavengers, going through the garbage bin and finding whatever we could from our
old dusty records.” In a complete paradigm shift, today it’s probably less expensive to
hire those string sections than to sample them.
With all the money now involved in sampling, though, there’s still one egalitarian space
that has remained: the circulation of free mixtapes, which well-known and obscure artists
alike use to generate publicity or showcase their talents. These mixes, often based on
uncleared samples, have generally avoided copyright lawsuits because they don’t
generate revenue (not directly, anyway), and they have served as an important entrée into
the industry for a number of well-known acts over the years. They have also provided a
space for artistic experimentation as well as, for some, a way to explore oppositional
politics. But recently even mixtapes have begun attracting lawsuits; along with Young
Jeezy, big-name rappers like 50 Cent, Kanye West, Lil Wayne, and Mac Miller have been
sued for illegal sampling on their free mixes.
And so, with this important avenue of expression closing off for hip hop performers, what
will the future of the music look like? As Pete Rock took me through the previous
generation of musicians that inspired and changed him, he finished by saying, “Music can
really, really raise you.” While he was talking about music’s capacity to elevate and
transform, a new generation of kids is being raised on a brand of rap music that has fewer
recognizable links to its artistic antecedents. Perhaps, with so many new, Internet-based
ways to access old music, my own children won’t need samples to “get to the roots” of
rap music and will find them on their own.
I hope so. But just in case, I’m feeding them a steady diet of Public Enemy.
The Killing Machines:
How to think about drones
By Mark Bowden
I. Unfairness
Consider David. The shepherd lad steps up to face in single combat the
Philistine giant Goliath. Armed with only a slender staff and a slingshot, he
confronts a fearsome warrior clad in a brass helmet and chain mail, wielding
a spear with a head as heavy as a sledge and a staff “like a weaver’s beam.”
Goliath scorns the approaching youth: “Am I a dog, that thou comest to me
with staves?” (1 Samuel 17)
David then famously slays the boastful giant with a single smooth stone from
his slingshot.
A story to gladden the hearts of underdogs everywhere, its biblical moral is:
Best to have God on your side. But subtract the theological context and what
you have is a parable about technology. The slingshot, a small, lightweight
weapon that employs simple physics to launch a missile with lethal force
from a distance, was an innovation that rendered all the giant’s advantages
moot. It ignored the spirit of the contest. David’s weapon was, like all
significant advances in warfare, essentially unfair.
As anyone who has ever been in combat will tell you, the last thing you want
is a fair fight. Technology has been tilting the balance of battles since Goliath
fell. I was born into the age of push-button warfare. Ivy Mike, the first
thermonuclear bomb, capable of vaporizing an entire modern metropolis, of
killing millions of people at once, was detonated over the Pacific before my
second birthday. Growing up, the concept of global annihilation wasn’t just
science fiction. We held civil-defense drills to practice for it.
Within my lifetime, that evolution has taken a surprising turn. Today we find
ourselves tangled in legal and moral knots over the drone, a weapon that can
find and strike a single target, often a single individual, via remote control.
Unlike nuclear weapons, the drone did not emerge from some multibillion-dollar program on the cutting edge of science. It isn’t even completely new.
The first Predator drone consisted of a snowmobile engine mounted on a
radio-controlled glider. When linked via satellite to a distant control center,
drones exploit telecommunications methods perfected years ago by TV
networks—in fact, the Air Force has gone to ESPN for advice. But when you
pull together this disparate technology, what you have is a weapon capable of
finding and killing someone just about anywhere in the world.
Drone strikes are a far cry from the atomic vaporizing of whole cities, but the
horror of war doesn’t seem to diminish when it is reduced in scale. If
anything, the act of willfully pinpointing a human being and summarily
executing him from afar distills war to a single ghastly act.
One day this past January, a small patrol of marines in southern
Afghanistan was working its way at dusk down a dirt road not far from
Kandahar, staying to either side to avoid planted bombs, when it
unexpectedly came under fire. The men scattered for cover. A battered pickup
truck was closing in on them and popping off rounds from what sounded like
a big gun.
Continents away, in a different time zone, a slender 19-year-old American
soldier sat at a desk before a large color monitor, watching this action unfold
in startlingly high definition. He had never been near a battlefield. He had
graduated from basic training straight out of high school, and was one of a
select few invited to fly Predators. This was his first time at the controls,
essentially a joystick and the monitor. The drone he was flying was roughly
15,000 feet above the besieged patrol, each member marked clearly in
monochrome on his monitor by an infrared uniform patch. He had been
instructed to watch over the patrol, and to “stay frosty,” meaning: Whatever
happens, don’t panic. No one had expected anything to happen. Now
something was happening.
The young pilot zoomed in tight on the approaching truck. He saw in its bed a
.50-caliber machine gun, a weapon that could do more damage to an army
than a platoon of Goliaths.
A colonel, watching over his shoulder, said, “They’re pinned down pretty
good. They’re gonna be screwed if you don’t do something.”
The colonel told the pilot to fix on the truck. A button on the joystick pulled
up a computer-generated reticle, a grid displaying exact ground coordinates,
distance, direction, range, etc. Once the computer locked on the pickup, it
stayed zeroed in on the moving target.
“Are you ready to help?” the colonel asked.
An overlay on the grid showed the anticipated blast radius of an AGM-114
Hellfire missile—the drone carried two. Communicating via a digital audio
link, the colonel instructed the men on the ground to back away, then gave
them a few seconds to do so.
The pilot scrutinized the vehicle. Those who have seen unclassified clips of
aerial attacks have only a dim appreciation of the optics available to the
military and the CIA.
“I could see exactly what kind of gun it was in back,” the pilot told me later.
“I could see two men in the front; their faces were covered. One was in the
passenger seat and one was in the driver’s seat, and then one was on the gun,
and I think there was another sitting in the bed of the truck, but he was kind
of obscured from my angle.”
On the radio, they could hear the marines on the ground shouting for help.
“Fire one,” said the colonel.
The Hellfire is a 100-pound antitank missile, designed to destroy an armored
vehicle. When the blast of smoke cleared, there was only a smoking crater on
the dirt road.
“I was kind of freaked out,” the pilot said. “My whole body was shaking. It
was something that was completely different. The first time doing it, it feels
bad almost. It’s not easy to take another person’s life. It’s tough to think
about. A lot of guys were congratulating me, telling me, ‘You protected them;
you did your job. That’s what you are trained to do, supposed to do,’ so that
was good reinforcement. But it’s still tough.”
One of the things that nagged at him, and that was still bugging him months
later, was that he had delivered this deathblow without having been in any
danger himself. The men he killed, and the marines on the ground, were at
war. They were risking their hides. Whereas he was working his scheduled
shift in a comfortable office building, on a sprawling base, in a peaceful
country. It seemed unfair. He had been inspired to enlist by his grandfather’s
manly stories of battle in the Korean War. He had wanted to prove something
to himself and to his family, to make them as proud of him as they had been
of his Pop-Pop.
“But this was a weird feeling,” he said. “You feel bad. You don’t feel worthy.
I’m sitting there safe and sound, and those guys down there are in the thick
of it, and I can have more impact than they can. It’s almost like I don’t feel
like I deserve to be safe.”
After slaying Goliath, David was made commander of the Israelite armies
and given the hand of King Saul’s daughter. When the Pentagon announced
earlier this year that it would award a new medal to drone pilots and cyber
warriors, it provoked such outrage from veterans that production of the new
decoration was halted and the secretary of defense sentenced the medal to a
review and then killed it. Members of Congress introduced legislation to
ensure that any such award would be ranked beneath the Purple Heart, the
medal given to every wounded soldier. How can someone who has never
physically been in combat receive a combat decoration?
The question hints at something more important than war medals, getting at
the core of our uneasiness about the drone. Like the slingshot, the drone
fundamentally alters the nature of combat. While the young Predator pilot
has overcome his unease—his was a clearly justifiable kill shot fired in
conventional combat, and the marines on the ground conveyed their sincere
gratitude—the sense of unfairness lingers.
If the soldier who pulls the trigger in safety feels this, consider the emotions
of those on the receiving end, left to pick up the body parts of their husbands,
fathers, brothers, friends. Where do they direct their anger? When the wrong
person is targeted, or an innocent bystander is killed, imagine the sense of
impotence and rage. How do those who remain strike back? No army is
arrayed against them, no airfield is nearby to be attacked. If they manage to
shoot down a drone, what have they done but disable a small machine? No
matter how justified a strike seems to us, no matter how carefully weighed
and skillfully applied, to those on the receiving end it is profoundly arrogant,
the act of an enemy so distant and superior that he is untouchable.
“The political message [of drone strikes] emphasizes the disparity in power
between the parties and reinforces popular support for the terrorists, who are
seen as David fighting Goliath,” Gabriella Blum and Philip B. Heymann, both
law professors at Harvard, wrote in their 2010 book, Laws, Outlaws, and
Terrorists: Lessons From the War on Terror. “Moreover, by resorting to
military force rather than to law enforcement, targeted killings might
strengthen the sense of legitimacy of terrorist operations, which are
sometimes viewed as the only viable option for the weak to fight against a
powerful empire.”
Is it any wonder that the enemy seizes upon targets of opportunity—a
crowded café, a passenger jet, the finish line of a marathon? There is no
moral justification for deliberately targeting civilians, but one can
understand why it is done. Arguably the strongest force driving lone-wolf
terror attacks in recent months throughout the Western world has been
anger over drone strikes.
The drone is effective. Its extraordinary precision makes it an advance in
humanitarian warfare. In theory, when used with principled restraint, it is
the perfect counterterrorism weapon. It targets indiscriminate killers with
exquisite discrimination. But because its aim can never be perfect, can only
be as good as the intelligence that guides it, sometimes it kills the wrong
people—and even when it doesn’t, its cold efficiency is literally inhuman.
So how should we feel about drones?
II. Gorgon Stare
The Defense Department has a secret state-of-the-art control center in Dubai
with an IMAX-size screen at the front of the main room that can project video
feed from dozens of drones at once. The Air Force has been directed to
maintain capability for 65 simultaneous Combat Air Patrols. Each of these
involves multiple drones, and maintains a persistent eye over a potential
target. The Dubai center, according to someone who has seen it, resembles a
control center at NASA, with hundreds of pilots and analysts arrayed in rows
before monitors.
This is a long way from the first known drone strike, on November 4, 2002,
when a Hellfire missile launched from a Predator over Yemen blew up a car
carrying Abu Ali al-Harithi, one of the al-Qaeda leaders responsible for the
2000 bombing of the USS Cole. Killed along with him in the car were five
others, including an American citizen, Kamal Derwish, who was suspected of
leading a terrorist cell based near Buffalo, New York. The drone used that
day had only recently been reconfigured as a weapon. During testing, its
designers had worried that the missile’s backblast would shatter the
lightweight craft. It didn’t. Since that day, drones have killed thousands of
people.
John Yoo, the law professor who got caught up in tremendous controversy as
a legal counselor to President George W. Bush over harsh interrogation
practices, was surprised that drone strikes have provoked so little hand-wringing.
“I would think if you are a civil libertarian, you ought to be much more upset
about the drone than Guantánamo and interrogations,” he told me when I
interviewed him recently. “Because I think the ultimate deprivation of liberty
would be the government taking away someone’s life. But with drone killings,
you do not see anything, not as a member of the public. You read reports
perhaps of people who are killed by drones, but it happens 3,000 miles away
and there are no pictures, there are no remains, there is no debris that
anyone in the United States ever sees. It’s kind of antiseptic. So it is like a
video game; it’s like Call of Duty.”
The least remarkable thing about the system is the drone itself. The Air
Force bristles at the very word—drones conjures autonomous flying robots,
reinforcing the notion that human beings are not piloting them. The Air
Force prefers that they be called Remotely Piloted Aircraft. But this linguistic
battle has already been lost: my New Oxford American Dictionary now
defines drone as—in addition to a male bee and monotonous speech—“a
remote-controlled pilotless aircraft or missile.” Even though drones now
range in size from a handheld Raven, thrown into the air by infantry units so
they can see over the next hill, to the Global Hawk, which is about the same
size as a Boeing 737, the craft itself is just an airplane. Most drones are
propeller-driven and slow-moving—early-20th-century technology.
In December 2012, when Iran cobbled together a rehabilitated version of a
ScanEagle that had crashed there, the catapult-launched weaponless Navy
drone was presented on Iranian national television as a major intelligence
coup.
“They could have gone to RadioShack and captured the same ‘secret’
technology,” Vice Admiral Mark I. Fox, the Navy’s deputy chief for
operations, plans, and strategy, told The New York Times. The vehicle had
less computing power than a smartphone.
Even when, the year before, Iran managed to recover a downed RQ-170
Sentinel, a stealthy, weaponless, unmanned vehicle flown primarily by the
CIA, one of the most sophisticated drones in the fleet, it had little more than
a nifty flying model. Anything sensitive inside had been remotely destroyed
before the Sentinel was seized.
James Poss, a retired Air Force major general who helped oversee the
Predator’s development, says he has grown so weary of fascination with the
vehicle itself that he’s adopted the slogan “It’s about the datalink, stupid.”
The craft is essentially a conduit, an eye in the sky. Cut off from its back end,
from its satellite links and its data processors, its intelligence analysts and
its controller, the drone is as useless as an eyeball disconnected from the
brain. What makes the system remarkable is everything downrange—what
the Air Force, in its defiantly tin-eared way, calls PED (Processing,
Exploitation, and Dissemination). Despite all the focus on missiles, what
gives a drone its singular value is its ability to provide perpetual, relatively
low-cost surveillance, watching a target continuously for hours, days, weeks,
even months. Missiles were mounted on Predators only because too much
time was lost when a fire mission had to be handed off to more-conventional
weapons platforms—a manned aircraft or ground- or ship-based missile
launcher. That delay reduced or erased the key advantage now afforded by
the drone. With steady, real-time surveillance, a controller can strike with
the target in his sights. He can, for instance, choose a moment when his
victim is isolated, or traveling in a car, reducing the chance of harming
anyone else.
I recently spoke with an Air Force pilot who asked to be identified only as
Major Dan. He has logged 600 combat hours in the B-1 bomber and, in the
past six years, well over 2,000 hours flying Reapers—larger, more heavily
armed versions of the Predator. He describes the Reaper as a significantly
better war-fighting tool for this mission than the B-1 in every measure. The
only thing you lose when you go from a B-1 to a Reaper, he says, is the thrill
of “lighting four afterburners” on a runway.
From a pilot’s perspective, drones have several key advantages. First,
mission duration can be vastly extended, with rotating crews. No more trying
to stay awake for long missions, nor enduring the physical and mental
stresses of flying. (“After you’ve been sitting in an ejection seat for 20 hours,
you are very tired and sore,” Dan says.)
In addition, drones provide far greater awareness of what’s happening on the
ground. They routinely watch targets for prolonged periods—sometimes for
months—before a decision is made to launch a missile. Once a B-1 is in flight,
the capacity for ground observation is more limited than what is available to
a drone pilot at a ground station. From his control station at the Pentagon,
Dan is not only watching the target in real time; he has immediate access to
every source of information about it, including a chat line with soldiers on the
ground.
Dan was so enthusiastic about these and other advantages of drones that,
until I prodded him, he didn’t say anything about the benefit of getting to be
home with his family and sleep in his own bed. Dan is 38 years old, married,
with two small children. In the years since he graduated from the Air Force
Academy, he has deployed several times to far-off bases for months-long
stretches. Now he is regularly home for dinner.
The dazzling clarity of the drone’s optics does have a downside. As a B-1 pilot,
Dan wouldn’t learn details about the effects of his weapons until a post-mission briefing. But flying a drone, he sees the carnage close-up, in real
time—the blood and severed body parts, the arrival of emergency responders,
the anguish of friends and family. Often he’s been watching the people he
kills for a long time before pulling the trigger. Drone pilots become familiar
with their victims. They see them in the ordinary rhythms of their lives—
with their wives and friends, with their children. War by remote control turns
out to be intimate and disturbing. Pilots are sometimes shaken.
“There is a very visceral connection to operations on the ground,” Dan says.
“When you see combat, when you hear the guy you are supporting who is
under fire, you hear the stress in his voice, you hear the emotions being
passed over the radio, you see the tracers and rounds being fired, and when
you are called upon to either fire a missile or drop a bomb, you witness the
effects of that firepower.” He witnesses it in a far more immediate way than
in the past, and he disdains the notion that he and his fellow drone pilots are
like video gamers, detached from the reality of their actions. If anything, they
are far more attached. At the same time, he dismisses the notion that the
carnage he now sees up close is emotionally crippling.
“In my mind, the understanding of what I did, I wouldn’t say that one was
significantly different from the other,” he says.
Drones collect three primary packages of data: straight visual; infrared (via a
heat-sensing camera that can see through darkness and clouds); and what is
called SIGINT (Signals Intelligence), gathered via electronic eavesdropping
devices and other sensors. One such device is known as LIDAR (a
combination of the words light and radar), which can map large areas in 3-D.
The optical sensors are so good, and the pixel array so dense, that the device
can zoom in clearly on objects only inches wide from well over 15,000 feet
above. With computer enhancement to eliminate distortion and counteract
motion, facial-recognition software is very close to being able to pick
individuals out of crowds. Operators do not even have to know exactly where
to look.
“We put in the theatre [in 2011] a system called Gorgon Stare,” Lieutenant
General Larry James, the Air Force’s deputy chief of staff for intelligence,
surveillance, and reconnaissance, told me. “Instead of one soda-straw-size
view of the world with the camera, we put essentially 10 cameras ganged
together, and it gives you a very wide area of view of about four kilometers by
four kilometers—about the size of the city of Fairfax, [Virginia]—that you can
watch continuously. Not as much fidelity in terms of what the camera can
see, but I can see movement of cars and people—those sorts of things. Now,
instead of staring at a small space, which may be, like, a villa or compound, I
can look at a whole city continuously for as long as I am flying that particular
system.”
Surveillance technology allows for more than just looking: computers store
these moving images so that analysts can dial back to a particular time and
place and zero in, or mark certain individuals and vehicles and instruct the
machines to track them over time. A suspected terrorist-cell leader or bomb
maker, say, can be watched for months. The computer can then instantly
draw maps showing patterns of movement: where the target went, when
there were visitors or deliveries to his home. If you were watched in this way
over a period of time, the data could not just draw a portrait of your daily
routine, but identify everyone with whom you associate. Add to this
cellphone, text, and e-mail intercepts, and you begin to see how special-ops
units in Iraq and Afghanistan can, after a single nighttime arrest, round up
entire networks before dawn.
All of this requires the collection and manipulation of huge amounts of data,
which, James says, is the most difficult technical challenge involved.
“Take video, for example,” he says. “ESPN has all kinds of tools where they
can go back and find Eli Manning in every video that was shot over the last
year, and they can probably do it in 20 minutes. So how do we bring those
types of tools [to intelligence work]? Okay, I want to find this red 1976 Chevy
pickup truck in every piece of video that I have shot in this area for the last
three months. We have a pretty hard push to really work with the Air Force
Research Lab, and the commercial community, to understand what tools I
can bring in to help make sense of all this data.”
To be used effectively, a drone must be able to hover over a potential target
for long periods. A typical Predator can stay aloft for about 20 hours; the
drones are flown in relays to maintain a continuous Combat Air Patrol.
Surveillance satellites pass over a given spot only once during each orbit of
the Earth. The longest the U-2, the most successful spy plane in history, can
stay in the air is about 10 hours, because of the need to spell its pilot and
refuel. The Predator gives military and intelligence agencies a surveillance
option that is both significantly less expensive and more useful, because it
flies unmanned, low, and slow.
Precisely because drones fly so low and so slow, and have such a “noisy”
electronic signature, operating them anywhere but in a controlled airspace is
impractical. The U.S. Air Force completely controls the sky over active war
zones like Afghanistan and Iraq—and has little to fear over countries like
Yemen, Somalia, and Mali. Over the rugged regions of northwestern
Pakistan, where most drone strikes have taken place, the U.S. operates with
the tacit approval of the Pakistani government. Without such permission, or
without a robust protection capability, the drone presents an easy target. Its
datalink can be disrupted, jammed, or hijacked. It’s only slightly harder to
shoot down than a hot-air balloon. This means there’s little danger of enemy
drone attacks in America anytime soon.
Drone technology has applications that go way beyond military uses, of
course—everything from domestic law enforcement to archeological surveys
to environmental studies. As they become smaller and cheaper, they will
become commonplace. Does this mean the government might someday begin
hurling thunderbolts at undesirables on city sidewalks? Unlikely. Our entire
legal system would have to collapse first. If the police just wanted to shoot
people on the street from a distance, they already can—they’ve had that
capability going back to the invention of the Kentucky long rifle and, before
that, the crossbow. I helped cover the one known instance of a local
government dropping a bomb on its own city, in 1985, when a stubborn back-to-nature cult called Move was in an armed standoff with the Philadelphia
police. Then-Mayor Wilson Goode authorized dropping a satchel packed with
explosives from a hovering helicopter onto a rooftop bunker in West
Philadelphia. The bomb caused a conflagration that consumed an entire city
block. The incident will live long in the annals of municipal stupidity. The
capability to do the same with a drone will not make choosing to do so any
smarter, or any more likely. And as for Big Brother’s eye in the sky,
authorities have been monitoring public spaces from overhead cameras,
helicopters, and planes for decades. Many people think it’s a good idea.
The drone is new only in that it combines known technology in an original
way—aircraft, global telecommunications links, optics, digital sensors,
supercomputers, etc. It greatly lowers the cost of persistent surveillance.
When armed, it becomes a remarkable, highly specialized tool: a weapon that
employs simple physics to launch a missile with lethal force from a distance,
a first step into a world where going to war does not mean fielding an army,
or putting any of your own soldiers, sailors, or pilots at risk.
III. The Kill List
It is the most exclusive list in the world, and you would not want to be on it.
The procedure may have changed, but several years back, at the height of the
drone war, President Obama held weekly counterterror meetings at which he
was presented with a list of potential targets—mostly al-Qaeda or Taliban
figures—complete with photos and brief bios laid out like “a high school
yearbook,” according to a report in The New York Times.
The list is the product of a rigorous vetting process that the administration
has kept secret. Campaigning for the White House in 2008, Obama made it
clear (although few of his supporters were listening closely) that he would
embrace drones to go after what he considered the appropriate post-9/11
military target—“core al-Qaeda.” When he took office, he inherited a drone
war that was already expanding. There were 53 known strikes inside
Pakistan in 2009 (according to numbers assembled from press reports by The
Long War Journal), up from 35 in 2008, and just five the year before that. In
2010, the annual total more than doubled, to 117. The onslaught was
effective, at least by some measures: letters seized in the 2011 raid that killed
Osama bin Laden show his consternation over the rain of death by drone.
As U.S. intelligence analysis improved, the number of targets proliferated.
Even some of the program’s supporters feared it was growing out of control.
The definition of a legitimate target and the methods employed to track such
a target were increasingly suspect. Relying on other countries’ intelligence
agencies for help, the U.S. was sometimes manipulated into striking people
who it believed were terrorist leaders but who may not have been, or found
itself implicated in practices that violate American values.
Reporters and academics at work in zones where Predator strikes had
become common warned of a large backlash. Gregory Johnsen, a scholar of
Near East studies at Princeton University, documented the phenomenon in a
2012 book about Yemen titled The Last Refuge. He showed that drone
attacks in Yemen tended to have the opposite of their intended effect,
particularly when people other than extremists were killed or hurt. Drones
hadn’t whittled al-Qaeda down, Johnsen argued; the organization had grown
threefold there. “US strikes and particularly those that kill civilians—be they
men or women—are sowing the seeds of future generations of terrorists,” he
wrote on his blog late last year. (See Johnsen’s accompanying article in this
issue.)
Michael Morell, who was the deputy director of the CIA until June, was
among those in the U.S. government who argued for more restraint. During
meetings with John Brennan, who was Obama’s counterterrorism adviser
until taking over as the CIA director last spring, Morrell said he worried that
the prevailing goal seemed to be using drones as artillery, striking anyone
who could be squeezed into the definition of a terrorist—an approach
derisively called “Whack-A-Mole.” Morell insisted that if the purpose of the
drone program was to diminish al-Qaeda and protect the United States from
terror attacks, then indiscriminate strikes were counterproductive.
Brennan launched an effort to select targets more carefully. Formalizing a
series of ad hoc meetings that began in the fall of 2009, Brennan in 2010
instituted weekly conclaves—in effect, death-penalty deliberations—where
would-be successors to bin Laden and Khalid Sheik Mohammed were selected
for execution before being presented to Obama for his approval. Brennan
demanded clear definitions. There were “high-value targets,” which consisted
of important al-Qaeda and Taliban figures; “imminent threats,” such as a
load of roadside bombs bound for the Afghan border; and, most controversial,
“signature strikes,” which were aimed at characters engaged in suspicious
activity in known enemy zones. In these principals’ meetings, which Brennan
chaired from the Situation Room, in the basement of the White House,
deliberations were divided into two parts—law and policy. The usual
participants included representatives from the Pentagon, CIA, State
Department, National Counterterrorism Center, and, initially, the Justice
Department—although after a while the lawyers stopped coming. In the first
part of the meetings, questions of legality were considered: Was the prospect
a lawful target? Was he high-level? Could he rightly be considered to pose an
“imminent” threat? Was arrest a viable alternative? Only when these criteria
were deemed met did the discussion shift toward policy. Was it smart to kill
this person? What sort of impact might the killing have on local authorities,
or on relations with the governments of Pakistan or Yemen? What effect
would killing him have on his own organization? Would it make things better
or worse?
Brennan himself was often the toughest questioner. Two regular meeting
participants described him to me as thoughtful and concerned; one said his
demeanor was “almost priestly.” Another routinely skeptical and cautious
participant was James Steinberg, the deputy secretary of state for the first
two and a half years of Obama’s first term, who adhered to a strict list of
acceptable legal criteria drawn up by the State Department’s counsel, Harold
Koh. These criteria stipulated that any drone target would have to be a “senior
member” of al-Qaeda who was “externally focused”—that is, actively plotting
attacks on America or on American citizens or armed forces. Koh was
confident that even if his criteria did not meet all the broader concerns of
human-rights activists, they would support an international-law claim of self-defense—and for that reason he thought the administration ought to make
the criteria public. Throughout Obama’s first term, members of the
administration argued about how much of the deliberation process to reveal.
During these debates, Koh’s position on complete disclosure was dismissively
termed “the Full Harold.” He was its only advocate.
Many of the sessions were contentious. The military and the CIA pushed
back hard against Koh’s strict criteria. Special Forces commanders, in
particular, abhorred what they saw as excessive efforts to “litigate” their war.
The price of every target the White House rejected, military commanders
said, was paid in American lives. Their arguments, coming from the war’s
front line, carried significant weight.
Cameron Munter, a veteran diplomat who was the U.S. ambassador to
Pakistan from 2010 to 2012, felt that weight firsthand when he tried to push
back. Munter saw American influence declining with nearly every strike.
While some factions in the Pakistani military and Inter-Services Intelligence
believed in the value of strikes, the Pakistani public grew increasingly
outraged, and elected officials increasingly hostile. Munter’s job was to
contain the crisis, a task complicated by the drone program’s secrecy, which
prevented him from explaining and defending America’s actions.
Matters came to a head in the summer of 2011 during a meeting to which
Munter was linked digitally. The dynamics of such meetings—where officials
turned to policy discussions after the legal determination had been made—
placed a premium on unified support for policy goals. Most participants
wanted to focus on the success of the battle against America’s enemies, not on
the corrosive foreign-policy side effects of the drone program.
At the decision meetings, it was hard for someone like Munter to say no. He
would appear digitally on the screen in the Situation Room, gazing out at the
vice president, the secretary of defense, and other principals, and they would
present him with the targeting decision they were prepared to make. It was
hard to object when so many people who titularly outranked him already
seemed set.
By June of 2011, however, two events in Pakistan—first the arrest and
subsequent release of the CIA contractor Raymond Davis, who had been
charged with murdering two Pakistanis who accosted him on the street in
Lahore, and then the Abbottabad raid that killed bin Laden—had brought
the U.S.-Pakistan partnership to a new low. Concerned about balancing the
short-term benefits of strikes (removing potential enemies from the
battlefield) and their long-term costs (creating a lasting mistrust and
resentment that undercut the policy goal of stability and peace in the region),
Munter decided to test what he believed was his authority to halt a strike. As
he recalled it later, the move played out as follows:
Asked whether he was on board with a particular strike, he said no.
Leon Panetta, the CIA director, said the ambassador had no veto power;
these were intelligence decisions.
Munter proceeded to explain that under Title 22 of the U.S. Code of Federal
Regulations, the president gives the authority to carry out U.S. policy in a
foreign country to his ambassador, delegated through the secretary of state.
That means no American policy should be carried out in any country without
the ambassador’s approval.
Taken aback, Panetta replied, “Well, I do not work for you, buddy.”
“I don’t work for you,” Munter told him.
Then Secretary of State Hillary Clinton stepped in: “Leon, you are wrong.”
Panetta said, flatly, “Hillary, you’re wrong.”
At that point, the discussion moved on. When the secretary of state and the
CIA director clash, the decision gets made upstairs.
Panetta won. A week later, James Steinberg called Munter to inform him
that he did not have the authority to veto a drone strike. Steinberg explained
that the ambassador would be allowed to express an objection to a strike, and
that a mechanism would be put in place to make sure his objection was
registered—but the decision to clear or reject a strike would be made higher
up the chain. It was a clear victory for the CIA.
Later that summer, General David Petraeus was named to take over the
intelligence agency from Panetta. Before assuming the job, Petraeus flew
from Kabul, where he was still the military commander, to Islamabad, to
meet with the ambassador. At dinner that night, Petraeus poked his finger
into Munter’s chest.
“You know what happened in that meeting?” the general asked. (Petraeus
had observed the clash via a secure link from his command post in
Afghanistan.) “That’s never going to happen again.”
Munter’s heart sank. He thought the new CIA director, whom he liked and
admired, was about to threaten him. Instead, Petraeus said: “I’m never going
to put you in the position where you feel compelled to veto a strike. If you
have a long-term concern, if you have a contextual problem, a timing
problem, an ethical problem, I want to know about it earlier. We can work
together to avoid these kinds of conflicts far in advance.”
Petraeus kept his word. Munter never had to challenge a drone strike in a
principals’ meeting again during his tenure as ambassador. He left
Islamabad in the summer of 2012.
By then, Brennan’s efforts to make the process more judicious had begun to
show results. The number of drone strikes in Pakistan and Yemen fell to 88
last year, and they have dropped off even more dramatically since.
The decline partly reflects the toll that the drone war has taken on al-Qaeda.
“There are fewer al-Qaeda leadership targets to hit,” a senior White House
official who is working on the administration’s evolving approach to drone
strikes told me. The reduction in strikes is “something that the president
directed. We don’t need a top-20 list. We don’t need to find 20 if there are only
10. We’ve gotten out of the business of maintaining a number as an end in
itself, so therefore that number has gone down.”
Any history of how the United States destroyed Osama bin Laden’s
organization will feature the drone. Whatever questions it has raised,
however uncomfortable it has made us feel, the drone has been an
extraordinarily effective weapon for the job. The U.S. faced a stateless, well-funded, highly organized terrorist operation that was sophisticated enough to
carry out unprecedented acts of mass murder. Today, while local al-Qaeda
franchises remain a threat throughout the Middle East, the organization that
planned and carried out 9/11 has been crushed. When bin Laden himself was
killed, Americans danced in the streets.
“Our actions are effective,” President Obama said in a speech on
counterterrorism at the National Defense University in May.
Don’t take my word for it. In the intelligence gathered at bin Laden’s
compound, we found that he wrote, ‘We could lose the reserves to enemy’s air
strikes. We cannot fight air strikes with explosives.’ Other communications
from al-Qaeda operatives confirm this as well. Dozens of highly skilled al-Qaeda commanders, trainers, bomb makers, and operatives have been taken
off the battlefield. Plots have been disrupted that would have targeted
international aviation, U.S. transit systems, European cities, and our troops
in Afghanistan. Simply put, these strikes have saved lives.
So why the steady drumbeat of complaint?
IV. Drones Don't Kill People. People Kill People.
The most ardent case against drone strikes is that they kill innocents. John
Brennan has argued that claims of collateral carnage are exaggerated. In
June 2011, he famously declared that there had not been “a single collateral
death” due to a drone strike in the previous 12 months.
Almost no one believes this. Brennan himself later amended his statement,
saying that in the previous 12 months, the United States had found no
“credible evidence” that any civilians had been killed in drone strikes outside
Afghanistan and Iraq. (I am using the word civilians here to mean
“noncombatants.”) A fair interpretation is that drones unfailingly hit their
targets, and so long as the U.S. government believes its targets are all
legitimate, the collateral damage is zero. But drones are only as accurate as
the intelligence that guides them. Even if the machine is perfect, it’s a stretch
to assume perfection in those who aim it.
For one thing, our military and intelligence agencies generously define
combatant to include any military-age male in the strike zone. And local
press accounts from many of the blast sites have reported dead women and
children. Some of that may be propaganda, but not all of it is. No matter how
precisely placed, when a 500-pound bomb or a Hellfire missile explodes, there
are sometimes going to be unintended victims in the vicinity.
How many? Estimates of body counts range so widely and are so politicized
that none of them is completely credible. At one extreme, anti-American
propagandists regularly publish estimates that make the drone war sound
borderline genocidal. These high numbers help drive the anti-drone
narrative, which equates actions of the U.S. government with acts of terror.
In two of the most recent Islamist terror attacks as of this writing—the
Boston Marathon bombing and the beheading of a soldier in London—the
perpetrators justified their killings as payback for the deaths of innocent
Muslims. At the other extreme, there is Brennan’s claim of zero civilian
casualties. The true numbers are unknowable.
Secrecy is a big part of the problem. The government doesn’t even
acknowledge most attacks, much less release details of their aftermath. The
Bureau of Investigative Journalism, a left-wing organization based in
London, has made a strenuous effort, using news sources, to count bodies
after CIA drone strikes. It estimates that from 2004 through the first half of
2013, 371 drone strikes in Pakistan killed between 2,564 and 3,567 people
(the range covers the minimum to the maximum credible reported deaths). Of
those killed, the group says, somewhere between 411 and 890—somewhere
between 12 percent and 35 percent of the total—were civilians. The disparity
in these figures is telling. But if we assume the worst case, and take the
largest estimates of soldier and civilian fatalities, then one-quarter of those
killed in drone strikes in Pakistan have been civilians.
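Spelled out, the arithmetic behind those percentages uses nothing more than the Bureau’s own figures quoted above:

\[
\frac{411}{3{,}567} \approx 12\%, \qquad \frac{890}{2{,}564} \approx 35\%, \qquad \frac{890}{3{,}567} \approx 25\% .
\]

The bounds pair the low civilian count against the high total and the high civilian count against the low total; the last ratio—the largest civilian estimate over the largest total—is the worst-case one-quarter share.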
Everyone agrees that the amount of collateral damage has dropped steeply
over the past two years. The Bureau of Investigative Journalism estimates
that civilian deaths from drone strikes in Pakistan fell to 12 percent of total
deaths in 2011 and to less than 3 percent in 2012.
No civilian death is acceptable, of course. Each one is tragic. But any
assessment of civilian deaths from drone strikes needs to be compared with
the potential damage from alternative tactics. Unless we are to forgo the
pursuit of al-Qaeda terrorists entirely, U.S. forces must confront them either
from the air or on the ground, in some of the remotest places on Earth. As
aerial attacks go, drones are far more precise than manned bombers or
missiles. That narrows the choice to drone strikes or ground assaults.
Sometimes ground assaults go smoothly. Take the one that killed Osama bin
Laden. It was executed by the best-trained, most-experienced soldiers in the
world. Killed were bin Laden; his adult son Khalid; his primary protectors,
the brothers Abu Ahmed al-Kuwaiti and Abrar al-Kuwaiti; and Abrar’s wife
Bushra. Assuming Bushra qualifies as a civilian, even though she was
helping to shelter the world’s most notorious terrorist, civilian deaths in the
raid amounted to 20 percent of the casualties. In other words, even a near-perfect special-ops raid produced only a slight improvement over the worst
estimates of those counting drone casualties. Many assaults are not that
clean.
In fact, ground combat almost always kills more civilians than drone strikes
do. Avery Plaw, a political scientist at the University of Massachusetts,
estimates that in Pakistani ground offensives against extremists in that
country’s tribal areas, 46 percent of those killed are civilians. Plaw says that
ratios of civilian deaths from conventional military conflicts over the past 20
years range from 33 percent to more than 80 percent. “A fair-minded
evaluation of the best data we have available suggests that the drone
program compares favorably with similar operations and contemporary
armed conflict more generally,” he told The New York Times.
When you consider the alternatives—even, and perhaps especially, if you are
deeply concerned with sparing civilians—you are led, as Obama was, to the
logic of the drone.
But don’t drone strikes violate the prohibition on assassination, Executive
Order 12333? That order, signed by Ronald Reagan in 1981, grew out of
revelations that the CIA had tried to kill Fidel Castro and other leftist-leaning political figures in the 1960s and ’70s. It was clearly aimed at halting
political assassinations; in fact, the original order, signed in 1976 by Gerald
Ford, refers specifically to such acts. Attempting to prevent acts of mass
murder by a dangerous international organization may stretch the legal
definition of armed conflict, but it is not the same as political assassination.
Besides, executive orders are not statutes; they can be superseded by
subsequent presidents. In the case of President Bush, after the attacks of
September 11, Congress specifically authorized the use of lethal operations
against al-Qaeda.
When Bush branded our effort against al-Qaeda “war,” he effectively
established legal protection for targeted killing. Targeted killing is a long-established practice in the context of war. According to international treaties,
soldiers can be killed simply for belonging to an enemy army—whether they
are actively engaged in an attack or only preparing for one, whether they are
commanders or office clerks. During World War II, the United States
discovered and shot down the plane carrying Admiral Isoroku Yamamoto, the
commander in chief of the Japanese navy, who had been the architect of the
attack on Pearl Harbor. The order to attack the plane was given by President
Franklin Roosevelt.
But beyond what international treaties call “armed conflict” is “law
enforcement,” and here, there are problems. The 1990 United Nations
Congress on the Prevention of Crime and the Treatment of Offenders laid out
basic principles for the use of force in law-enforcement operations. (The rules,
although nonbinding, elaborate on what is meant by Article 6 of the
International Covenant on Civil and Political Rights, to which the United
States has agreed.) The pertinent passage—written more than a decade
before weaponized drones—reads as follows:
Law enforcement officials shall not use firearms against persons except in
self-defense or defense of others against the imminent threat of death or
serious injury, to prevent the perpetration of a particularly serious crime
involving grave threat to life, to arrest a person presenting such a danger and
resisting their authority, or to prevent his or her escape, and only when less
extreme means are insufficient to achieve these objectives. In any event,
intentional lethal use of firearms may only be made when strictly
unavoidable to protect life.
Once the “war” on al-Qaeda ends, the justification for targeted killing will
become tenuous. Some experts on international law say it will become simply
illegal. Indeed, one basis for condemning the drone war has been that the
pursuit of al-Qaeda was never a real war in the first place.
Sir Christopher Greenwood, the British judge on the International Court of
Justice, has written: “In the language of international law there is no basis
for speaking of a war on al-Qaeda or any other terrorist group, for such a
group cannot be a belligerent, it is merely a band of criminals, and to treat it
as anything else risks distorting the law while giving that group a status
which to some implies a degree of legitimacy.” Greenwood rightly observes
that America’s declaration of war against al-Qaeda bolstered the group’s
status worldwide. But history will not quarrel with Bush’s decision, which
was unavoidable, given the national mood. Democracy reflects the will of the
people. Two American presidents from different parties and with vastly
different ideological outlooks have, with strong congressional support, fully
embraced the notion that America is at war. In his speech at the National
Defense University in May, Obama reaffirmed this approach. “America’s
actions are legal,” he said. “Under domestic law and international law, the
United States is at war with al-Qaeda, the Taliban, and their associated
forces.” He noted that during his presidency, he has briefed congressional
overseers about every drone strike. “Every strike,” he said.
Bin Laden himself certainly wasn’t confused about the matter; he held a
press conference in Afghanistan in 1998 to declare jihad on the United
States. Certainly the scale of al-Qaeda’s attacks went well beyond anything
previously defined as criminal.
But what are the boundaries of that war?
Different critics draw the lines in different places. Mary Ellen O’Connell, a
law professor at the University of Notre Dame, is a determined and eloquent
critic of drone strikes. She believes that while strikes in well-defined battle
spaces like Iraq and Afghanistan are justified, and can limit civilian deaths,
strikes in Pakistan, Yemen, Somalia, and other places amount to
“extrajudicial killing,” no matter who the targets are. Such killings are
outside the boundary of armed conflict, she says, and hence violate
international law.
Philip Alston, a former United Nations special rapporteur on extrajudicial,
summary, or arbitrary executions, concedes that al-Qaeda’s scope and
menace transcend criminality, but nevertheless faults the U.S. drone
program for lacking due process and transparency. He told Harper’s
magazine:
[International] laws do not prohibit an intelligence agency like the CIA from
carrying out targeted killings, provided it complies with the relevant
international rules. Those rules require, not surprisingly when it’s a matter
of being able to kill someone in a foreign country, that all such killings be
legally justified, that we know the justification, and that there are effective
mechanisms for investigation, prosecution, and punishment if laws are
violated. The CIA’s response to these obligations has been very revealing. On
the one hand, its spokespersons have confirmed the total secrecy and thus
unaccountability of the program by insisting that they can neither confirm
nor deny that it even exists. On the other hand, they have gone to great
lengths to issue unattributable assurances, widely quoted in the media, both
that there is extensive domestic accountability and that civilian casualties
have been minimal. In essence, it’s a ‘you can trust us’ response, from an
agency with a less than stellar track record in such matters.
President Obama has taken steps in recent months to address Alston’s
concerns. He has begun transferring authority for drone strikes from the CIA
to the Pentagon, which will open them up to greater congressional and public
scrutiny. He has sharply limited “signature strikes,” those based on patterns
of behavior rather than strict knowledge of who is being targeted. (Because
most signature strikes have been used to protect American troops in
Afghanistan, this category of drone attack is likely to further diminish once
those forces are withdrawn.) In his May speech, he came close to embracing
“the full Harold,” publicly outlining in general terms the targeting
constraints drafted by Koh. He also made clear that the war on al-Qaeda will
eventually end—though he stopped short of saying when. American combat
troops will be gone from Afghanistan by the end of next year, but the war
effort against “core al-Qaeda” will almost certainly continue at least until
Ayman al-Zawahiri, the fugitive Egyptian doctor who now presides over the
remnants of the organization, is captured or killed.
Then what?
“Outside of the context of armed conflict, the use of drones for targeted killing
is almost never likely to be legal,” Alston wrote in 2010. Mary Ellen
O’Connell agrees. “Outside of a combat zone or a battlefield, the use of
military force is not lawful,” she told me.
Yet this is where we seem to be headed. Obama has run his last presidential
campaign, and one senses that he might cherish a legacy of ending three
wars on his watch.
“Our commitment to constitutional principles has weathered every war, and
every war has come to an end,” he said in his May speech. “We must define
the nature and scope of this struggle, or else it will define us. We have to be
mindful of James Madison’s warning that ‘no nation could preserve its
freedom in the midst of continual warfare.’”
The changes outlined by the president do not mean we will suddenly stop
going after al-Qaeda. If the war on terror is declared over, and the 2001
Authorization for Use of Military Force (AUMF) is withdrawn, then some
other legal justification for targeting al-Qaeda terrorists with drones would
be necessary, and would likely be sought.
“We believe we have a domestic and international legal basis for our current
efforts,” Ben Rhodes, who is Obama’s deputy national-security adviser for
strategic communications, told me. “If you project into the future, there are
different scenarios, you know, so they are kind of hypothetical, but one is that
you might have a narrower AUMF that is a more targeted piece of legislation.
A hypothetical: the Taliban is part of the AUMF now, but we could find
ourselves not in hostilities with the Taliban after 2014.” In that case, the
military authority to attack Taliban targets, which account for many drone
strikes and most signature strikes, would be gone. Another scenario Rhodes
sketched out was one in which a local terrorist group “rose to the level where
we thought we needed to take direct action. You might have to go back to
Congress to get a separate authorization. If we need to get authority against
a new terrorist group that is emerging somewhere else in the world, we
should go back to Congress and get that authorization.”
You can’t know in advance “the circumstances of taking direct action,”
Rhodes said. “You may be acting to prevent an imminent attack on the
United States or you may be acting in response to an attack, each of which
carries its own legal basis. But you have to be accountable for whatever direct
action you are taking,” rather than relying on some blanket authority to
strike whomever and whenever the president chooses. “You would have to
specifically define, domestically and internationally, what the basis for your
action is in each instance—and by each instance, I don’t mean every strike,
per se, but rather the terrorist group or the country where you are acting.”
Seeking such authorization would help draw the debate over continued drone
strikes out of the shadows. Paradoxically, as the war on terror winds down,
and as the number of drone strikes falls, the controversy over them may rise.
V. Come Out With Your Hands Up!
Once the pursuit of al-Qaeda is defined as “law enforcement,” ground
assaults may be the only acceptable tactic under international law. A
criminal must be given the opportunity to surrender, and if he refuses, efforts
must be made to arrest him. Mary Ellen O’Connell believes the Abbottabad
raid was an example of how things should work.
“It came as close to what we are permitted to do under international law as
you can get,” she said. “John Brennan came out right after the killing and
said the SEALs were under orders to attempt to capture bin Laden, and if he
resisted or if their own lives were endangered, then they could use the force
that was necessary. They did not use a drone. They did not drop a bomb.
They did not fire a missile.”
Force in such operations is justified only if the suspect resists arrest—and
even then, his escape is preferable to harming innocent bystanders. These are
the rules that govern police, as opposed to warriors. Yet the enemies we face
will not change if the war on terror ends. The worst of them—the ones we
most need to stop—are determined suicidal killers and hardened fighters.
Since there is no such thing as global police, any force employed would likely
still come from, in most cases, American special-ops units. They are very good
at what they do—but under law-enforcement rules, a lot more people, both
soldiers and civilians, are likely to be killed.
It would be wise to consider how bloody such operations can be. When Obama
chose the riskiest available option for getting bin Laden in Abbottabad—a
special-ops raid—he did so not out of a desire to conform to international law
but because that option allowed the possibility of taking bin Laden alive and,
probably more important, because if bin Laden was killed in a ground
assault, his death could be proved. The raid went well. But what if the SEAL
raiding party had tripped Pakistan’s air defenses, or if it had been confronted
by police or army units on the ground? American troops and planes stood
ready in Afghanistan to respond if that happened. Such a clash would likely
have killed many Pakistanis and Americans, and left the countries at
loggerheads, if not literally at war.
There’s another example of a law-enforcement-style raid that conforms to the
model that O’Connell and other drone critics prefer: the October 1993 Delta
Force raid in Mogadishu, which I wrote about in the book Black Hawk Down.
The objective, which was achieved, was to swoop in and arrest Omar Salad
and Mohamed Hassan Awale, two top lieutenants of the outlaw clan leader
Mohammed Farrah Aidid. As the arrests were being made, the raiding party
of Delta Force operators and U.S. Army rangers came under heavy fire from
local supporters of the clan leader. Two Black Hawk helicopters were shot
down and crashed into the city. We were not officially at war with Somalia,
but the ensuing firefight left 18 Americans dead and killed an estimated 500
to 1,000 Somalis—a number comparable to the total civilian deaths from all
drone strikes in Pakistan from 2004 through the first half of 2013, according
to the Bureau of Investigative Journalism’s estimates.
The Somalia example is an extreme one. But the battle that erupted in
Mogadishu strikes me as a fair reminder of what can happen to even a very
skillful raiding party. Few of the terrorists we target will go quietly. Knowing
they are targets, they will surely seek out terrain hostile to an American or
UN force. Choosing police action over drone strikes may feel like taking the
moral high ground. But if a raid is likely to provoke a firefight, then choosing
a drone shot not only might pass legal muster (UN rules allow lethal force
“when strictly unavoidable in order to protect life”) but also might be the
more moral choice.
The White House knows this, but it is unlikely to announce a formal end to
the war against al-Qaeda anytime soon. Obama’s evolving model for
counterterrorism will surely include both raids and drone strikes—and the
legality of using such strikes outside the context of war remains murky.
Ben Rhodes and others on Obama’s national-security team have been
thinking hard about these questions. Rhodes told me that “the threat picture”
the administration is mainly concerned with has increasingly shifted from
global terrorism, with al-Qaeda at its center, to “more traditional terrorism,
which is localized groups with their own agendas.” Such groups “may be
Islamic extremists, but they are not necessarily signing on to global jihad. A
local agenda may raise the threat to embassies and diplomatic facilities and
things like [the BP facility that was attacked in Algeria early this year], but
it diminishes the likelihood of a complex 9/11-style attack on the homeland.”
If terrorism becomes more localized, Rhodes continued, “we have to have a
legal basis and a counterterrorism policy that fits that model, rather than
this massive post-9/11 edifice that we built.” This means, he said, that post-2014 counterterrorism will “take a more traditional form, with a law-enforcement lead. But this will be amplified by a U.S. capability to take
direct action as necessary in a very narrowly defined set of circumstances.”
What U.S. policy will be aiming for, Rhodes said, is “traditional [law-enforcement-style] counterterrorism plus a limited deployment of our drone
and special-forces capabilities when it is absolutely necessary.”
To accommodate the long-term need for drone strikes, Obama is weighing a
formal process for external review of the target list. This might mean
appointing a military-justice panel, or a civilian review court modeled on the
Foreign Intelligence Surveillance Court, which oversees requests to monitor
suspected foreign spies and terrorists in the United States. But this raises
thorny constitutional questions about the separation of powers—and
presidents are reluctant to concede their authority to make the final call.
How should we feel about drones? Like any wartime innovation, going back
to the slingshot, drones can be used badly or well. They are remarkable tools,
an exceedingly clever combination of existing technologies that has vastly
improved our ability to observe and to fight. They represent how America has
responded to the challenge of organized, high-level, stateless terrorism—not
timidly, as bin Laden famously predicted, but with courage, tenacity, and
ruthless ingenuity. Improving technologies are making drones capable not
just of broader and more persistent surveillance, but of greater strike
precision. Mary Ellen O’Connell says, half jokingly, that there is a “sunset”
on her objection to them, because drones may eventually offer more options.
She said she can imagine one capable of delivering a warning—“Come out
with your hands up!”—and then landing to make an arrest using handcuffs.
Obama’s efforts to mitigate the use of drones have already made a big
difference in reducing the number of strikes—though critics like O’Connell
say the reduction has come only grudgingly, in response to “a rising level of
worldwide condemnation.” Still, Obama certainly deserves credit: it is good
that drones are being used more judiciously. I told Ben Rhodes that if the
president succeeds in establishing clear and careful guidelines for their use,
he will make a lot of people happy, but a lot of other people mad.
“Well, no,” Rhodes said. “It’s worse than that. We will make a lot of people
mad and we will not quite make people happy.”
No American president will ever pay a political price for choosing national
security over world opinion, but the only right way to proceed is to make
targeting decisions and strike outcomes fully public, even if after the fact. In
the long run, careful adherence to the law matters more than eliminating
another bad actor. Greater prudence and transparency are not just morally
and legally essential, they are in our long-term interest, because the strikes
themselves feed the anti-drone narrative, and inspire the kind of random,
small-scale terror attacks that are bin Laden’s despicable legacy.
In our struggle against terrorist networks like al-Qaeda, the distinction
between armed conflict and law enforcement matters a great deal. Terrorism
embraces lawlessness. It seeks to disrupt. It targets civilians deliberately. So
why restrain our response? Why subject ourselves to the rule of law? Because
abiding by the law is the point—especially with a weapon like the drone. No
act is more final than killing. Drones distill war to its essence. Abiding
carefully by the law—man’s law, not God’s—making judgments carefully,
making them transparent and subject to review, is the only way to invest
them with moral authority, and the only way to clearly define the terrorist as
an enemy of civilization.
How do you read "-3"?
By Dr. Keith Devlin
How do you say "-3": "negative three" or "minus three"?
It sounds like a simple enough question. But a recent group discussion on LinkedIn
generated over 60 contributions when I last checked. People seem to have very clear
preferences as to what is "right." Unfortunately, those preferences differ.
In expressions such as "5 - 3" there is general agreement: You say "Five minus three." In this
case, the symbol "-" denotes the binary arithmetic operation of subtraction, expressed as
"minus", and the expression "5 - 3" means "subtract 3 from 5," or "5 minus 3."
It is when the expression "- 3" appears on its own that the fun begins. Traditionalists (the
kind of people who rely on the Chicago Manual of Style and insist I should have put the colon
and question mark inside the quotes in my opening question) will say it should be read as
"negative three." (Your last math teacher probably said that too.) But in everyday situations,
most people say "minus three." For example, I doubt you have ever heard the TV weather
person say that the temperature will fall to "negative three (degrees)." No, she or he will
have said "minus three."
My sense (and it is nothing more than that) is that almost all professional mathematicians
will probably come down on the side of "minus three." The reason is that those of us in the
math biz put the minus sign in front of numbers all the time, and those numbers may
themselves be positive or negative. For example, we frequently find ourselves referring to
numbers such as "- N" where N might turn out to be -3. Since the result in such a case is in
fact a positive number, it seems totally wrong (to us) to refer to it using "negative," which we
take as indicating the sign of a number. In other words, we view "negative" as an adjective,
which tells us that the number is less than 0.
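To spell out that case as a worked equation, using the same value N = -3 mentioned above:

\[
N = -3 \quad\Longrightarrow\quad -N = -(-3) = 3 > 0 .
\]

The result is positive, so reading "-N" aloud as "negative N" would wrongly suggest a number less than zero, whereas "minus N" stays neutral about the sign of the outcome.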
We generally view "-", on the other hand, not as a symbolic adjective but as an operation.
Usually it is a binary operation, but sometimes we think of it as a unary one. In such cases, it
simply changes the sign (or reflects it in the origin on the number line, if you want to use
geometric language). In other words, we do not view "-N" as indicating that the number is
positive or negative, rather that it has the opposite sign to N. Quite simply, "minus" can
function as a sign-changing, unary operator.
You might claim that we could use "negative" similarly, so that "negative negative three"
means "three," but to me at least (and I know I am not alone) it sounds bizarre (in a way that
"minus minus three" does not).
If the symbol "-" were only ever put in front of positive numbers, it would be fine to read it as
"negative". In fact, it could be advantageous to do so, as it would tell us the sign of the
number. Since many people outside of mathematics, science, and engineering may in fact
never encounter a double-negative (except to wonder how it works out to be positive), that
might help explain why the "negative three" camp is well occupied.
But for anyone dealing with numbers in a scientific context, for whom "-" is frequently
applied to a negative (sic) quantity or to one whose sign is not known, it makes more sense to
use "minus" as the default reading on all occasions than to say "negative" on some occasions
and "minus" on others.
That, I think, makes the case for professional scientists and mathematicians always using
"minus three." But why do so many non scientists use the same terminology? My guess is
that they either pick it up from their science teachers at school (but perhaps not their math
teachers), or maybe from the TV weather person. TV weather forecasters may well not be
trained scientists, but they do have to read scientific reports from professional
meteorologists, and that could account for the domination of "minus."
So, if you want a ruling from a qualified mathematician, I'll give one: Always read "-N" as
"minus N." Feel free to use my name to try to settle any dispute. But if you ask me to get
personally involved, I'll give you an answer right out of Top Gun: "That's a negative."
Murder by Craigslist
A serial killer finds a newly vulnerable class of victims: white, working-class men.
By Hanna Rosin
Wanted: Caretaker For Farm. Simply watch over a 688 acre patch of hilly farmland and
feed a few cows, you get 300 a week and a nice 2 bedroom trailer, someone older and
single preferred but will consider all, relocation a must, you must have a clean record and
be trustworthy—this is a permanent position, the farm is used mainly as a hunting
preserve, is overrun with game, has a stocked 3 acre pond, but some beef cattle will be
kept, nearest neighbor is a mile away, the place is secluded and beautiful, it will be a real
get away for the right person, job of a lifetime—if you are ready to relocate please
contact asap, position will not stay open.
Scott Davis had answered the job ad on Craigslist on October 9, 2011, and now, four
weeks later to the day, he was watching the future it had promised glide past the car
window: acre after acre of Ohio farmland dotted with cattle and horses, each patch
framed by rolling hills and anchored by a house and a barn—sometimes old and worn,
but never decrepit. Nothing a little carpentry couldn’t fix.
Davis rode in the backseat of the white Buick LeSabre; in the front sat his new employer,
a man he knew only as Jack, and a boy Jack had introduced as his nephew, Brogan. The
kid, who was driving the car, was only in high school but was already a giant—at least as
tall as his uncle, who was plenty tall. Jack was a stocky, middle-aged man; Davis noticed
that he’d missed a couple of spots shaving and had a tattoo on his left arm. He was chatty,
telling Davis about his ex-wife, his favorite breakfast foods, and his church.
Davis, 48, had left his girlfriend behind in South Carolina, given away the accounts for
his landscaping business, and put most of his equipment in storage. He’d packed his other
belongings—clothes, tools, stereo equipment, his Harley-Davidson—into a trailer,
hitched it to his truck, and driven to southeastern Ohio. He’d told everyone that he was
moving in part to help take care of his mom, who lived outside Akron and whose house
was falling apart. Moving back home at his age might seem like moving backward in life.
But the caretaker job he’d stumbled across online made it seem more like he’d be getting
paid to live free and easy for a while—a no-rent trailer plus $300 a week, in exchange for
just watching over a farm with a few head of cattle outside the town of Cambridge. Jack
had reminded him in an e-mail to bring his Harley because there were “plenty of
beautiful rural roads to putt-putt in.”
Jack and Brogan had met Davis for breakfast at the Shoney’s in Marietta, where Jack had
quizzed his new hire about what he’d brought with him in the trailer. Davis boasted that it
was “full from top to bottom.” After breakfast, Davis followed Jack and Brogan to the
Food Center Emporium in the small town of Caldwell, where he left his truck and trailer
in the parking lot, to be picked up later. Jack told Davis that the small road leading to the
farm had split, and they’d have to repair it before bringing the truck up. They’d been
driving for about 15 minutes, the paved road giving way to gravel, and then the gravel to
dirt, while Davis watched the signal-strength bars on his cellphone disappear.
On a densely wooded, hilly stretch, Jack told his nephew to pull over. “Drop us off where
we got that deer at last time,” he said, explaining to Davis that he’d left some equipment
down the hill by the creek and they’d need to retrieve it to repair the road. Davis got out
to help, stuffing his cigarettes and a can of Pepsi into the pockets of his jean jacket. He
followed Jack down the hill, but when they reached a patch of wet grass by the creek,
Jack seemed to have lost his way and suggested they head back up to the road. Davis
turned around and started walking, with Jack following behind him now.
Davis heard a click, and the word fuck. Spinning around, he saw Jack pointing a gun at
his head. Where we got that deer at last time. In a flash, it was clear to Davis: he was the
next deer.
Davis instinctively threw up his arms to shield his face. The pistol didn’t jam the second
time. As Davis heard the crack of the gunshot, he felt his right elbow shatter. He turned
and started to run, stumbling and falling over the uneven ground. The shots kept coming
as Davis ran deeper into the woods, but none of them hit home. He ran and ran until he
heard no more shots or footsteps behind him. He came to the road and crossed it, worried
that if he stayed in the open he’d be spotted by his would-be killer. He was losing a lot of
blood by now, but he hid in the woods for several hours, until the sun was low, before he
made his way back to the road and started walking.
Jeff Schockling was sitting in his mother’s living room, watching Jeopardy, when he
heard the doorbell. That alone was strange, as he’d later explain on the witness stand,
because out there in the boondocks, visitors generally just walked in the front door.
Besides, he hadn’t heard a car drive up. Schockling sent his 9-year-old nephew to see
who it was, he testified, and the kid came back yelling, “There’s a guy at the door! He’s
been shot and he’s bleeding right through!” Schockling assumed his nephew was playing
a prank, but when he went to the door, there was the stranger, holding his right arm
across his body, his sleeve and pant leg soaked with blood. The guy was pale and fidgety
and wouldn’t sit down at the picnic table outside. But he asked Schockling to call 911.
Sheriff Stephen Hannum of Noble County arrived after about 15 minutes. He would later
describe Davis as remarkably coherent for a man who had been shot and was bleeding
heavily. But what Davis was saying made no sense. He claimed that he’d come to the
area for a job watching over a 688-acre cattle ranch, and that the man who’d offered him
the job had shot him. But Hannum didn’t know of any 688-acre cattle ranches in Noble
County—nothing even close. Most of the large tracts of land had been bought up by
mining companies. Davis kept going on about a Harley-Davidson, and how the guy who
shot him was probably going to steal it. The sheriff sized Davis up—middle-aged white
guy, puffy eyes, long hair, jean jacket, babbling about a Harley—and figured he was
involved in some kind of dope deal gone bad. Hannum made a few calls to his local
informants, but none of them had heard anything. Then he located the truck and trailer in
the Food Center Emporium parking lot, and they were just as Davis had described them.
“It was beginning to look,” Hannum later recalled, “like Mr. Davis truly was a victim
rather than whatever I thought he was at the beginning.”
Davis wasn’t the only person to answer the Craigslist ad. More than 100 people applied
for the caretaker job—a fact that Jack was careful to cite in his e-mails back to the
applicants. He wanted to make sure that they knew the position was highly sought-after.
Jack had a specific type of candidate in mind: a middle-aged man who had never been
married or was recently divorced, and who had no strong family connections. Someone
who had a life he could easily walk away from. “If picked I will need you to start
quickly,” he would write in his e-mails.
Jack painstakingly designed the ad to conjure a very particular male fantasy: the cowboy
or rancher, out in the open country, herding cattle, mending fences, hunting game—living
a dream that could transform a post-recession drifter into a timeless American icon. From
the many discarded drafts of the ad that investigators later found, it was clear that Jack
was searching for just the right pitch to catch a certain kind of man’s eye. He tinkered
with details—the number of acres on the property, the idea of a yearly bonus and paid
utilities—before settling on his final language: “hilly,” “secluded,” “job of a lifetime.” If
a woman applied for the job, Jack wouldn’t bother responding. If a man applied, he
would ask for the critical information right off the bat: How old are you? Do you have a
criminal record? Are you married?
Jack seemed drawn to applicants who were less formal in their e-mail replies, those who
betrayed excitement, and with it, vulnerability. “I was raised on a farm as a boy and have
raised some of my own cattle and horses as well,” wrote one. “I’m still in good shape and
not afraid of hard work! I really hope you can give me a chance. If for some reason I
wouldn’t work out for you no hard feelings at all. I would stick with you until you found
help. Thank you very much, George.”
If a candidate lived near Akron, Jack might interview him in person at a local Waffle
House or at a mall food court. He’d start by handing the man a preemployment
questionnaire, which stated that he was an equal-opportunity employer. Jack and the
applicant would make small talk about ex-wives or tattoos, and Jack, who fancied himself
a bit of a street preacher, would describe the ministry he’d founded. He’d ask about
qualifications—any carpentry experience? ever work with livestock?—and provide more
details about the farm. Jack explained that his uncle owned the place, and he had six
brothers and sisters with a lot of kids and grandkids running around, especially on
holiday weekends and during hunting season. The picture Jack painted was of a
boisterous extended family living an idyllic rural life—pretty much the opposite of the
lonely bachelor lives of the men he was interviewing.
If the interview went well, Jack might tell the applicant that he was a finalist for the job.
But if the applicant gave any sign that he did not meet one of Jack’s criteria, the meeting
would end abruptly. For one candidate, everything seemed on track until he mentioned
that he was about to get married. Jack immediately stood up and thanked him for his
time. George, the man who’d written the e-mail about being raised on a farm, told Jack
that he’d once been a security guard and was an expert in martial arts. He figured this
would be a plus, given that he’d have to guard all that property when no one else was
around. But the mood of the interview immediately changed for the worse. Jack took the
application out of George’s hands before he even finished filling it out and said he’d call
him in a couple of days. If George didn’t hear anything, he should assume that “someone
else got it.”
David Pauley was the first applicant who met Jack’s exacting criteria. He was 51 years
old, divorced, and living with his older brother, Richard, in his spare bedroom in Norfolk,
Virginia. For nearly two decades, Pauley had worked at Randolph-Bundy, a wholesale
distributor of building materials, managing the warehouse and driving a truck. He
married his high-school sweetheart, Susan, and adopted her son, Wade, from an earlier
marriage. For most of his life, Pauley was a man of routine, his relatives said. He ate his
cereal, took a shower, and went to work at precisely the same times every day. “He was
the stable influence in my life,” says Wade. “I grew up thinking everyone had a nine-to-five.”
But Pauley grew increasingly frustrated with his position at Randolph-Bundy, and finally
around 2003 he quit. He bounced around other jobs but could never find anything steady.
He and Wade often had disagreements, and in 2009 he and Susan got a divorce. Now he
found himself sitting on his brother’s easy chair, using Richard’s laptop to look for jobs.
Mostly he’d find temp stuff, jobs that would last only a few weeks. Sometimes he had to
borrow money just to buy toothpaste. He got along fine with Richard and his wife, Judy,
but their second bedroom—with its seafoam-green walls, frilly lamp shades, and ornate
dresser—was hardly a place where he could put up his poster of Heidi Klum in a bikini or
start enjoying his post-divorce freedom.
Pauley was cruising online job opportunities when he came across the Craigslist ad in
October 2011. Usually Pauley looked for jobs only around Norfolk. But his best friend
since high school, Chris Maul, had moved to Ohio a couple years earlier and was doing
well. He and Maul talked dozens of times a day on the Nextel walkie-talkies they’d
bought specifically for that purpose. If Maul, who was also divorced, could pick up and
start a new life, why couldn’t Pauley?
And the Craigslist job sounded perfect. Three hundred dollars a week and a rent-free
place to live would solve all Pauley’s problems at once. On top of that, his brother, an ex-Navy man, was always pestering Pauley to cut his long hair before job interviews. With a
gig like this, who would care whether he had long hair—the cattle? Pauley sat down and
wrote an e-mail to Jack.
Well about me, I’m fifty one years young, single male, I love the out doors, I currently
live in virginia have visited ohio and i really love the state. Being out there by myself
would not bother me as i like to be alone. I own my own pick up truck so hauling would
not be a problem. I can fix most anything have my own carpentry tools.
If chosen i will work hard to take care of your place and treat it like my own.
I also have a friend in Rocky River, Ohio. Thank you, David.
A few days later, Pauley got an e-mail back from Jack saying that he had narrowed his
list down to three candidates, “and you are one of the 3.” Jack asked his usual
questions—was Pauley married? had he ever been arrested for a felony?—and told him
that if he was chosen, he’d have to start immediately.
Richard remembers his younger brother being energized in a way he hadn’t seen in
months. Pauley called Jack several times to see whether there was anything else he could
do to help him decide. Jack promised that he’d call by 2 p.m. on a Friday, and Pauley
waited by the phone. When 2 o’clock came and went, he told his brother, “Well, I guess
the other person got chosen above me.”
But early that evening, the phone rang. When Pauley got on the line, Richard recalls, his
whole face lit up.
“I got it! I got the job!” he yelled as soon as he hung up. He immediately called his friend
Maul on the walkie-talkie and started talking a mile a minute. He swore that this was the
best thing that had ever happened to him and said he couldn’t wait to pack up and go. To
Maul’s surprise, he found himself in tears. For the past few years he’d been worried
about Pauley, whom he’d always called his “brother with a different last name.” Maul
remembers, “It was like, maybe this is the turning point, and things are finally going the
right way.” They made a promise to each other that on Pauley’s first weekend in Ohio
after settling in, Maul would bring down his hot rod and they’d drive around on the
empty country roads.
Next Pauley called his twin sister, Deb, who lives in Maine. She told him that she hated
the thought of him sitting alone on some farm for Christmas and made him promise that
he’d come visit her for the holidays. He told her that his new boss was a preacher and
said he felt like the Lord was finally pointing him toward the place where he might “find
peace.”
That week, Pauley went to the men’s Bible-study group he’d been attending since he’d
moved into Richard’s house. For weeks he’d been praying—never to win the lottery or
get a girlfriend, always for steady work. Everyone there agreed that God had finally heard
his prayers.
The church gave Pauley $300 from its “helping hands” fund so that he could rent a U-Haul trailer. He packed up all his stuff—his model trains, his books and DVDs, his Jeff
Gordon T-shirts and posters, his Christmas lights, and the small box containing the ashes
of his old cat, Maxwell Edison—and hit the road.
Pauley arrived at the Red Roof Inn in Parkersburg, West Virginia, on the night of
Saturday, October 22, 2011. It was not far from Marietta, Ohio, where he was supposed
to meet his new employer at a Bob Evans for breakfast the next morning. He called his
sister, who told him that she loved him and said to call back the next day.
Then, just before going to bed, he called up Maul, who told him, “Good luck. As soon as
you’re done talking to them tomorrow, let me know. Give me an exact location so I can
come down Saturday and we can hang out.”
The next day came and went with no call from Pauley. Maul tried him on the walkie-talkie, but there was no response. He then called Richard and got the number for Pauley’s
new employer, Jack, whom he reached on his cellphone. Yes, everything was all right,
Jack told Maul. He’d just left Pauley with a list of chores. Yes, he would pass on the
message when he saw him the next day. But a few more days went by without a call, so
Maul dialed Jack again. This time Jack said that when he showed up at the farm that day,
Pauley had packed all his things in a truck and said he was leaving. Apparently he’d met
some guy in town who was headed to Pennsylvania to work on a drilling rig, and he’d
decided to follow him there.
There was no way, Maul thought to himself, that Pauley would take off for Pennsylvania
without telling him. The two men had been best friends since high school, when they’d
bonded over their mutual distaste for sports and love of cars. Over the years they’d
moved to different cities, gotten married, gotten divorced, but they’d stayed “constantly
in touch,” Maul said.
They kept their walkie-talkies on their bedside tables and called each other before they
even got up to brush their teeth in the morning. They talked, by Maul’s estimate, about 50
times a day. “Most people couldn’t figure out how we had so much to talk about,” Maul
said. “But there’d always end up being something.” But after Pauley reached Ohio?
Nothing.
Early in November, about two weeks after he’d last spoken to Pauley, Maul called his
friend’s twin sister. Deb hadn’t heard from him either, and was also worried—a habit
she’d honed over a lifetime. When she and her brother were 14, their mother got
emphysema. Since their father had left the family and their older siblings were already
out of the house, Deb quit school and, as she put it, “basically became David’s mother.”
Years later, she moved to Maine with her second husband and Pauley stayed in Norfolk,
but the twin bond remained strong.
By the time she received the concerned phone call from Maul, Deb had already spent
several days sitting with her laptop at the kitchen table, in her pajamas, looking for clues
to explain why she hadn’t heard a word from her brother. With a red pen and a sheet of
legal paper, she’d made a list of places to call—the motel in Parkersburg, the U-Haul
rental place—but she’d learned nothing from any of them. It wasn’t until Friday night,
November 11, nearly three weeks after Pauley had left for Ohio, that she remembered
something else: Cambridge, the town where he had said the farm was located. She typed
the name into Google and found the local paper, The Daily Jeffersonian. She scrolled
through the pages until she landed on this headline, dated November 8: “Man Says He
Was Lured Here for Work, Then Shot.” There was no mention of the man’s name, but
there was one detail that sounded familiar: he said he’d been hired to work on a 688-acre
ranch. The article cited the Noble County sheriff, Stephen Hannum. Deb called his office
right away.
After picking up Scott Davis five days earlier, Hannum and his team had been following
up on his strange story, but not all that urgently. They had Davis’s explanation about the
Craigslist ad, and they’d located security-camera footage from his breakfast meeting with
his “employers.” But Deb’s phone call lit a fire under Hannum’s investigation. She told
Jason Mackie, a detective in the sheriff’s office, that Pauley had talked with his friend
Maul 50 times a day and then suddenly stopped. Although no one said it explicitly at the
time, a sudden drop-off like that meant a missing person, and that in turn meant there
might be a body.
The next day, a Saturday, the sheriff’s office called an FBI cyber-crimes specialist to help
them get information about who had written the Craigslist ad. They also sent a crew with
cadaver dogs back to the woods where Davis had been shot. One FBI agent would later
recall the “torrential downpour” that day and the sound of coyotes howling. A few hours
before dark, the investigators found a patch of disturbed soil overlaid with tree branches.
They began digging with their hands until blood seeped up from the wet earth and a socked foot appeared. The body they discovered was facedown, and one of
the items they removed from it was a corded black-leather bracelet with a silver clasp.
Mackie telephoned Deb and described the bracelet. Yes, it was her brother’s, she told
them. The investigators also found a second grave, this one empty. They later learned it
had been meant for Davis.
Now the investigators knew they were looking for a murderer. By early the next week,
they had identified the man in the breakfast-meeting footage as a local named Richard
Beasley. Additionally, the cyber-crimes specialist had received enough information from
Craigslist to trace the IP address of the ad’s originating computer to a small house in
Akron. When the investigators arrived at the house, its occupant, Joe Bais, said he’d
never written any ads on Craigslist and he didn’t know anyone named Richard Beasley or
Jack. But when they showed him a picture, he recognized the man who’d called himself
Jack. It was someone he knew as Ralph Geiger, who until recently had rented a room
from Bais for $100 a week. “Real nice guy,” Bais would later recall on the witness stand.
“He didn’t cuss, didn’t smoke, didn’t drink … First Sunday he was there, went to
church.” As it happened, Geiger had just left him a note with his new cellphone number.
The landlord called Geiger and kept him on the line as investigators traced the call. On
November 16, an FBI SWAT team arrested the man outside another Akron house, where
he had been renting a room after leaving Bais’s place. The suspect’s name was, in fact,
Richard Beasley. Although investigators didn’t know it yet, Ralph Geiger was the name
of his first victim.
Tracking down the teenager who had been with Beasley/Jack when he drove Scott Davis
out into the woods proved easier. Just as Jack had said, his name was Brogan, Brogan
Rafferty to be exact, and he was a junior at Stow-Munroe Falls High School. A detective
and an FBI agent drove to the school and interviewed Rafferty in the principal’s office,
while another set of investigators searched his house. Rafferty later told his mother that
before he left school that day, he had found a girl he liked and kissed her, even though
her boyfriend was nearby. He had been worried that he’d never see her, or anyone else
from his high school, again. He was right to worry: that evening, police arrived with a
warrant, and he was taken into custody.
Richard Beasley, aka Jack, was born in 1959 and raised in Akron primarily by his mother,
who worked as a secretary at a local high school, and his stepfather. He was briefly
married and had a daughter, Tonya, who was about Rafferty’s age. Over the years, he
worked as a machinist, but his job record was interrupted by spells in jail. He served from
1985 to 1990 in a Texas prison on burglary charges and, starting in 1996, another seven
years in a federal prison for a firearms offense. When he went on trial for the 2011
Craigslist murders, the photo favored by newspapers made him look deranged, with wild
eyebrows and hair and a crumpled mouth. But during the trial, with his white hair
combed and his beard trimmed, he looked almost like Santa Claus, especially when he
smiled.
In the mid-2000s, a dump truck hit Beasley’s car and he suffered head, chest, and spinal
injuries. He had recently returned to Akron from federal prison, where, he told everyone,
he’d found God, and he’d begun spending a lot of time at a local megachurch called the
Chapel. After the accident, he started taking opiates for back and neck pain and stopped
working steadily.
But Brogan Rafferty’s father, Michael, who knew Beasley from the local motorcycle
circuit, told me that even before the car wreck, Beasley had been “lazy.” He was known
as someone who always had “a little bit of an angle going,” like a “scam artist,” Michael
Rafferty said. People in their motorcycle clubs knew Beasley had a criminal record, but
to Michael Rafferty he seemed harmless, like he was “all talk.” Michael Rafferty said
he’d never once seen Beasley lose his temper in the 20 years he knew him.
Beasley didn’t drink or smoke much, and he spent a lot of his free time at the Chapel,
where he went to Bible study and worked in a soup kitchen. So when, at age 8, Brogan
Rafferty said he wanted to start going to church on Sundays, his dad said it was okay for
him to go with Beasley. It was only church, after all. And Michael Rafferty, a single
parent who was working long shifts at the time, hated waking up early on Sundays
anyway.
For the next eight years, Beasley was a regular presence in the Rafferty house on
Sundays, coming by early to get his young charge, who’d be waiting in a slightly
rumpled suit. Sometimes when he took Rafferty to church, Beasley would bring along his
daughter, Tonya, or Rafferty’s half-sister Rayna, who was three years younger than
Rafferty and shared the same mother but, like Rafferty, lived full-time with her own
father. (Rafferty’s mother, Yvette, was a crack addict who didn’t have custody of her four
children and was rarely around when they were young.) Beasley was a mentor to Rayna
and her brother, Rayna recalls. After Bible study, he’d sneak them leftover donuts or take
them to McDonald’s and talk to them about the importance of school or the danger of
drugs. “The Bible is the key to peace of mind, and a road map to salvation,” he wrote in
the Bible he gave Brogan.
Around 2009, Beasley founded what he told friends was a halfway house to help reform
addicts, runaways, and prostitutes. Beasley would cruise the streets of Akron at night,
picking up strays and bringing them back to the house. If they were in trouble with the
law, he would vouch for them in court, saying they had turned their lives over to Christ.
A few times, Rafferty asked Beasley whether they could go out and look for his mother,
Yvette, who Rafferty always worried was in trouble.
But there was another side to Beasley—and to his halfway house. Amy Saller, who later
described herself on the witness stand at Rafferty’s trial as a former crack addict and
prostitute, lived at the house on and off for more than two years from 2009 to 2011.
Beasley had picked her up one night, and she came to stay with him because he told her
his mission was to “save all the girls that are on the streets,” she testified. “I pictured him
as a savior, somebody that was trying to help me.” There were four or five other
prostitutes in the house, Saller recalled, and Beasley got them all cellphones. Soon,
instead of being their savior, he became their pimp. He began advertising their services
online and driving them to meet johns. Saller said that Beasley would “do anything in his
power” to keep the girls at the house, including buying them drugs. Saller said she never
saw Beasley get violent, although she added that she was nonetheless afraid of him.
In February 2011, Beasley was arrested in Ohio on a variety of drug-related charges.
While he was in jail, investigators were building a prostitution case against him. He was
released on bond in mid-July. But after he failed to check in with authorities in Texas,
where he was still on parole for his earlier crimes, the state issued a warrant for his arrest,
and he was deemed on the run from the law.
Beasley wanted to disappear. The key, he realized, would be to assume a new identity,
and it wasn’t long before he came up with an idea. Whereas once he had preyed on
prostitutes, now he would target a member of a new class of vulnerable citizens drifting
at the margins of society: unemployed, middle-aged white men.
This will be the fourth time we’ve talked to you. And each time we get a little bit more.
But tonight it all needs to come out, 100 percent. All right?
One week after arresting Brogan Rafferty, investigators made a deal with their 16-year-old suspect. If he agreed to testify against Beasley, he would be charged only with
complicity to murder and attempted murder, respectively, in the cases of David Pauley
and Scott Davis. He would not be charged with two other homicides that had by now
been uncovered. Later, Rafferty would back out of the deal, but the plea-deal interview
was recorded and the judge allowed it to be played at Rafferty’s trial.
The story Rafferty told began in the first week of August, when Beasley told Rafferty that
he was on the run from the law. He was determined not to go back to jail, and he
suggested to Rafferty that “he needed [his] help to survive.”
The first thing Beasley wanted was a new identity, and he began hanging around a local
homeless shelter searching for someone who looked like him. He had by now come up
with the perfect lure for a male victim in post-recession America: he would present
himself as a beneficent but exacting employer, one with the power to alter a man’s
fortunes by granting him the “job of a lifetime” as the caretaker of a sprawling farm.
It wasn’t long before Beasley met a man named Ralph Geiger, who for many years had
run a thriving maintenance business, but for whom jobs had gradually dried up. Geiger,
56, was staying at a shelter and looking for work, and Beasley told him about the
caretaker job he’d invented. Geiger had lived on a farm when he was younger, and he
leapt at the opportunity. Rafferty remembers Beasley quizzing Geiger about his size and
appearance: How much do you weigh? You look a lot like me, except your hair is a little
bit darker.
Whether Rafferty knew that Beasley intended to kill Geiger would later become a key
point in the teenager’s trial, and he told different versions of his story at different times.
In his plea-deal confession, Rafferty told the investigators that Beasley “said that he
needed a new identity. And that this guy looked similar to him. And he said that he
needed to somehow murder him.” Later, though, Rafferty would tell the jury that he’d
had no idea what was coming. The first time he realized that Beasley was anything other
than a “very nice man,” he claimed, was on August 9, when they drove Geiger to the
same wooded spot where they would later take David Pauley and Scott Davis. After they
got out of the car, Beasley raised a pistol and shot Geiger in the back of the head. “It was
as if somehow I immediately slipped into a dream or something,” Rafferty told the jury.
“Like I had ice in my veins.” From then on, Rafferty said, he lived in a state of fear and
panic, terrified that Beasley would kill his mother or half-sister Rayna if he told anyone
what had happened, or that maybe on their next run Beasley would kill him.
“He was just scared and he didn’t see a way out,” Rafferty’s father, Michael, told me.
“Heroes aren’t born at 16.”
Rafferty didn’t tell anyone about Geiger’s murder, but he did describe it in a poem dated
August 16, 2011, that was later found on his hard drive. It was titled “Midnight Shift”:
We took him out to the woods on a
humid summer’s night.
I walked in front of them.
They were going back to the car.
I did’nt turn around.
The loud crack echoed and I did’nt
hear the thud.
The two of us went back to the car
for the shovels.
He was still there when we returned.
He threw the clothes in a garbage
bag along with the personal items.
I dug the hole.
It reached my waist when I was in
it, maybe four feet wide.
We put him in with difficulty,
they call them stiffs for a reason.
We showered him with lime like a
Satanic baptism
it was like we were excommunicating
him from the world
I thought there would be extra dirt,
he was’nt a small man.
There wasint. I don’t know how.
We drove out of there discarding
evidence as we went
felt terrible until I threw up
in the gas station bathroom where
I was supposed to throw away the bullets and shell.
I emptied myself of my guilt, with
my dinner, but not for long.
When I got home,’ took a shower hotter than hell itsself.
prayed like hell that night.
Rafferty grew moody that fall, according to his parents and friends, but they figured it
was just hormones or girl trouble. He later told his mom that after homecoming, while the
other kids were having fun, all he could think about was crashing the Buick his dad had
bought him, so that he could join Gram Rita, his beloved grandmother who’d died a few
years earlier. But he didn’t wreck the car. He just stayed in his room and waited for
Beasley to call.
Beasley, meanwhile, was constructing a life as Ralph Geiger. He dyed his hair brown and
found a room to rent. He went to a doctor to get prescription painkillers for the injuries
he’d sustained in his car accident. In September, he landed a job as a quality inspector at
a company that made liftgates for trucks. But it didn’t last long. Beasley’s back still hurt,
and he became worried that parole officers would somehow catch on to him. Still, he
couldn’t survive without a steady income. Perhaps that’s when the idea came to him. The
Geiger killing had gone so smoothly that he could turn it into a career of sorts, preying on
other men who’d fallen out of the economy.
Instead of trolling the shelters, as he’d done to find Geiger, Beasley came up with the
strategy of placing an ad on Craigslist. After all, he didn’t want his victims to be
completely down and out. He needed men on the margins, yes, but not so marginal that
they didn’t have some possessions worth killing for: a truck or a TV or a computer or
even a motorcycle.
On Sunday, October 23, as David Pauley was driving his U-Haul full of stuff to the
breakfast meeting with his new employer, Rafferty woke up early. He fed his cats, tidied
his room, and told his father he was heading out for a job digging drainage ditches. “I
love you, Dad,” he said as he left to pick up Beasley. Before driving to the Bob Evans in
Marietta, Beasley and Rafferty went to Kmart and bought a couple of shovels. Then they
drove to a spot not far from where Geiger was buried, and Rafferty dug the grave
intended for Pauley. Before they left, Beasley put a $20 bill under a nearby rock: if it was
gone when they came back, he’d know someone had been there.
After breakfast with Pauley, Beasley had his new hire follow him to the Emporium in
Caldwell, to park his truck and trailer. He told Pauley the same story about the road to the
farm being split that he would later tell Davis. On the subsequent drive in the Buick,
Pauley asked about the job and Beasley told him not to worry: “You get an honest day’s
pay for an honest day’s work.” When they pulled over near the creek, Beasley asked
Rafferty and Pauley to follow him up a hill, but Rafferty said he had to go to the
bathroom. “And then, as I finished and turned around,” Rafferty told investigators, “I
heard a crack.” Pauley was lying facedown. Somehow his cowboy hat had ended up
hanging on a nearby branch.
Back in Akron, Beasley began to improvise. He’d heard from a friend about the reality-TV show Storage Wars, in which people bid on abandoned storage units hoping that
there might be valuable items hidden inside. Beasley told people he was involved in that
kind of thing, and began to unload Pauley’s stuff: he returned the U-Haul, sold Pauley’s
truck for $1,000, and sold the other belongings—the Christmas lights, the model trains,
some tackle boxes, the Jeff Gordon memorabilia—to neighbors or at flea markets.
The Pauley money quickly dwindled, but Beasley wasn’t all that concerned. He already
had a still-better victim lined up in Scott Davis. Before Davis had even hit the road for
Ohio, Beasley told his landlord that he’d won a bid on a fantastic storage unit that
contained a flat-screen TV, a computer, some lawn-care equipment, and, best of all, a
Harley. He told Rafferty that he thought he could net $30,000 on this kill, enough for him
to make it through the winter.
But at Beasley’s moment of anticipated triumph, his gun jammed. Rafferty was waiting
in the car when he saw Beasley hustling back toward him. “He got away,” Beasley said,
breathing hard as he climbed back into the Buick. If they saw Davis along the road,
Rafferty told the investigators, “I was to hit him with my car.” But they didn’t find him,
so they headed back out onto the highway. Beasley started madly tossing things out of the
car—the shovels, a leather jacket, the air freshener, even his own laptop. If Davis made it
to the police, he didn’t want the Buick to be easy to identify. Rafferty went along, but he
refused to toss out the rosary beads hanging from the rearview mirror. They were a gift
from his Gram Rita.
Eventually, they made their way back to Akron, where, as Rafferty saw it, any logic or
purpose to Beasley’s actions went out the window too. Following a botched murder like
Davis’s, you’d think Beasley would lie low. But he’d been counting on that haul, and
now that it had fallen through, he recklessly pursued another. Though police were already
talking with Davis and beginning to track down leads, they didn’t move quickly enough
to save Beasley’s fourth and final victim. On Sunday, November 13, exactly a week after
the attempt on Davis’s life, Beasley and Rafferty picked up a man named Timothy Kern
in the parking lot outside a pizzeria in Canton, where he’d spent the night sleeping in his
car. Kern was from the Akron area, 47 years old and divorced. He’d recently lost his job
as a street cleaner.
Beasley had a mental inventory of the items he thought Kern was bringing with him, and
almost as soon as they got into Rafferty’s Buick, Beasley began questioning him. Did he
have that laptop he’d mentioned? Kern said no, he’d left it behind with his sons Zachary
and Nicholas. The flatscreen TV? Same story: Zach and Nick had it. Instead, Kern had
brought an old TV. Apart from that, he just had a couple of garbage bags full of clothes
and cassette tapes, which fit easily in the back of Rafferty’s car. That, and the late-’80s
sedan that he’d abandoned in the pizzeria parking lot because it barely ran.
“I get half a pit in my stomach,” Rafferty later told the investigators, “because as the
story goes on and on, I’m realizing that I’m about to help Beasley do this for no reason at
all. Not that I even wanted to do it at all. But it takes, like, all the minimal sanity and
reason out of doing this … It would be like if a lion killed a zebra just to kill it … Just
’cause it wanted, like, its hoof or something. The man literally I think had $5 in his
pocket.” One other thing struck Rafferty at the time—enough so that he mentioned it to
the investigators more than once: Timothy Kern had given everything he had of value to
his sons, who were just a little older than Rafferty himself. It was clear that Kern’s family
had broken up, but just as clear was that “he loved his kids,” Rafferty told the
investigators.
In his e-mails to Jack, Kern had described himself as single and “available for immediate
relocation,” but hadn’t said much about his sons. In truth, Kern was ambivalent about the
caretaker job he’d been offered—he described it on his Facebook page as a “good offer”
but with “drawbacks,” because he would be more than two hours away from his sons and
wouldn’t have cellphone service. Kern and his ex-wife Tina had divorced in 1997, and
Zach and Nick were already 19 and 17. But Kern made a point of seeing them nearly
every day, even if that meant waiting around the corner from their house until after Tina
left for work.
Kern’s marriage wasn’t the only thing in his life that had fallen apart. In the 1990s, he’d
worked as a sound engineer at a local club, but when he lost that job in 2000, he had
trouble finding a new one. He lived with his parents for a few years, but then his father
kicked him out, and after that no one was sure where he slept. Maybe in his car.
But despite all that, or maybe because of it, he was never unsteady in his commitment to
Zach and Nick. He focused on his children in the intense way certain divorced dads do
when they’re cut off from the daily routines of their families. (He had another son from
an earlier marriage, whom he didn’t see much, and that might have played a part, too.)
“He only cared about these two. I mean, that was his purpose, that was his thing,” his ex-wife told me. It sometimes drove her crazy that he’d spend his last penny on cellphone
bills to make sure he could stay in touch with the boys—instead of, say, keeping up with
his child-support payments. “All day, texting, every day,” Tina said.
Zach and Nick present themselves to the world as pretty tough—they’re both covered in
tattoos, and Zach plays in a heavy-metal band—but they had remarkably tender
relationships with their father. They knew, for instance, to always answer his texts
quickly, so that he didn’t get his feelings hurt and follow up with Oh, I see you’re busy or
2 cool 4 dad. The day Kern left for Ohio, Nick, who was a senior in high school, lent him
$20. That night, Nick texted him before going to a party: I love you. I miss you. I’m proud
of you. Good luck. When Kern got up the next morning, he wrote Nick: Text me when you
wake up. Love you. Leaving soon.
Rafferty knew that they weren’t taking Kern to the same spot where they’d shot Geiger,
Pauley, and Davis—not even Beasley was that crazy. Instead, their destination was a
narrow wooded area behind a mall on the western edge of Akron, where Beasley had had
Rafferty dig a grave the night before. He’d done a sloppy job: it was barely two feet deep
and uneven, but Beasley no longer seemed to care. It was a Sunday, and the mall was
empty. Locals refer to the place as a “dead mall” because every store had gone out of
business during the recession, except for a mattress-and-furniture liquidator, where
desperate families went to sell their belongings. The other storefronts were suspended in
a last moment of forced cheer, with neon signs in the windows still reading Everything
must go and Wall to wall savings.
They parked, and Beasley told Kern that they had been there squirrel hunting earlier and
he’d lost his watch. Kern followed Beasley into the woods behind, which were littered
with plastic cups and beer cans from a party. Rafferty kept his distance, he told the
investigators, and then “heard a pop.” He saw Kern on his knees, holding the side of his
head. He kept taking “enormous gulp[s] of air.” Three shots later, he was still gulping.
Finally, on the fifth shot, he stopped.
That night, Nick tried to call his dad and got no answer, but he figured he was just getting
settled. Then a couple of days went by and Nick started to get worried. “I called him like
2,000 times. Because he would contact us like every hour of every day. And now
nothing?” Nick began sleeping with his phone in his hand and waking up to listen to the
messages, even though no new ones had registered. The next Sunday, a week later, he
was at a friend’s house watching a football game when his mom called and told him to
come home immediately—she had something she needed to tell him.
“Tell me now!” he screamed into the phone. “Tell me right fucking now!” As he
explained to me, “I knew. Because that would be the only explanation for him not calling
us.”
I was initially drawn to the story of the Beasley murders because I thought it would
illuminate the isolation and vulnerability of so many working-class men, who have been
pushed by the faltering economy from one way of life—a nine-to-five job, a wife,
children—into another, far more precarious one: unemployed or underemployed, single
or divorced, crashing on relatives’ spare beds or in the backseats of cars. At what other
moment in history would it have been plausible for a serial killer to identify middle-aged
white men as his most vulnerable targets?
But what I discovered in the course of my reporting was something quite different. As
traditional family structures are falling apart for working-class men, many of them are
forging new kinds of relationships: two old high-school friends who chat so many times a
day that they need to buy themselves walkie-talkies; a father who texts his almost-grown
sons as he goes to bed at night and as he wakes up in the morning.
Christians often talk about a “God-shaped hole,” a need inside us that can be filled only
by faith. But perhaps we share a “family-shaped hole.” When the old structures recede for
men, they find ways to replace them with alternative attachments, bonds with one or two
people that offer the warmth and intimacy typically provided by a wife or significant
other. If anything, these improvised families can prove more intense because they are
formed under duress and, lacking a conventional domestic routine or a recognized status,
they must be constantly tended and reinforced.
While researching a recent book she co-wrote about working-class fathers, Doing the
Best I Can, the sociologist Kathryn Edin noticed something surprising. The men she
spoke with were exceptionally emotional when it came to their children—children whom
many of the men did not live with and were not steadily providing for. They had taken
the ethos that fathers should be involved with their children and “kind of gone overboard
with it,” Edin explained to me, “so they were even more expressive than middle-class
men.” Often this emotiveness spilled over into other areas or landed on children who
were not their own, or even on other adults—a sibling or cousin, a childhood best
friend—as if the men were inventing a new language of intimacy. In some cases, when a
man was courting a woman, Edin found that he would court her child so intensely that it
seemed “the child was the main audience for his affections,” not the mother.
Edin concluded that for men who are failing the traditional tests of marriage and
parenting, this kind of intense emotional connection “is the last form of identity
available.” It’s a way to maintain a sense of family if you can’t be a reliable breadwinner,
or even keep up with child support.
David Pauley had his friend Chris Maul and his twin sister, Deb. Timothy Kern had his
sons Zach and Nick. Scott Davis was fortunate enough to live to tell his own story, but
even if he hadn’t, his mother was eagerly waiting for him to arrive in Ohio to help fix her
house. Of all Beasley’s victims, the one it took me the longest to learn much about was
the first, Ralph Geiger. He had some family in California and Atlanta, but unlike the
relatives of the other victims, they did not attend the trials of Beasley and Rafferty. After
Geiger was cremated, his ashes were delivered to a young woman named Summer
Rowley. Though she was not a relative of Geiger, she did attend the trials, and I visited
her at home one day. She mentioned that she was afraid of what people might think about
Geiger’s relationship with her, that he was just “hitting on the pretty young girl. But he
would never do that.”
Rowley met Geiger in 2004, when she was 19 and he was 49. A friend had set her up
with a job cleaning his house, and after a few visits he asked her whether she wanted to
try painting some drywall. Rowley didn’t know how to do that, but Geiger taught her, and
after a while they began working together regularly. He taught her how to fix a drain,
caulk a hole, and perform various other plumbing tasks. He taught her how to cook a
roast and make soup. “He was like a father,” Rowley told me. He helped change her from
a wild teenager into a young woman who was ready, at 25, to have a baby with her
fiancé. When her daughter was born, she presented Geiger as the little girl’s “pa-pa.” On
the mantel beneath Rowley’s TV is a picture of Geiger nestling the infant, barely a week
old, against his big chest.
Richard Beasley had believed that no one would come looking for the divorced,
unsettled, middle-aged men he was targeting. But he should have known better. Like his
victims, Beasley was himself divorced, and lived apart from his child, and was only
sporadically employed. And like them, he too had created an intense surrogate family
relationship, with Brogan Rafferty.
When prosecutors interviewed Beasley’s daughter, Tonya, she said that when she saw her
dad and Rafferty together on Sunday mornings, they seemed like father and son—much
closer to one another than she and her dad had ever been. On the stand, Rafferty
described Beasley as “the one person that I could go to … about anything,” “a father that
I never had.”
Rafferty, of course, did have a father, one with whom he lived and who provided for
him—right down to the white Buick LeSabre. But long before the murders, Rafferty
would complain to his half-sister Rayna about how tough his dad, Michael, was on him.
Michael hardly had an easy situation himself. When Brogan was not yet a week old,
Michael came home from work one day to discover that his wife, Yvette, and his infant
son were gone. It was bitterly cold that week—Brogan was born on Christmas Eve—and
Michael searched for 48 hours before Yvette came back home. He assumed she was
doing drugs again, and as much as Michael loved her, he decided to kick her out and raise
Brogan alone, with the help of his own mother, Rita.
At the trial, the local press seized on the story of how, at age 5, when he was in
kindergarten, Brogan would eat breakfast alone, get himself dressed, and make his own
way to the bus stop. “He raised himself, in my opinion,” one grade-school counselor who
knew him told the jury. But things weren’t quite so simple: Michael explained to me that
he worked an early shift at a machine shop and had to leave the house by 6:30 a.m.
Before he left, he laid out clothes for his son, poured his favorite cereal in a bowl, and left
him a little pitcher of milk. Then he gently woke him up and left for work.
That said, Michael allows that “I put a lot of responsibility” on Brogan, because it was
“just the two of us.” Michael is regimented and strict and has a fierce temper. He had
been raised to believe that boys don’t cry, and he raised Brogan the same way. Rayna
believes that Brogan drifted toward Beasley because he was a little scared of his father,
and Beasley was “like an escape.”
Rafferty’s lawyers wondered whether Beasley and the boy had a sexual relationship.
Rafferty’s dad wondered too. How else to explain a bond so intense it led Rafferty to pick
up a shovel and dig four graves? But Rafferty rolled his eyes when his dad asked and
said, “It wasn’t like that at all.” The real explanation seems less complicated. Michael
represented an old vision of fatherhood: strict, manly, and reliable, working the early shift
to put food on the table but coming home worn and agitated. Beasley, by contrast, had no
such parental obligations and was free to represent a newer and in some ways more
appealing vision: expressive, loving, always around to listen and give advice. It was easy
for Beasley to be a hero to Rafferty—and, to a lesser degree, to Rayna and the other kids
at their church. He did what their distracted, overworked, and somewhat traumatized
parents couldn’t do, Rayna says, which was “really connect to us.”
In November 2012, a jury convicted Brogan Rafferty of two dozen criminal counts,
including murder, robbery, and kidnapping. Judge Lynne Callahan told Rafferty that he
had been “dealt a lousy hand in life” but that he had “embraced the evil,” and sentenced
him to life without parole. In April 2013, Richard Beasley was also convicted of murder
and was sentenced to death. Throughout his trial, he maintained that he was innocent.
(Both Beasley and Rafferty are appealing their convictions.)
In letters to his father now, Rafferty sometimes sounds like a kid and sometimes like a
damaged man: “I’m sorry … I left my room a mess when I left. I’m sorry for disgracing
you and the family name,” he wrote from jail. He reads books from the library straight
out of a high-school curriculum—The Grapes of Wrath and Catcher in the Rye and
Treasure Island. He identifies with All Quiet on the Western Front, he wrote, because
prison life is like war: “Each man fights his own battle, and each with an invisible
enemy.” He has admitted to his dad that he used to resent him for being so strict, but now
he’s grateful, because thanks to all the rules and the chores and the premature
independence, he knows how to take care of himself.
Mostly, though, his letters are full of longing for family, for his dad and his half-sisters,
his dog, Whiskey, and his cats, Cow and Monkey, his mom and his grandma and grandpa
and his aunts and uncles. The Raffertys are an old Irish clan, with a coat of arms hanging
in the living room. Rafferty draws that coat of arms sometimes in prison, along with the
two tattoos he wants to get, one that says Dad and another that says Rita.
I deserve to be here, but I don’t deserve to sit in a hole while my loved ones and pets die
around me. That’s Hell.
I love you Dad, and I always will.
The Case for Working with Your Hands
By MATTHEW B. CRAWFORD
The television show “Deadliest Catch” depicts commercial crab fishermen in the Bering Sea.
Another, “Dirty Jobs,” shows all kinds of grueling work; one episode featured a guy who
inseminates turkeys for a living. The weird fascination of these shows must lie partly in the
fact that such confrontations with material reality have become exotically unfamiliar. Many
of us do work that feels more surreal than real. Working in an office, you often find it
difficult to see any tangible result from your efforts. What exactly have you accomplished at
the end of any given day? Where the chain of cause and effect is opaque and responsibility
diffuse, the experience of individual agency can be elusive. “Dilbert,” “The Office” and similar
portrayals of cubicle life attest to the dark absurdism with which many Americans have come
to view their white-collar jobs.
High-school shop-class programs were widely dismantled in the 1990s as educators prepared
students to become “knowledge workers.” The imperative of the last 20 years to round up
every warm body and send it to college, then to the cubicle, was tied to a vision of the future
in which we somehow take leave of material reality and glide about in a pure information
economy. This has not come to pass. To begin with, such work often feels more enervating
than gliding. More fundamentally, now as ever, somebody has to actually do things: fix our
cars, unclog our toilets, build our houses.
When we praise people who do work that is straightforwardly useful, the praise often betrays
an assumption that they had no other options. We idealize them as the salt of the earth and
emphasize the sacrifice for others their work may entail. Such sacrifice does indeed occur —
the hazards faced by a lineman restoring power during a storm come to mind. But what if
such work answers as well to a basic human need of the one who does it? I take this to be the
suggestion of Marge Piercy’s poem “To Be of Use,” which concludes with the lines “the pitcher
longs for water to carry/and a person for work that is real.” Beneath our gratitude for the
lineman may rest envy.
This seems to be a moment when the useful arts have an especially compelling economic
rationale. A car mechanics’ trade association reports that repair shops have seen their
business jump significantly in the current recession: people aren’t buying new cars; they are
fixing the ones they have. The current downturn is likely to pass eventually. But there are
also systemic changes in the economy, arising from information technology, that have the
surprising effect of making the manual trades — plumbing, electrical work, car repair —
more attractive as careers. The Princeton economist Alan Blinder argues that the crucial
distinction in the emerging labor market is not between those with more or less education,
but between those whose services can be delivered over a wire and those who must do their
work in person or on site. The latter will find their livelihoods more secure against
outsourcing to distant countries. As Blinder puts it, “You can’t hammer a nail over the
Internet.” Nor can the Indians fix your car. Because they are in India.
If the goal is to earn a living, then, maybe it isn’t really true that 18-year-olds need to be
imparted with a sense of panic about getting into college (though they certainly need to
learn). Some people are hustled off to college, then to the cubicle, against their own
inclinations and natural bents, when they would rather be learning to build things or fix
things. One shop teacher suggested to me that “in schools, we create artificial learning
environments for our children that they know to be contrived and undeserving of their full
attention and engagement. Without the opportunity to learn through the hands, the world
remains abstract and distant, and the passions for learning will not be engaged.”
A gifted young person who chooses to become a mechanic rather than to accumulate
academic credentials is viewed as eccentric, if not self-destructive. There is a pervasive
anxiety among parents that there is only one track to success for their children. It runs
through a series of gates controlled by prestigious institutions. Further, there is wide use of
drugs to medicate boys, especially, against their natural tendency toward action, the better
to “keep things on track.” I taught briefly in a public high school and would have loved to
have set up a Ritalin fogger in my classroom. It is a rare person, male or female, who is
naturally inclined to sit still for 17 years in school, and then indefinitely at work.
The trades suffer from low prestige, and I believe this is based on a simple mistake. Because
the work is dirty, many people assume it is also stupid. This is not my experience. I have a
small business as a motorcycle mechanic in Richmond, Va., which I started in 2002. I work
on Japanese and European motorcycles, mostly older bikes with some “vintage” cachet that
makes people willing to spend money on them. I have found the satisfactions of the work to
be very much bound up with the intellectual challenges it presents. And yet my decision to go
into this line of work is a choice that seems to perplex many people.
After finishing a Ph.D. in political philosophy at the University of Chicago in 2000, I
managed to stay on with a one-year postdoctoral fellowship at the university’s Committee on
Social Thought. The academic job market was utterly bleak. In a state of professional panic, I
retreated to a makeshift workshop I set up in the basement of a Hyde Park apartment
building, where I spent the winter tearing down an old Honda motorcycle and rebuilding it.
The physicality of it, and the clear specificity of what the project required of me, was a balm.
Stumped by a starter motor that seemed to check out in every way but wouldn’t work, I
started asking around at Honda dealerships. Nobody had an answer; finally one service
manager told me to call Fred Cousins of Triple O Service. “If anyone can help you, Fred can.”
I called Fred, and he invited me to come to his independent motorcycle-repair shop, tucked
discreetly into an unmarked warehouse on Goose Island. He told me to put the motor on a
certain bench that was free of clutter. He checked the electrical resistance through the
windings, as I had done, to confirm there was no short circuit or broken wire. He spun the
shaft that ran through the center of the motor, as I had. No problem: it spun freely. Then he
hooked it up to a battery. It moved ever so slightly but wouldn’t spin. He grasped the shaft,
delicately, with three fingers, and tried to wiggle it side to side. “Too much free play,” he
said. He suggested that the problem was with the bushing (a thick-walled sleeve of metal)
that captured the end of the shaft in the end of the cylindrical motor housing. It was worn, so
it wasn’t locating the shaft precisely enough. The shaft was free to move too much side to
side (perhaps a couple of hundredths of an inch), causing the outer circumference of the rotor
to bind on the inner circumference of the motor housing when a current was applied. Fred
scrounged around for a Honda motor. He found one with the same bushing, then used a
“blind hole bearing puller” to extract it, as well as the one in my motor. Then he gently
tapped the new, or rather newer, one into place. The motor worked! Then Fred gave me an
impromptu dissertation on the peculiar metallurgy of these Honda starter-motor bushings of
the mid-’70s. Here was a scholar.
Over the next six months I spent a lot of time at Fred’s shop, learning, and put in only
occasional appearances at the university. This was something of a regression: I worked on
cars throughout high school and college, and one of my early jobs was at a Porsche repair
shop. Now I was rediscovering the intensely absorbing nature of the work, and it got me
thinking about possible livelihoods.
As it happened, in the spring I landed a job as executive director of a policy organization in
Washington. This felt like a coup. But certain perversities became apparent as I settled into
the job. It sometimes required me to reason backward, from desired conclusion to suitable
premise. The organization had taken certain positions, and there were some facts it was
more fond of than others. As its figurehead, I was making arguments I didn’t fully buy
myself. Further, my boss seemed intent on retraining me according to a certain cognitive
style — that of the corporate world, from which he had recently come. This style demanded
that I project an image of rationality but not indulge too much in actual reasoning. As I sat
in my K Street office, Fred’s life as an independent tradesman gave me an image that I kept
coming back to: someone who really knows what he is doing, losing himself in work that is
genuinely useful and has a certain integrity to it. He also seemed to be having a lot of fun.
Seeing a motorcycle about to leave my shop under its own power, several days after arriving
in the back of a pickup truck, I don’t feel tired even though I’ve been standing on a concrete
floor all day. Peering into the portal of his helmet, I think I can make out the edges of a grin
on the face of a guy who hasn’t ridden his bike in a while. I give him a wave. With one of his
hands on the throttle and the other on the clutch, I know he can’t wave back. But I can hear
his salute in the exuberant “bwaaAAAAP!” of a crisp throttle, gratuitously revved. That
sound pleases me, as I know it does him. It’s a ventriloquist conversation in one mechanical
voice, and the gist of it is “Yeah!”
After five months at the think tank, I’d saved enough money to buy some tools I needed, and
I quit and went into business fixing bikes. My shop rate is $40 per hour. Other shops have
rates as high as $70 per hour, but I tend to work pretty slowly. Further, only about half the
time I spend in the shop ends up being billable (I have no employees; every little chore falls
to me), so it usually works out closer to $20 per hour — a modest but decent wage. The
business goes up and down; when it is down I have supplemented it with writing. The work
is sometimes frustrating, but it is never irrational.
And it frequently requires complex thinking. In fixing motorcycles you come up with several
imagined trains of cause and effect for manifest symptoms, and you judge their likelihood
before tearing anything down. This imagining relies on a mental library that you develop. An
internal combustion engine can work in any number of ways, and different manufacturers
have tried different approaches. Each has its own proclivities for failure. You also develop a
library of sounds and smells and feels. For example, the backfire of a too-lean fuel mixture is
subtly different from an ignition backfire.
As in any learned profession, you just have to know a lot. If the motorcycle is 30 years old,
from an obscure maker that went out of business 20 years ago, its tendencies are known
mostly through lore. It would probably be impossible to do such work in isolation, without
access to a collective historical memory; you have to be embedded in a community of
mechanic-antiquarians. These relationships are maintained by telephone, in a network of
reciprocal favors that spans the country. My most reliable source, Fred, has such an
encyclopedic knowledge of obscure European motorcycles that all I have been able to offer
him in exchange is deliveries of obscure European beer.
There is always a risk of introducing new complications when working on old motorcycles,
and this enters the diagnostic logic. Measured in likelihood of screw-ups, the cost is not
identical for all avenues of inquiry when deciding which hypothesis to pursue. Imagine you’re
trying to figure out why a bike won’t start. The fasteners holding the engine covers on 1970s-era Hondas are Phillips head, and they are almost always rounded out and corroded. Do you
really want to check the condition of the starter clutch if each of eight screws will need to be
drilled out and extracted, risking damage to the engine case? Such impediments have to be
taken into account. The attractiveness of any hypothesis is determined in part by physical
circumstances that have no logical connection to the diagnostic problem at hand. The
mechanic’s proper response to the situation cannot be anticipated by a set of rules or
algorithms.
There probably aren’t many jobs that can be reduced to rule-following and still be done well.
But in many jobs there is an attempt to do just this, and the perversity of it may go
unnoticed by those who design the work process. Mechanics face something like this problem
in the factory service manuals that we use. These manuals tell you to be systematic in
eliminating variables, presenting an idealized image of diagnostic work. But they never take
into account the risks of working on old machines. So you put the manual away and consider
the facts before you. You do this because ultimately you are responsible to the motorcycle
and its owner, not to some procedure.
Some diagnostic situations contain a lot of variables. Any given symptom may have several
possible causes, and further, these causes may interact with one another and therefore be
difficult to isolate. In deciding how to proceed, there often comes a point where you have to
step back and get a larger gestalt. Have a cigarette and walk around the lift. The gap
between theory and practice stretches out in front of you, and this is where it gets
interesting. What you need now is the kind of judgment that arises only from experience;
hunches rather than rules. For me, at least, there is more real thinking going on in the bike
shop than there was in the think tank.
Put differently, mechanical work has required me to cultivate different intellectual habits.
Further, habits of mind have an ethical dimension that we don’t often think about. Good
diagnosis requires attentiveness to the machine, almost a conversation with it, rather than
assertiveness, as in the position papers produced on K Street. Cognitive psychologists speak
of “metacognition,” which is the activity of stepping back and thinking about your own
thinking. It is what you do when you stop for a moment in your pursuit of a solution, and
wonder whether your understanding of the problem is adequate. The slap of worn-out pistons
hitting their cylinders can sound a lot like loose valve tappets, so to be a good mechanic you
have to be constantly open to the possibility that you may be mistaken. This is a virtue that
is at once cognitive and moral. It seems to develop because the mechanic, if he is the sort who
goes on to become good at it, internalizes the healthy functioning of the motorcycle as an
object of passionate concern. How else can you explain the elation he gets when he identifies
the root cause of some problem?
This active concern for the motorcycle is reinforced by the social aspects of the job. As is the
case with many independent mechanics, my business is based entirely on word of mouth. I
sometimes barter services with machinists and metal fabricators. This has a very different
feel than transactions with money; it situates me in a community. The result is that I really
don’t want to mess up anybody’s motorcycle or charge more than a fair price. You often hear
people complain about mechanics and other tradespeople whom they take to be dishonest or
incompetent. I am sure this is sometimes justified. But it is also true that the mechanic deals
with a large element of chance.
I once accidentally dropped a feeler gauge down into the crankcase of a Kawasaki Ninja that
was practically brand new, while performing its first scheduled valve adjustment. I escaped a
complete tear-down of the motor only through an operation that involved the use of a
stethoscope, another pair of trusted hands and the sort of concentration we associate with a
bomb squad. When finally I laid my fingers on that feeler gauge, I felt as if I had cheated
death. I don’t remember ever feeling so alive as in the hours that followed.
Often as not, however, such crises do not end in redemption. Moments of elation are
counterbalanced with failures, and these, too, are vivid, taking place right before your eyes.
With stakes that are often high and immediate, the manual trades elicit heedful absorption
in work. They are punctuated by moments of pleasure that take place against a darker
backdrop: a keen awareness of catastrophe as an always-present possibility. The core
experience is one of individual responsibility, supported by face-to-face interactions between
tradesman and customer.
Contrast the experience of being a middle manager. This is a stock figure of ridicule, but the
sociologist Robert Jackall spent years inhabiting the world of corporate managers,
conducting interviews, and he poignantly describes the “moral maze” they feel trapped in.
Like the mechanic, the manager faces the possibility of disaster at any time. But in his case
these disasters feel arbitrary; they are typically a result of corporate restructurings, not of
physics. A manager has to make many decisions for which he is accountable. Unlike an
entrepreneur with his own business, however, his decisions can be reversed at any time by
someone higher up the food chain (and there is always someone higher up the food chain).
It’s important for your career that these reversals not look like defeats, and more generally
you have to spend a lot of time managing what others think of you. Survival depends on a
crucial insight: you can’t back down from an argument that you initially made in
straightforward language, with moral conviction, without seeming to lose your integrity. So
managers learn the art of provisional thinking and feeling, expressed in corporate
doublespeak, and cultivate a lack of commitment to their own actions. Nothing is set in
concrete the way it is when you are, for example, pouring concrete.
Those who work on the lower rungs of the information-age office hierarchy face their own
kinds of unreality, as I learned some time ago. After earning a master’s degree in the early
1990s, I had a hard time finding work but eventually landed a job in the Bay Area writing
brief summaries of academic journal articles, which were then sold on CD-ROMs to
subscribing libraries. When I got the phone call offering me the job, I was excited. I felt I had
grabbed hold of the passing world — miraculously, through the mere filament of a classified
ad — and reeled myself into its current. My new bosses immediately took up residence in my
imagination, where I often surprised them with my hidden depths. As I was shown to my
cubicle, I felt a real sense of being honored. It seemed more than spacious enough. It was my
desk, where I would think my thoughts — my unique contribution to a common enterprise, in
a real company with hundreds of employees. The regularity of the cubicles made me feel I
had found a place in the order of things. I was to be a knowledge worker.
But the feel of the job changed on my first day. The company had gotten its start by
providing libraries with a subject index of popular magazines like Sports Illustrated.
Through a series of mergers and acquisitions, it now found itself offering not just indexes but
also abstracts (that is, summaries), and of a very different kind of material: scholarly works
in the physical and biological sciences, humanities, social sciences and law. Some of this stuff
was simply incomprehensible to anyone but an expert in the particular field covered by the
journal. I was reading articles in Classical Philology where practically every other word was
in Greek. Some of the scientific journals were no less mysterious. Yet the categorical
difference between, say, Sports Illustrated and Nature Genetics seemed not to have
impressed itself on the company’s decision makers. In some of the titles I was assigned,
articles began with an abstract written by the author. But even in such cases I was to write
my own. The reason offered was that unless I did so, there would be no “value added” by our
product. It was hard to believe I was going to add anything other than error and confusion to
such material. But then, I hadn’t yet been trained.
My job was structured on the supposition that in writing an abstract of an article there is a
method that merely needs to be applied, and that this can be done without understanding
the text. I was actually told this by the trainer, Monica, as she stood before a whiteboard,
diagramming an abstract. Monica seemed a perfectly sensible person and gave no outward
signs of suffering delusions. She didn’t insist too much on what she was telling us, and it
became clear she was in a position similar to that of a veteran Soviet bureaucrat who must
work on two levels at once: reality and official ideology. The official ideology was a bit like the
factory service manuals I mentioned before, the ones that offer procedures that mechanics
often have to ignore in order to do their jobs.
My starting quota, after finishing a week of training, was 15 articles per day. By my 11th
month at the company, my quota was up to 28 articles per day (this was the normal,
scheduled increase). I was always sleepy while at work, and I think this exhaustion was
because I felt trapped in a contradiction: the fast pace demanded complete focus on the task,
yet that pace also made any real concentration impossible. I had to actively suppress my own
ability to think, because the more you think, the more the inadequacies in your
understanding of an author’s argument come into focus. This can only slow you down. To not
do justice to an author who had poured himself into the subject at hand felt like violence
against what was best in myself.
The quota demanded, then, not just dumbing down but also a bit of moral re-education, the
opposite of the kind that occurs in the heedful absorption of mechanical work. I had to
suppress my sense of responsibility to the article itself, and to others — to the author, to
begin with, as well as to the hapless users of the database, who might naïvely suppose that
my abstract reflected the author’s work. Such detachment was made easy by the fact there
was no immediate consequence for me; I could write any nonsense whatever.
Now, it is probably true that every job entails some kind of mutilation. I used to work as an
electrician and had my own business doing it for a while. As an electrician you breathe a lot
of unknown dust in crawl spaces, your knees get bruised, your neck gets strained from
looking up at the ceiling while installing lights or ceiling fans and you get shocked regularly,
sometimes while on a ladder. Your hands are sliced up from twisting wires together,
handling junction boxes made out of stamped sheet metal and cutting metal conduit with a
hacksaw. But none of this damage touches the best part of yourself.
You might wonder: Wasn’t there any quality control? My supervisor would periodically read
a few of my abstracts, and I was sometimes corrected and told not to begin an abstract with a
dependent clause. But I was never confronted with an abstract I had written and told that it
did not adequately reflect the article. The quality standards were the generic ones of
grammar, which could be applied without my supervisor having to read the article at hand.
Rather, my supervisor and I both were held to a metric that was conjured by someone remote
from the work process — an absentee decision maker armed with a (putatively) profit-maximizing calculus, one that took no account of the intrinsic nature of the job. I wonder
whether the resulting perversity really made for maximum profits in the long term.
Corporate managers are not, after all, the owners of the businesses they run.
At lunch I had a standing arrangement with two other abstracters. One was from my group,
a laconic, disheveled man named Mike whom I liked instantly. He did about as well on his
quota as I did on mine, but it didn’t seem to bother him too much. The other guy was from
beyond the partition, a meticulously groomed Liberian named Henry who said he had
worked for the C.I.A. He had to flee Liberia very suddenly one day and soon found himself
resettled near the office parks of Foster City, Calif. Henry wasn’t going to sweat the quota.
Come 12:30, the three of us would hike to the food court in the mall. This movement was
always thrilling. It involved traversing several “campuses,” with ponds frequented by oddly
real seagulls, then the lunch itself, which I always savored. (Marx writes that under
conditions of estranged labor, man “no longer feels himself to be freely active in any but his
animal functions.”) Over his burrito, Mike would recount the outrageous things he had
written in his abstracts. I could see my own future in such moments of sabotage — the
compensating pleasures of a cubicle drone. Always funny and gentle, Mike confided one day
that he was doing quite a bit of heroin. On the job. This actually made some sense.
How was it that I, once a proudly self-employed electrician, had ended up among these
walking wounded, a “knowledge worker” at a salary of $23,000? I had a master’s degree, and
it needed to be used. The escalating demand for academic credentials in the job market gives
the impression of an ever-more-knowledgeable society, whose members perform cognitive
feats their unschooled parents could scarcely conceive of. On paper, my abstracting job,
multiplied a millionfold, is precisely what puts the futurologist in a rapture: we are getting to
be so smart! Yet my M.A. obscures a more real stupidification of the work I secured with that
credential, and a wage to match. When I first got the degree, I felt as if I had been inducted
to a certain order of society. But despite the beautiful ties I wore, it turned out to be a more
proletarian existence than I had known as an electrician. In that job I had made quite a bit
more money. I also felt free and active, rather than confined and stultified.
A good job requires a field of action where you can put your best capacities to work and see
an effect in the world. Academic credentials do not guarantee this.
Nor can big business or big government — those idols of the right and the left — reliably
secure such work for us. Everyone is rightly concerned about economic growth on the one
hand or unemployment and wages on the other, but the character of work doesn’t figure
much in political debate. Labor unions address important concerns like workplace safety and
family leave, and management looks for greater efficiency, but on the nature of the job itself,
the dominant political and economic paradigms are mute. Yet work forms us, and deforms
us, with broad public consequences.
The visceral experience of failure seems to have been edited out of the career trajectories of
gifted students. It stands to reason, then, that those who end up making big decisions that
affect all of us don’t seem to have much sense of their own fallibility, and of how badly things
can go wrong even with the best of intentions (like when I dropped that feeler gauge down
into the Ninja). In the boardrooms of Wall Street and the corridors of Pennsylvania Avenue, I
don’t think you’ll see a yellow sign that says “Think Safety!” as you do on job sites and in
many repair shops, no doubt because those who sit on the swivel chairs tend to live remote
from the consequences of the decisions they make. Why not encourage gifted students to
learn a trade, if only in the summers, so that their fingers will be crushed once or twice
before they go on to run the country?
There is good reason to suppose that responsibility has to be installed in the foundation of
your mental equipment — at the level of perception and habit. There is an ethic of paying
attention that develops in the trades through hard experience. It inflects your perception of
the world and your habitual responses to it. This is due to the immediate feedback you get
from material objects and to the fact that the work is typically situated in face-to-face
interactions between tradesman and customer.
An economy that is more entrepreneurial, less managerial, would be less subject to the kind
of distortions that occur when corporate managers’ compensation is tied to the short-term
profit of distant shareholders. For most entrepreneurs, profit is at once a more capacious and
a more concrete thing than this. It is a calculation in which the intrinsic satisfactions of work
count — not least, the exercise of your own powers of reason.
Ultimately it is enlightened self-interest, then, not a harangue about humility or public-spiritedness, that will compel us to take a fresh look at the trades. The good life comes in a
variety of forms. This variety has become difficult to see; our field of aspiration has narrowed
into certain channels. But the current perplexity in the economy seems to be softening our
gaze. Our peripheral vision is perhaps recovering, allowing us to consider the full range of
lives worth choosing. For anyone who feels ill suited by disposition to spend his days sitting
in an office, the question of what a good job looks like is now wide open.
Fenimore Cooper's Literary Offenses
by Mark Twain
"The Pathfinder" and "The Deerslayer" stand at the head of Cooper's novels as
artistic creations. There are others of his works which contain parts as perfect as are
to be found in these, and scenes even more thrilling. Not one can be compared with
either of them as a finished whole. The defects in both of these tales are
comparatively slight. They were pure works of art.
--Professor Lounsbury
The five tales reveal an extraordinary fullness of invention. ... One of the very
greatest characters in fiction, Natty Bumppo... The craft of the woodsman, the tricks
of the trapper, all the delicate art of the forest were familiar to Cooper from his
youth up.
--Professor Matthews
Cooper is the greatest artist in the domain of romantic fiction in America.
--Wilkie Collins
It seems to me that it was far from right for the Professor of English Literature at
Yale, the Professor of English Literature in Columbia, and Wilkie Collins to deliver
opinions on Cooper's literature without having read some of it. It would have been
much more decorous to keep silent and let persons talk who have read Cooper.
Cooper's art has some defects. In one place in "Deerslayer," and in the restricted
space of two-thirds of a page, Cooper has scored 114 offenses against literary art out
of a possible 115. It breaks the record.
There are nineteen rules governing literary art in the domain of romantic fiction -- some
say twenty-two. In "Deerslayer," Cooper violated eighteen of them. These eighteen
require:
1. That a tale shall accomplish something and arrive somewhere. But the
"Deerslayer" tale accomplishes nothing and arrives in air.
2. They require that the episodes in a tale shall be necessary parts of the tale,
and shall help to develop it. But as the "Deerslayer" tale is not a tale, and
accomplishes nothing and arrives nowhere, the episodes have no rightful
place in the work, since there was nothing for them to develop.
3. They require that the personages in a tale shall be alive, except in the case of
corpses, and that always the reader shall be able to tell the corpses from the
others. But this detail has often been overlooked in the "Deerslayer" tale.
4. They require that the personages in a tale, both dead and alive, shall exhibit
a sufficient excuse for being there. But this detail also has been overlooked in
the "Deerslayer" tale.
5. They require that when the personages of a tale deal in conversation, the talk
shall sound like human talk, and be talk such as human beings would be
likely to talk in the given circumstances, and have a discoverable meaning,
also a discoverable purpose, and a show of relevancy, and remain in the
neighborhood of the subject at hand, and be interesting to the reader, and
help out the tale, and stop when the people cannot think of anything more to
say. But this requirement has been ignored from the beginning of the
"Deerslayer" tale to the end of it.
6. They require that when the author describes the character of a personage in
the tale, the conduct and conversation of that personage shall justify said
description. But this law gets little or no attention in the "Deerslayer" tale, as
Natty Bumppo's case will amply prove.
7. They require that when a personage talks like an illustrated, gilt-edged, tree-calf, hand-tooled, seven-dollar Friendship's Offering in the beginning of a
paragraph, he shall not talk like a negro minstrel in the end of it. But this
rule is flung down and danced upon in the "Deerslayer" tale.
8. They require that crass stupidities shall not be played upon the reader as
"the craft of the woodsman, the delicate art of the forest," by either the
author or the people in the tale. But this rule is persistently violated in the
"Deerslayer" tale.
9. They require that the personages of a tale shall confine themselves to
possibilities and let miracles alone; or, if they venture a miracle, the author
must so plausibly set it forth as to make it look possible and reasonable. But
these rules are not respected in the "Deerslayer" tale.
10. They require that the author shall make the reader feel a deep interest in the
personages of his tale and in their fate; and that he shall make the reader
love the good people in the tale and hate the bad ones. But the reader of the
"Deerslayer" tale dislikes the good people in it, is indifferent to the others,
and wishes they would all get drowned together.
11. They require that the characters in a tale shall be so clearly defined that the
reader can tell beforehand what each will do in a given emergency. But in the
"Deerslayer" tale, this rule is vacated.
In addition to these large rules, there are some little ones. These require that the
author shall:
12. Say what he is proposing to say, not merely come near it.
13. Use the right word, not its second cousin.
14. Eschew surplusage.
15. Not omit necessary details.
16. Avoid slovenliness of form.
17. Use good grammar.
18. Employ a simple and straightforward style.
Even these seven are coldly and persistently violated in the "Deerslayer" tale.
Cooper's gift in the way of invention was not a rich endowment; but such as it was
he liked to work it, he was pleased with the effects, and indeed he did some quite
sweet things with it. In his little box of stage-properties he kept six or eight cunning
devices, tricks, artifices for his savages and woodsmen to deceive and circumvent
each other with, and he was never so happy as when he was working these innocent
things and seeing them go. A favorite one was to make a moccasined person tread in
the tracks of a moccasined enemy, and thus hide his own trail. Cooper wore out
barrels and barrels of moccasins in working that trick. Another stage-property that
he pulled out of his box pretty frequently was the broken twig. He prized his broken
twig above all the rest of his effects, and worked it the hardest. It is a restful chapter
in any book of his when somebody doesn't step on a dry twig and alarm all the reds
and whites for two hundred yards around. Every time a Cooper person is in peril,
and absolute silence is worth four dollars a minute, he is sure to step on a dry twig.
There may be a hundred other handier things to step on, but that wouldn't satisfy
Cooper. Cooper requires him to turn out and find a dry twig; and if he can't do it, go
and borrow one. In fact, the Leatherstocking Series ought to have been called the
Broken Twig Series.
I am sorry that there is not room to put in a few dozen instances of the delicate art of
the forest, as practiced by Natty Bumppo and some of the other Cooperian experts.
Perhaps we may venture two or three samples. Cooper was a sailor -- a naval officer;
yet he gravely tells us how a vessel, driving toward a lee shore in a gale, is steered
for a particular spot by her skipper because he knows of an undertow there which
will hold her back against the gale and save her. For just pure woodcraft, or
sailorcraft, or whatever it is, isn't that neat? For several years, Cooper was daily in
the society of artillery, and he ought to have noticed that when a cannon-ball strikes
the ground it either buries itself or skips a hundred feet or so; skips again a hundred
feet or so -- and so on, till finally it gets tired and rolls. Now in one place he loses
some "females" -- as he always calls women -- in the edge of a wood near a plain at
night in a fog, on purpose to give Bumppo a chance to show off the delicate art of the
forest before the reader. These mislaid people are hunting for a fort. They hear a
cannon-blast, and a cannon-ball presently comes rolling into the wood and stops at
their feet. To the females this suggests nothing. The case is very different with the
admirable Bumppo. I wish I may never know peace again if he doesn't strike out
promptly and follow the track of that cannon-ball across the plain in the dense fog
and find the fort. Isn't it a daisy? If Cooper had any real knowledge of Nature's ways
of doing things, he had a most delicate art in concealing the fact. For instance: one of
his acute Indian experts, Chingachgook (pronounced Chicago, I think), has lost the
trail of a person he is tracking through the forest. Apparently that trail is hopelessly
lost. Neither you nor I could ever have guessed the way to find it. It was very
different with Chicago. Chicago was not stumped for long. He turned a running
stream out of its course, and there, in the slush in its old bed, were that person's
moccasin tracks. The current did not wash them away, as it would have done in all
other like cases -- no, even the eternal laws of Nature have to vacate when Cooper
wants to put up a delicate job of woodcraft on the reader.
We must be a little wary when Brander Matthews tells us that Cooper's books
"reveal an extraordinary fullness of invention." As a rule, I am quite willing to
accept Brander Matthews's literary judgments and applaud his lucid and graceful
phrasing of them; but that particular statement needs to be taken with a few tons of
salt. Bless your heart, Cooper hadn't any more invention than a horse; and I don't
mean a high-class horse, either; I mean a clothes-horse. It would be very difficult to
find a really clever "situation" in Cooper's books, and still more difficult to find one
of any kind which he has failed to render absurd by his handling of it. Look at the
episodes of "the caves"; and at the celebrated scuffle between Maqua and those
others on the table-land a few days later; and at Hurry Harry's queer water-transit
from the castle to the ark; and at Deerslayer's half-hour with his first corpse; and at
the quarrel between Hurry Harry and Deerslayer later; and at -- but choose for
yourself; you can't go amiss.
If Cooper had been an observer his inventive faculty would have worked better; not
more interestingly, but more rationally, more plausibly. Cooper's proudest creations
in the way of "situations" suffer noticeably from the absence of the observer's
protecting gift. Cooper's eye was splendidly inaccurate. Cooper seldom saw anything
correctly. He saw nearly all things as through a glass eye, darkly. Of course a man
who cannot see the commonest little every-day matters accurately is working at a
disadvantage when he is constructing a "situation." In the "Deerslayer" tale Cooper
has a stream which is fifty feet wide where it flows out of a lake; it presently
narrows to twenty as it meanders along for no given reason, and yet when a stream
acts like that it ought to be required to explain itself. Fourteen pages later the width
of the brook's outlet from the lake has suddenly shrunk thirty feet, and become "the
narrowest part of the stream." This shrinkage is not accounted for. The stream has
bends in it, a sure indication that it has alluvial banks and cuts them; yet these
bends are only thirty and fifty feet long. If Cooper had been a nice and punctilious
observer he would have noticed that the bends were oftener nine hundred feet long
than short of it.
Cooper made the exit of that stream fifty feet wide, in the first place, for no
particular reason; in the second place, he narrowed it to less than twenty to
accommodate some Indians. He bends a "sapling" to form an arch over this narrow
passage, and conceals six Indians in its foliage. They are "laying" for a settler's scow
or ark which is coming up the stream on its way to the lake; it is being hauled
against the stiff current by a rope whose stationary end is anchored in the lake; its
rate of progress cannot be more than a mile an hour. Cooper describes the ark, but
pretty obscurely. In the matter of dimensions "it was little more than a modern
canal boat." Let us guess, then, that it was about one hundred and forty feet long. It
was of "greater breadth than common." Let us guess then that it was about sixteen
feet wide. This leviathan had been prowling down bends which were but a third as
long as itself, and scraping between banks where it only had two feet of space to
spare on each side. We cannot too much admire this miracle. A low-roofed dwelling
occupies "two-thirds of the ark's length" -- a dwelling ninety feet long and sixteen
feet wide, let us say -- a kind of vestibule train. The dwelling has two rooms -- each
forty-five feet long and sixteen feet wide, let us guess. One of them is the bedroom of
the Hutter girls, Judith and Hetty; the other is the parlor in the daytime, at night it
is papa's bedchamber. The ark is arriving at the stream's exit now, whose width has
been reduced to less than twenty feet to accommodate the Indians -- say to eighteen.
There is a foot to spare on each side of the boat. Did the Indians notice that there
was going to be a tight squeeze there? Did they notice that they could make money
by climbing down out of that arched sapling and just stepping aboard when the ark
scraped by? No, other Indians would have noticed these things, but Cooper's Indians
never notice anything. Cooper thinks they are marvelous creatures for noticing, but
he was almost always in error about his Indians. There was seldom a sane one
among them.
The ark is one hundred and forty feet long; the dwelling is ninety feet long. The idea
of the Indians is to drop softly and secretly from the arched sapling to the dwelling
as the ark creeps along under it at the rate of a mile an hour, and butcher the
family. It will take the ark a minute and a half to pass under. It will take the ninety-foot dwelling a minute to pass under. Now, then, what did the six Indians do? It
would take you thirty years to guess, and even then you would have to give it up, I
believe. Therefore, I will tell you what the Indians did. Their chief, a person of quite
extraordinary intellect for a Cooper Indian, warily watched the canal-boat as it
squeezed along under him and when he had got his calculations fined down to
exactly the right shade, as he judged, he let go and dropped. And missed the house!
That is actually what he did. He missed the house, and landed in the stern of the
scow. It was not much of a fall, yet it knocked him silly. He lay there unconscious. If
the house had been ninety-seven feet long he would have made the trip. The error
lay in the construction of the house. Cooper was no architect.
There still remained in the roost five Indians. The boat has passed under and is now
out of their reach. Let me explain what the five did -- you would not be able to
reason it out for yourself. No. 1 jumped for the boat, but fell in the water astern of it.
Then No. 2 jumped for the boat, but fell in the water still further astern of it. Then
No. 3 jumped for the boat, and fell a good way astern of it. Then No. 4 jumped for the
boat, and fell in the water away astern. Then even No. 5 made a jump for the boat -- for he was a Cooper Indian. In that matter of intellect, the difference between a
Cooper Indian and the Indian that stands in front of the cigar-shop is not spacious.
The scow episode is really a sublime burst of invention; but it does not thrill, because
the inaccuracy of the details throws a sort of air of fictitiousness and general
improbability over it. This comes of Cooper's inadequacy as an observer.
The reader will find some examples of Cooper's high talent for inaccurate
observation in the account of the shooting-match in "The Pathfinder."
A common wrought nail was driven lightly into the target, its head having been first
touched with paint.
The color of the paint is not stated -- an important omission, but Cooper deals freely
in important omissions. No, after all, it was not an important omission; for this nail-head is a hundred yards from the marksmen, and could not be seen at that distance,
no matter what its color might be. How far can the best eyes see a common house-fly?
A hundred yards? It is quite impossible. Very well; eyes that cannot see a house-fly
that is a hundred yards away cannot see an ordinary nail-head at that distance, for
the size of the two objects is the same. It takes a keen eye to see a fly or a nail-head
at fifty yards -- one hundred and fifty feet. Can the reader do it?
The nail was lightly driven, its head painted, and game called. Then the Cooper
miracles began. The bullet of the first marksman chipped an edge of the nail-head;
the next man's bullet drove the nail a little way into the target -- and removed all
the paint. Haven't the miracles gone far enough now? Not to suit Cooper; for the
purpose of this whole scheme is to show off his prodigy, Deerslayer-Hawkeye-Long-Rifle-Leatherstocking-Pathfinder-Bumppo before the ladies.
"Be all ready to clench it, boys!" cried out Pathfinder, stepping into his friend's
tracks the instant they were vacant. "Never mind a new nail; I can see that, though
the paint is gone, and what I can see I can hit at a hundred yards, though it were
only a mosquito's eye. Be ready to clench!"
The rifle cracked, the bullet sped its way, and the head of the nail was buried in the
wood, covered by the piece of flattened lead.
There, you see, is a man who could hunt flies with a rifle, and command a ducal
salary in a Wild West show to-day if we had him back with us.
The recorded feat is certainly surprising just as it stands; but it is not surprising
enough for Cooper. Cooper adds a touch. He has made Pathfinder do this miracle
with another man's rifle; and not only that, but Pathfinder did not have even the
advantage of loading it himself. He had everything against him, and yet he made
that impossible shot; and not only made it, but did it with absolute confidence,
saying, "Be ready to clench." Now a person like that would have undertaken that
same feat with a brickbat, and with Cooper to help he would have achieved it, too.
Pathfinder showed off handsomely that day before the ladies. His very first feat was a
thing which no Wild West show can touch. He was standing with the group of
marksmen, observing -- a hundred yards from the target, mind; one Jasper
raised his rifle and drove the center of the bull's-eye. Then the Quartermaster fired.
The target exhibited no result this time. There was a laugh. "It's a dead miss," said
Major Lundie. Pathfinder waited an impressive moment or two; then said, in that
calm, indifferent, know-it-all way of his, "No, Major, he has covered Jasper's bullet,
as will be seen if anyone will take the trouble to examine the target."
Wasn't it remarkable! How could he see that little pellet fly through the air and
enter that distant bullet-hole? Yet that is what he did; for nothing is impossible to a
Cooper person. Did any of those people have any deep-seated doubts about this
thing? No; for that would imply sanity, and these were all Cooper people.
The respect for Pathfinder's skill and for his quickness and accuracy of sight [the
italics are mine] was so profound and general, that the instant he made this
declaration the spectators began to distrust their own opinions, and a dozen rushed
to the target in order to ascertain the fact. There, sure enough, it was found that the
Quartermaster's bullet had gone through the hole made by Jasper's, and that, too, so
accurately as to require a minute examination to be certain of the circumstance,
which, however, was soon clearly established by discovering one bullet over the
other in the stump against which the target was placed.
They made a "minute" examination; but never mind, how could they know that there
were two bullets in that hole without digging the latest one out? for neither probe
nor eyesight could prove the presence of any more than one bullet. Did they dig? No;
as we shall see. It is the Pathfinder's turn now; he steps out before the ladies, takes
aim, and fires.
But, alas! here is a disappointment; an incredible, an unimaginable disappointment -- for the target's aspect is unchanged; there is nothing there but that same old bullet-hole!
"If one dared to hint at such a thing," cried Major Duncan, "I should say that the
Pathfinder has also missed the target."
As nobody had missed it yet, the "also" was not necessary; but never mind about
that, for the Pathfinder is going to speak.
"No, no, Major," said he, confidently, "that would be a risky declaration. I didn't load
the piece, and can't say what was in it; but if it was lead, you will find the bullet
driving down those of the Quartermaster and Jasper, else is not my name
Pathfinder."
A shout from the target announced the truth of this assertion.
Is the miracle sufficient as it stands? Not for Cooper. The Pathfinder speaks again,
as he "now slowly advances toward the stage occupied by the females":
"That's not all, boys, that's not all; if you find the target touched at all, I'll own to a
miss. The Quartermaster cut the wood, but you'll find no wood cut by that last
messenger."
The miracle is at last complete. He knew -- doubtless saw -- at the distance of a
hundred yards -- that his bullet had passed into the hole without fraying the edges.
There were now three bullets in that one hole -- three bullets embedded
processionally in the body of the stump back of the target. Everybody knew this -- somehow or other -- and yet nobody had dug any of them out to make sure. Cooper is
not a close observer, but he is interesting. He is certainly always that, no matter
what happens. And he is more interesting when he is not noticing what he is about
than when he is. This is a considerable merit.
The conversations in the Cooper books have a curious sound in our modern ears. To
believe that such talk really ever came out of people's mouths would be to believe
that there was a time when time was of no value to a person who thought he had
something to say; when it was the custom to spread a two-minute remark out to ten;
when a man's mouth was a rolling-mill, and busied itself all day long in turning
four-foot pigs of thought into thirty-foot bars of conversational railroad iron by
attenuation; when subjects were seldom faithfully stuck to, but the talk wandered
all around and arrived nowhere; when conversations consisted mainly of
irrelevancies, with here and there a relevancy, a relevancy with an embarrassed
look, as not being able to explain how it got there.
Cooper was certainly not a master in the construction of dialogue. Inaccurate
observation defeated him here as it defeated him in so many other enterprises of his
life. He even failed to notice that the man who talks corrupt English six days in the
week must and will talk it on the seventh, and can't help himself. In the "Deerslayer"
story, he lets Deerslayer talk the showiest kind of book-talk sometimes, and at other
times the basest of base dialects. For instance, when some one asks him if he has a
sweetheart, and if so, where she abides, this is his majestic answer:
"She's in the forest -- hanging from the boughs of the trees, in a soft rain -- in the dew
on the open grass -- the clouds that float about in the blue heavens -- the birds that
sing in the woods -- the sweet springs where I slake my thirst -- and in all the other
glorious gifts that come from God's Providence!"
And he preceded that, a little before, with this:
"It consarns me as all things that touches a friend consarns a friend."
And this is another of his remarks:
"If I was Injin born, now, I might tell of this, or carry in the scalp and boast of the
expl'ite afore the whole tribe; or if my inimy had only been a bear" -- [and so on]
We cannot imagine such a thing as a veteran Scotch Commander-in-Chief
comporting himself like a windy melodramatic actor, but Cooper could. On one
occasion, Alice and Cora were being chased by the French through a fog in the
neighborhood of their father's fort:
“Point de quartier aux coquins!” cried an eager pursuer, who seemed to direct the
operations of the enemy.
"Stand firm and be ready, my gallant 60ths!" suddenly exclaimed a voice above
them; "wait to see the enemy, fire low, and sweep the glacis."
"Father! father" exclaimed a piercing cry from out the mist. "It is I! Alice! thy own
Elsie! spare, O! save your daughters!"
"Hold!" shouted the former speaker, in the awful tones of parental agony, the sound
reaching even to the woods, and rolling back in a solemn echo. "'Tis she! God has
restored me my children! Throw open the sally-port; to the field, 60ths, to the field!
pull not a trigger, lest ye kill my lambs! Drive off these dogs of France with your
steel!"
Cooper's word-sense was singularly dull. When a person has a poor ear for music he
will flat and sharp right along without knowing it. He keeps near the tune, but it is
not the tune. When a person has a poor ear for words, the result is a literary flatting
and sharping; you perceive what he is intending to say, but you also perceive that he
does not say it. This is Cooper. He was not a word-musician. His ear was satisfied
with the approximate words. I will furnish some circumstantial evidence in support
of this charge. My instances are gathered from half a dozen pages of the tale called
"Deerslayer." He uses "Verbal" for "oral"; "precision" for "facility"; "phenomena" for
"marvels"; "necessary" for "predetermined"; "unsophisticated" for "primitive";
"preparation" for "expectancy"; "rebuked" for "subdued"; "dependent on" for
"resulting from"; "fact" for "condition"; "fact" for "conjecture"; "precaution" for
"caution"; "explain" for "determine"; "mortified" for "disappointed"; "meretricious" for
"factitious"; "materially" for "considerably"; "decreasing" for "deepening";
"increasing" for "disappearing"; "embedded" for "inclosed"; "treacherous" for
"hostile"; "stood" for "stooped"; "softened" for "replaced"; "rejoined" for "remarked";
"situation" for "condition"; "different" for "differing"; "insensible" for "unsentient";
"brevity" for "celerity"; "distrusted" for "suspicious"; "mental imbecility" for
"imbecility"; "eyes" for "sight"; "counteracting" for "opposing"; "funeral obsequies" for
"obsequies."
There have been daring people in the world who claimed that Cooper could write
English, but they are all dead now -- all dead but Lounsbury. I don't remember that
Lounsbury makes the claim in so many words, still he makes it, for he says that
"Deerslayer" is a "pure work of art." Pure, in that connection, means faultless -faultless in all details -- and language is a detail. If Mr. Lounsbury had only
compared Cooper's English with the English he writes himself -- but it is plain that
he didn't; and so it is likely that he imagines until this day that Cooper's is as clean
and compact as his own. Now I feel sure, deep down in my heart, that Cooper wrote
about the poorest English that exists in our language, and that the English of
"Deerslayer" is the very worst that even Cooper ever wrote.
I may be mistaken, but it does seem to me that "Deerslayer" is not a work of art in
any sense; it does seem to me that it is destitute of every detail that goes to the
making of a work of art; in truth, it seems to me that "Deerslayer" is just simply a
literary delirium tremens.
A work of art? It has no invention; it has no order, system, sequence, or result; it has
no lifelikeness, no thrill, no stir, no seeming of reality; its characters are confusedly
drawn, and by their acts and words they prove that they are not the sort of people
the author claims that they are; its humor is pathetic; its pathos is funny; its
conversations are -- oh! indescribable; its love-scenes odious; its English a crime
against the language.
Counting these out, what is left is Art. I think we must all admit that.
ON THE USES OF A LIBERAL EDUCATION: As a
Weapon in the Hands of the Restless Poor
By Earl Shorris
Next month I will publish a book about poverty in America, but not the book I
intended. The world took me by surprise--not once, but again and again. The poor
themselves led me in directions I could not have imagined, especially the one that
came out of a conversation in a maximum-security prison for women that is set,
incongruously, in a lush Westchester suburb fifty miles north of New York City.
I had been working on the book for about three years when I went to the Bedford
Hills Correctional Facility for the first time. The staff and inmates had developed a
program to deal with family violence, and I wanted to see how their ideas fit with
what I had learned about poverty.
Numerous forces--hunger, isolation, illness, landlords, police, abuse, neighbors,
drugs, criminals, and racism, among many others--exert themselves on the poor at
all times and enclose them, making up a "surround of force" from which, it seems,
they cannot escape. I had come to understand that this was what kept the poor from
being political and that the absence of politics in their lives was what kept them
poor. I don't mean "political" in the sense of voting in an election but in the way
Thucydides used the word: to mean activity with other people at every level, from
the family to the neighborhood to the broader community to the city-state.
By the time I got to Bedford Hills, I had listened to more than six hundred people,
some of them over the course of two or three years. Although my method is that of
the bricoleur, the tinkerer who assembles a thesis of the bric-a-brac he finds in the
world, I did not think there would be any more surprises. But I had not counted on
what Viniece Walker was to say.
It is considered bad form in prison to speak of a person's crime, and I will follow that
precise etiquette here. I can tell you that Viniece Walker came to Bedford Hills when
she was twenty years old, a high school dropout who read at the level of a college
sophomore, a graduate of crackhouses, the streets of Harlem, and a long alliance
with a brutal man. On the surface Viniece has remained as tough as she was on the
street. She speaks bluntly, and even though she is HIV positive and the virus has
progressed during her time in prison, she still swaggers as she walks down the long
prison corridors. While in prison, Niecie, as she is known to her friends, completed
her high school requirements and began to pursue a college degree (psychology is the
only major offered at Bedford Hills, but Niecie also took a special interest in
philosophy). She became a counselor to women with a history of family violence and
a comforter to those with AIDS.
Only the deaths of other women cause her to stumble in the midst of her swaggering
step, to spend days alone with the remorse that drives her to seek redemption. She
goes through life as if she had been imagined by Dostoevsky, but even more complex
than his fictions, alive, a person, a fair-skinned and freckled African-American
woman, and in prison. It was she who responded to my sudden question, "Why do
you think people are poor?"
We had never met before. The conversation around us focused on the abuse of
women. Niecie's eyes were perfectly opaque--hostile, prison eyes. Her mouth was set
in the beginning of a sneer.
"You got to begin with the children," she said, speaking rapidly, clipping out the
street sounds as they came into her speech.
She paused long enough to let the change of direction take effect, then resumed the
rapid, rhythmless speech. "You've got to teach the moral life of downtown to the
children. And the way you do that, Earl, is by taking them downtown to plays,
museums, concerts, lectures, where they can learn the moral life of downtown."
I smiled at her, misunderstanding, thinking I was indulging her. "And then they
won't be poor anymore?"
She read every nuance of my response, and answered angrily, "And they won't be
poor no more."
"What you mean is--"
"What I mean is what I said--a moral alternative to the street."
She didn't speak of jobs or money. In that, she was like the others I had listened to.
No one had spoken of jobs or money. But how could the "moral life of downtown"
lead anyone out from the surround of force? How could a museum push poverty
away? Who can dress in statues or eat the past? And what of the political life? Had
Niecie skipped a step or failed to take a step? The way out of poverty was politics,
not the "moral life of downtown." But to enter the public world, to practice the
political life, the poor had first to learn to reflect. That was what Niecie meant by
the "moral life of downtown." She did not make the error of divorcing ethics from
politics. Niecie had simply said, in a kind of shorthand, that no one could step out of
the panicking circumstance of poverty directly into the public world.
Although she did not say so, I was sure that when she spoke of the "moral life of
downtown" she meant something that had happened to her. With no job and no
money, a prisoner, she had undergone a radical transformation. She had followed
the same path that led to the invention of politics in ancient Greece. She had learned
to reflect. In further conversation it became clear that when she spoke of "the moral
life of downtown" she meant the humanities, the study of human constructs and
concerns, which has been the source of reflection for the secular world since the
Greeks first stepped back from nature to experience wonder at what they beheld. If
the political life was the way out of poverty, the humanities provided an entrance to
reflection and the political life. The poor did not need anyone to release them; an
escape route existed. But to open this avenue to reflection and politics a major
distinction between the preparation for the life of the rich and the life of the poor
had to be eliminated.
Once Niecie had challenged me with her theory, the comforts of tinkering came to an
end; I could no longer make an homage to the happenstance world and rest. To test
Niecie's theory, students, faculty, and facilities were required. Quantitative
measures would have to be developed; anecdotal information would also be useful.
And the ethics of the experiment had to be considered: I resolved to do no harm.
There was no need for the course to have a "sink or swim" character; it could aim to
keep as many afloat as possible.
When the idea for an experimental course became clear in my mind, I discussed it
with Dr. Jaime Inclan, director of the Roberto Clemente Family Guidance Center in
lower Manhattan, a facility that provides counseling to poor people, mainly Latinos,
in their own language and in their own community. Dr. Inclan offered the center's
conference room for a classroom. We would put three metal tables end to end to
approximate the boat-shaped tables used in discussion sections at the University of
Chicago of the Hutchins era,(n1) which I used as a model for the course. A card table
in the back of the room would hold a coffeemaker and a few cookies. The setting was
not elegant, but it would do. And the front wall was covered by a floor-to-ceiling
blackboard.
Now the course lacked only students and teachers. With no funds and a budget that
grew every time a new idea for the course crossed my mind, I would have to ask the
faculty to donate its time and effort. Moreover, when Hutchins said, "The best
education for the best is the best education for us all," he meant it: he insisted that
full professors teach discussion sections in the college. If the Clemente Course in the
Humanities was to follow the same pattern, it would require a faculty with the
knowledge and prestige that students might encounter in their first year at
Harvard, Yale, Princeton, or Chicago.
I turned first to the novelist Charles Simmons. He had been assistant editor of The
New York Times Book Review and had taught at Columbia University. He
volunteered to teach poetry, beginning with simple poems, Housman, and ending
with Latin poetry. Grace Glueck, who writes art news and criticism for the New
York Times, planned a course that began with cave paintings and ended in the late
twentieth century. Timothy Koranda, who did his graduate work at MIT, had
published journal articles on mathematical logic, but he had been away from his
field for some years and looked forward to getting back to it. I planned to teach the
American history course through documents, beginning with the Magna Carta,
moving on to the second of Locke's Two Treatises of Government, the Declaration of
Independence, and so on through the documents of the Civil War. I would also teach
the political philosophy class.
Since I was a naif in this endeavor, it did not immediately occur to me that
recruiting students would present a problem. I didn't know how many I needed. All I
had were criteria for selection:
Age: 18-35.
Household income: Less than 150 percent of the Census Bureau's Official Poverty
Threshold (though this was to change slightly).
Educational level: Ability to read a tabloid newspaper.
Educational goals: An expression of intent to complete the course.
Dr. Inclan arranged a meeting of community activists who could help recruit
students. Lynette Lauretig of The Door, a program that provides medical and
educational services to adolescents, and Angel Roman of the Grand Street
Settlement, which offers work and training and GED programs, were both willing to
give us access to prospective students. They also pointed out some practical
considerations. The course had to provide bus and subway tokens, because fares
ranged between three and six dollars per class per student, and the students could
not afford sixty or even thirty dollars a month for transportation. We also had to
offer dinner or a snack, because the classes were to be held from 6:00 to 7:30 P.M.
The first recruiting session came only a few days later. Nancy Mamis-King,
associate executive director of the Neighborhood Youth & Family Services program
in the South Bronx, had identified some Clemente Course candidates and had
assembled about twenty of her clients and their supervisors in a circle of chairs in a
conference room. Everyone in the room was black or Latino, with the exception of
one social worker and me.
After I explained the idea of the course, the white social worker was the first to ask a
question: "Are you going to teach African history?"
"No. We'll be teaching a section on American history, based on documents, as I said.
We want to teach the ideas of history so that--"
"You have to teach African history."
"This is America, so we'll teach American history. If we were in Africa, I would teach
African history, and if we were in China, I would teach Chinese history."
"You're indoctrinating people in Western culture."
I tried to get beyond her. "We'll study African art," I said, "as it affects art in
America. We'll study American history and literature; you can't do that without
studying African-American culture, because culturally all Americans are black as
well as white, Native American, Asian, and so on." It was no use; not one of them
applied for admission to the course.
A few days later Lynette Lauretig arranged a meeting with some of her staff at The
Door. We disagreed about the course. They thought it should be taught at a much
lower level. Although I could not change their views, they agreed to assemble a
group of Door members who might be interested in the humanities.
On an early evening that same week, about twenty prospective students were
scheduled to meet in a classroom at The Door. Most of them came late. Those who
arrived first slumped in their chairs, staring at the floor or greeting me with sullen
glances. A few ate candy or what appeared to be the remnants of a meal. The
students were mostly black and Latino, one was Asian, and five were white; two of
the whites were immigrants who had severe problems with English. When I
introduced myself, several of the students would not shake my hand, two or three
refused even to look at me, one girl giggled, and the last person to volunteer his
name, a young man dressed in a Tommy Hilfiger sweatshirt and wearing a cap
turned sideways, drawled, "Henry Jones, but they call me Sleepy, because I got
these sleepy eyes--"
"In our class, we'll call you Mr. Jones."
He smiled and slid down in his chair so that his back was parallel to the floor.
Before I finished attempting to shake hands with the prospective students, a
waiflike Asian girl with her mouth half-full of cake said, "Can we get on with it? I'm
bored."
I liked the group immediately.
Having failed in the South Bronx, I resolved to approach these prospective students
differently. "You've been cheated," I said. "Rich people learn the humanities; you
didn't. The humanities are a foundation for getting along in the world, for thinking,
for learning to reflect on the world instead of just reacting to whatever force is
turned against you. I think the humanities are one of the ways to become political,
and I don't mean political in the sense of voting in an election but in the broad
sense." I told them Thucydides' definition of politics.
"Rich people know politics in that sense. They know how to negotiate instead of
using force. They know how to use politics to get along, to get power. It doesn't mean
that rich people are good and poor people are bad. It simply means that rich people
know a more effective method for living in this society.
"Do all rich people, or people who are in the middle, know the humanities? Not a
chance. But some do. And it helps. It helps to live better and enjoy life more. Will the
humanities make you rich? Yes. Absolutely. But not in terms of money. In terms of
life.
"Rich people learn the humanities in private schools and expensive universities. And
that's one of the ways in which they learn the political life. I think that is the real
difference between the haves and have-nots in this country. If you want real power,
legitimate power, the kind that comes from the people and belongs to the people, you
must understand politics. The humanities will help.
"Here's how it works: We'll pay your subway fare; take care of your children, if you
have them; give you a snack or a sandwich; provide you with books and any other
materials you need. But we'll make you think harder, use your mind more fully,
than you ever have before. You'll have to read and think about the same kinds of
ideas you would encounter in a first-year course at Harvard or Yale or Oxford.
"You'll have to come to class in the snow and the rain and the cold and the dark. No
one will coddle you, no one will slow down for you. There will be tests to take, papers
to write. And I can't promise you anything but a certificate of completion at the end
of the course. I'll be talking to colleges about giving credit for the course, but I can't
promise anything. If you come to the Clemente Course, you must do it because you
want to study the humanities, because you want a certain kind of life, a richness of
mind and spirit. That's all I offer you: philosophy, poetry, art history, logic, rhetoric,
and American history.
"Your teachers will all be people of accomplishment in their fields," I said, and I
spoke a little about each teacher. "That's the course. October through May, with a
two-week break at Christmas. It is generally accepted in America that the liberal
arts and the humanities in particular belong to the elites. I think you're the elites."
The young Asian woman said, "What are you getting out of this?"
"This is a demonstration project. I'm writing a book. This will be proof, I hope, of my
idea about the humanities. Whether it succeeds or fails will be up to the teachers
and you."
All but one of the prospective students applied for admission to the course.
I repeated the new presentation at the Grand Street Settlement and at other places
around the city. There were about fifty candidates for the thirty positions in the
course. Personal interviews began in early September.
Meanwhile, almost all of my attempts to raise money had failed. Only the novelist
Starling Lawrence, who is also editor in chief of W. W. Norton, which had contracted
to publish the book; the publishing house itself; and a small, private family
foundation supported the experiment. We were far short of our budgeted expenses,
but my wife, Sylvia, and I agreed that the cost was still very low, and we decided to
go ahead.
Of the fifty prospective students who showed up at the Clemente Center for personal
interviews, a few were too rich (a postal supervisor's son, a fellow who claimed his
father owned a factory in Nigeria that employed sixty people) and more than a few
could not read. Two home-care workers from Local 1199 could not arrange their
hours to enable them to take the course. Some of the applicants were too young: a
thirteen-year-old and two who had just turned sixteen.
Lucia Medina, a woman with five children who told me that she often answered the
door at the single-room occupancy hotel where she lived with a butcher knife in her
hand, was the oldest person accepted into the course. Carmen Quinones, a
recovering addict who had spent time in prison, was the next eldest. Both were in
their early thirties.
The interviews went on for days.
Abel Lomas(n2) shared an apartment and worked part-time wrapping packages at
Macy's. His father had abandoned the family when Abel was born. His mother was
murdered by his stepfather when Abel was thirteen. With no one to turn to and no
place to stay, he lived on the streets, first in Florida, then back in New York City. He
used the tiny stipend from his mother's Social Security to keep himself alive.
After the recruiting session at The Door, I drove up Sixth Avenue from Canal Street
with Abel, and we talked about ethics. He had a street tough's delivery, spitting out
his ideas in crudely formed sentences of four, five, eight words, strings of blunt
declarations, with never a dependent clause to qualify his thoughts. He did not clear
his throat with badinage, as timidity teaches us to do, nor did he waste his breath
with tact.
"What do you think about drugs?" he asked, the strangely breathless delivery
further coarsened by his Dominican accent. "My cousin is a dealer."
"I've seen a lot of people hurt by drugs."
"Your family has nothing to eat. You sell drugs. What's worse? Let your family
starve or sell drugs?"
"Starvation and drug addiction are both bad, aren't they?"
"Yes," he said, not "yeah" or "uh-huh" but a precise, almost formal "yes."
"So it's a question of the worse of two evils? How shall we decide?"
The question came up near Thirty-fourth Street, where Sixth Avenue remains
hellishly traffic-jammed well into the night. Horns honked, people flooded into the
street against the light. Buses and trucks and taxicabs threatened their way from
one lane to the next where the overcrowded avenue crosses the equally crowded
Broadway. As we passed Herald Square and made our way north again, I said,
"There are a couple of ways to look at it. One comes from Immanuel Kant, who said
that you should not do anything unless you want it to become a universal law; that
is, unless you think it's what everybody should do. So Kant wouldn't agree to selling
drugs or letting your family starve."
Again he answered with a formal "Yes."
"There's another way to look at it, which is to ask what is the greatest good for the
greatest number: in this case, keeping your family from starvation or keeping tens,
perhaps hundreds of people from losing their lives to drugs. So which is the greatest
good for the greatest number?"
"That's what I think," he said.
"What?"
"You shouldn't sell drugs. You can always get food to eat. Welfare. Something."
"You're a Kantian."
"Yes."
"You know who Kant is?"
"I think so."
We had arrived at Seventy-seventh Street, where he got out of the car to catch the
subway before I turned east. As he opened the car door and the light came on, the
almost military neatness of him struck me. He had the newly cropped hair of a
cadet. His clothes were clean, without a wrinkle. He was an orphan, a street kid, an
immaculate urchin. Within a few weeks he would be nineteen years old, the Social
Security payments would end, and he would have to move into a shelter.
Some of those who came for interviews were too poor. I did not think that was
possible when we began, and I would like not to believe it now, but it was true.
There is a point at which the level of forces that surround the poor can become
insurmountable, when there is no time or energy left to be anything but poor. Most
often I could not recruit such people for the course; when I did, they soon dropped
out.
Over the days of interviewing, a class slowly assembled. I could not then imagine
who would last the year and who would not. One young woman submitted a neatly
typed essay that said, "I was homeless once, then I lived for some time in a shelter.
Right now, I have got my own space granted by the Partnership for the Homeless.
Right now, I am living alone, with very limited means. Financially I am
overwhelmed by debts. I cannot afford all the food I need . . ."
A brother and sister, refugees from Tashkent, lived with their parents in the
farthest reaches of Queens, far beyond the end of the subway line. They had no
money, and they had been refused admission by every school to which they had
applied. I had not intended to accept immigrants or people who had difficulty with
the English language, but I took them into the class.
I also took four who had been in prison, three who were homeless, three who were
pregnant, one who lived in a drugged dream-state in which she was abused, and one
whom I had known for a long time and who was dying of AIDS. As I listened to
them, I wondered how the course would affect them. They had no public life, no
place; they lived within the surround of force, moving as fast as they could, driven by
necessity, without a moment to reflect. Why should they care about fourteenth-century Italian painting or truth tables or the death of Socrates?
Between the end of recruiting and the orientation session that would open the
course, I made a visit to Bedford Hills to talk with Niecie Walker. It was hot, and the
drive up from the city had been unpleasant. I didn't yet know Niecie very well. She
didn't trust me, and I didn't know what to make of her. While we talked, she held a
huge white pill in her hand. "For AIDS," she said.
"Are you sick?"
"My T-cell count is down. But that's neither here nor there. Tell me about the course,
Earl. What are you going to teach?"
"Moral philosophy."
"And what does that include?"
She had turned the visit into an interrogation. I didn't mind. At the end of the
conversation I would be going out into "the free world"; if she wanted our meeting to
be an interrogation, I was not about to argue. I said, "We'll begin with Plato: the
Apology, a little of the Crito, a few pages of the Phaedo so that they'll know what
happened to Socrates. Then we'll read Aristotle's Nicomachean Ethics. I also want
them to read Thucydides, particularly Pericles' Funeral Oration in order to make the
connection between ethics and politics, to lead them in the direction I hope the
course will take them. Then we'll end with Antigone, but read as moral and political
philosophy as well as drama."
"There's something missing," she said, leaning back in her chair, taking on an air of
superiority.
The drive had been long, the day was hot, the air in the room was dead and damp.
"Oh, yeah," I said, "and what's that?"
"Plato's Allegory of the Cave. How can you teach philosophy to poor people without
the Allegory of the Cave? The ghetto is the cave. Education is the light. Poor people
can understand that."
At the beginning of the orientation at the Clemente Center a week later, each
teacher spoke for a minute or two. Dr. Inclan and his research assistant, Patricia
Vargas, administered the questionnaire he had devised to measure, as best he could,
the role of force and the amount of reflection in the lives of the students. I explained
that each class was going to be videotaped as another way of documenting the
project. Then I gave out the first assignment: "In preparation for our next meeting, I
would like you to read a brief selection from Plato's Republic: the Allegory of the
Cave."
I tried to guess how many students would return for the first class. I hoped for
twenty, expected fifteen, and feared ten. Sylvia, who had agreed to share the
administrative tasks of the course, and I prepared coffee and cookies for twenty-five.
We had a plastic container filled with subway tokens. Thanks to Starling Lawrence,
we had thirty copies of Bernard Knox's Norton Book of Classical Literature, which
contained all of the texts for the philosophy section except the Republic and the
Nicomachean Ethics.
At six o'clock there were only ten students seated around the long table, but by six-fifteen the number had doubled, and a few minutes later two more straggled in out
of the dusk. I had written a time line on the blackboard, showing them the temporal
progress of thinking--from the role of myth in Neolithic societies to The Gilgamesh
Epic and forward to the Old Testament, Confucius, the Greeks, the New Testament,
the Koran, the Epic of SonJara, and ending with Nahuatl and Maya poems, which
took us up to the contact between Europe and America, where the history course
began. The time line served as context and geography as well as history: no race, no
major culture was ignored. "Let's agree," I told them, "that we are all human,
whatever our origins. And now let's go into Plato's cave."
I told them that there would be no lectures in the philosophy section of the course;
we would use the Socratic method, which is called maieutic dialogue. "'Maieutic'
comes from the Greek word for midwifery. I'll take the role of midwife in our
dialogue. Now, what do I mean by that? What does a midwife do?"
It was the beginning of a love affair, the first moment of their infatuation with
Socrates. Later, Abel Lomas would characterize that moment in his no-nonsense
fashion, saying that it was the first time anyone had ever paid attention to their
opinions.
Grace Glueck began the art history class in a darkened room lit with slides of the
Lascaux caves and next turned the students' attention to Egypt, arranging for them
to visit the Metropolitan Museum of Art to see the Temple of Dendur and the
Egyptian Galleries. They arrived at the museum on a Friday evening. Darlene Codd
brought her two-year-old son. Pearl Lau was late, as usual. One of the students, who
had told me how much he was looking forward to the museum visit, didn't show up,
which surprised me. Later I learned that he had been arrested for jumping a
turnstile in a subway station on his way to the museum and was being held in a
prison cell under the Brooklyn criminal courthouse. In the Temple of Dendur,
Samantha Smoot asked questions of Felicia Blum, a museum lecturer. Samantha
was the student who had burst out with the news, in one of the first sessions of the
course, that people in her neighborhood believed it "wasn't no use goin' to school,
because the white man wouldn't let you up no matter what." But in a hall where the
statuary was of half-human, half-animal female figures, it was Samantha who asked
what the glyphs meant, encouraging Felicia Blum to read them aloud, to translate
them into English. Toward the end of the evening, Grace led the students out of the
halls of antiquities into the Rockefeller Wing, where she told them of the connections
of culture and art in Mali, Benin, and the Pacific Islands. When the students had
collected their coats and stood together near the entrance to the museum, preparing
to leave, Samantha stood apart, a tall, slim young woman, dressed in a deerstalker
cap and a dark blue peacoat. She made an exaggerated farewell wave at us and
returned to Egypt--her ancient mirror.
Charles Simmons began the poetry class with poems as puzzles and laughs. His plan
was to surprise the class, and he did. At first he read the poems aloud to them,
interrupting himself with footnotes to bring them along. He showed them poems of
love and of seduction, and satiric commentaries on those poems by later poets. "Let
us read," the students demanded, but Charles refused. He tantalized them with the
opportunity to read poems aloud. A tug-of-war began between him and the students,
and the standoff was ended not by Charles directly but by Hector Anderson. When
Charles asked if anyone in the class wrote poetry, Hector raised his hand.
"Can you recite one of your poems for us?" Charles said.
Until that moment, Hector had never volunteered a comment, though he had spoken
well and intelligently when asked. He preferred to slouch in his chair, dressed in full
camouflage gear, wearing a nylon stocking over his hair and eating slices of fresh
cantaloupe or honeydew melon.
In response to Charles's question, Hector slid up to a sitting position. "If you turn
that camera off," he said. "I don't want anybody using my lyrics." When he was sure
the red light of the video camera was off, Hector stood and recited verse after verse
of a poem that belonged somewhere in the triangle formed by Ginsberg's Howl, the
Book of Lamentations, and hip-hop. When Charles and the students finished
applauding, they asked Hector to say the poem again, and he did. Later Charles told
me, "That kid is the real thing." Hector's discomfort with Sylvia and me turned to
ease. He came to our house for a small Christmas party and at other times. We
talked on the telephone about a scholarship program and about what steps he
should take next in his education. I came to know his parents. As a student, he
began quietly, almost secretly, to surpass many of his classmates.
Timothy Koranda was the most professorial of the professors. He arrived precisely
on time, wearing a hat of many styles--part fedora, part Borsalino, part Stetson, and
at least one-half World War I campaign hat. He taught logic during class hours,
filling the blackboard from floor to ceiling, wall to wall, drawing the intersections of
sets here and truth tables there and a great square of oppositions in the middle of it
all. After class, he walked with students to the subway, chatting about Zen or logic
or Heisenberg.
On one of the coldest nights of the winter, he introduced the students to logic
problems stated in ordinary language that they could solve by reducing the phrases
to symbols. He passed out copies of a problem, two pages long, then wrote out some
of the key phrases on the blackboard. "Take this home with you," he said, "and at
our next meeting we shall see who has solved it. I shall also attempt to find the
answer."
By the time he finished writing out the key phrases, however, David Iskhakov raised
his hand. Although they listened attentively, neither David nor his sister Susana
spoke often in class. She was shy, and he was embarrassed at his inability to speak
perfect English.
"May I go to blackboard?" David said. "And will see if I have found correct answer to
zis problem."
Together Tim and David erased the blackboard, then David began covering it with
signs and symbols. "If first man is earning this money, and second man is closer to
this town . . . ," he said, carefully laying out the conditions. After five minutes or so,
he said, "And the answer is: B will get first to Cleveland!"
Samantha Smoot shouted, "That's not the answer. The mistake you made is in the
first part there, where it says who earns more money."
Tim folded his arms across his chest, happy. "I shall let you all take the problem
home," he said.
When Sylvia and I left the Clemente Center that night, a knot of students was
gathered outside, huddled against the wind. Snow had begun to fall, a slippery
powder on the gray ice that covered all but a narrow space down the center of the
sidewalk. Samantha and David stood in the middle of the group, still arguing over
the answer to the problem. I leaned in for a moment to catch the character of the
argument. It was even more polite than it had been in the classroom, because now
they governed themselves.
One Saturday morning in January, David Howell telephoned me at home. "Mr.
Shores," he said, Anglicizing my name, as many of the students did.
"Mr. Howell," I responded, recognizing his voice.
"How you doin', Mr. Shores?"
"I'm fine. How are you?"
"I had a little problem at work."
Uh-oh, I thought, bad news was coming. David is a big man, generally good-humored
but with a quick temper. According to his mother, he had a history of violent
behavior. In the classroom he had been one of the best students, a steady man,
twenty-four years old, who always did the reading assignments and who often made
interesting connections between the humanities and daily life. "What happened?"
"Mr. Shores, there's a woman at my job, she said some things to me and I said some
things to her. And she told my supervisor I had said things to her, and he called me
in about it. She's forty years old and she don't have no social life, and I have a good
social life, and she's jealous of me."
"And then what happened?" The tone of his voice and the timing of the call did not
portend good news.
"Mr. Shores, she made me so mad, I wanted to smack her up against the wall. I tried
to talk to some friends to calm myself down a little, but nobody was around."
"And what did you do?" I asked, fearing this was his one telephone call from the city
jail.
"Mr. Shores, I asked myself, `What would Socrates do?'"
David Howell had reasoned that his co-worker's envy was not his problem after all,
and he had dropped his rage.
One evening, in the American history section, I was telling the students about
Gordon Wood's ideas in The Radicalism of the American Revolution. We were
talking about the revolt by some intellectuals against classical learning at the turn
of the eighteenth century, including Benjamin Franklin's late-life change of heart,
when Henry Jones raised his hand.
"If the Founders loved the humanities so much, how come they treated the natives
so badly?"
I didn't know how to answer this question. There were confounding explanations to
offer about changing attitudes toward Native Americans, vaguely useful references
to views of Rousseau and James Fenimore Cooper. For a moment I wondered if I
should tell them about Heidegger's Nazi past. Then I saw Abel Lomas's raised hand
at the far end of the table. "Mr. Lomas," I said.
Abel said, "That's what Aristotle means by incontinence, when you know what's
morally right but you don't do it, because you're overcome by your passions."
The other students nodded. They were all inheritors of wounds caused by the
incontinence of educated men; now they had an ally in Aristotle, who had given
them a way to analyze the actions of their antagonists.
Those who appreciate ancient history understand the radical character of the
humanities. They know that politics did not begin in a perfect world but in a society
even more flawed than ours: one that embraced slavery, denied the rights of women,
practiced a form of homosexuality that verged on pedophilia, and endured the
intrigues and corruption of its leaders. The genius of that society originated in man's
re-creation of himself through the recognition of his humanness as expressed in art,
literature, rhetoric, philosophy, and the unique notion of freedom. At that moment,
the isolation of the private life ended and politics began.
The winners in the game of modern society, and even those whose fortune falls in the
middle, have other means to power: they are included at birth. They know this. And
they know exactly what to do to protect their place in the economic and social
hierarchy. As Allan Bloom, author of the nationally best-selling tract in defense of
elitism, The Closing of the American Mind, put it, they direct the study of the
humanities exclusively at those young people who "have been raised in comfort and
with the expectation of ever increasing comfort."
In the last meeting before graduation, the Clemente students answered the same set
of questions they'd answered at orientation. Between October and May, students
had fallen to AIDS, pregnancy, job opportunities, pernicious anemia, clinical
depression, a schizophrenic child, and other forces, but of the thirty students
admitted to the course, sixteen had completed it, and fourteen had earned credit
from Bard College. Dr. Inclan found that the students' self-esteem and their abilities
to divine and solve problems had significantly increased; their use of verbal
aggression as a tactic for resolving conflicts had significantly decreased. And they all
had notably more appreciation for the concepts of benevolence, spirituality,
universalism, and collectivism.
It cost about $2,000 for a student to attend the Clemente Course. Compared with
unemployment, welfare, or prison, the humanities are a bargain. But coming into
possession of the faculty of reflection and the skills of politics leads to a choice for
the poor--and whatever they choose, they will be dangerous: they may use politics to
get along in a society based on the game, to escape from the surround of force into a
gentler life, to behave as citizens, and nothing more; or they may choose to oppose
the game itself. No one can predict the effect of politics, although we all would like to
think that wisdom goes our way. That is why the poor are so often mobilized and so
rarely politicized. The possibility that they will adopt a moral view other than that
of their mentors can never be discounted. And who wants to run that risk?
On the night of the first Clemente Course graduation, the students and their
families filled the eighty-five chairs we crammed into the conference room where
classes had been held. Robert Martin, associate dean of Bard College, read the
graduates' names. David Dinkins, the former mayor of New York City, handed out
the diplomas. There were speeches and presentations. The students gave me a
plaque on which they had misspelled my name. I offered a few words about each
student, congratulated them, and said finally, "This is what I wish for you: May you
never be more active than when you are doing nothing . . ." I saw their smiles of
recognition at the words of Cato, which I had written on the blackboard early in the
course. They could recall again too the moment when we had come to the
denouement of Aristotle's brilliantly constructed thriller, the Nicomachean Ethics--the idea that in the contemplative life man was most like God. One or two, perhaps more, of the students closed their eyes. In the momentary stillness of the room it
was possible to think.
The Clemente Course in the Humanities ended a second year in June 1997. Twenty-eight new students had enrolled; fourteen graduated. Another version of the course
will begin this fall in Yucatan, Mexico, using classical Maya literature in Maya.
On May 14, 1997, Viniece Walker came up for parole for the second time. She had
served more than ten years of her sentence, and she had been the best of prisoners.
In a version of the Clemente Course held at the prison, she had been my teaching
assistant. After a brief hearing, her request for parole was denied. She will serve two
more years before the parole board will reconsider her case.
A year after graduation, ten of the first sixteen Clemente Course graduates were
attending four-year colleges or going to nursing school; four of them had received full
scholarships to Bard College. The other graduates were attending community college
or working full-time. Except for one: she had been fired from her job in a fast-food
restaurant for trying to start a union.
(n1) Under the guidance of Robert Maynard Hutchins (1929-1951), the University of
Chicago required year-long courses in the humanities, social sciences, and natural
sciences for the Bachelor of Arts degree. Hutchins developed the curriculum with the
help of Mortimer Adler, among others; the Hutchins courses later influenced Adler's
Great Books program.
How Did Economists Get It So Wrong?
By PAUL KRUGMAN
I. MISTAKING BEAUTY FOR TRUTH
It’s hard to believe now, but not long ago economists were congratulating
themselves over the success of their field. Those successes — or so they
believed — were both theoretical and practical, leading to a golden era for the
profession. On the theoretical side, they thought that they had resolved their
internal disputes. Thus, in a 2008 paper titled “The State of Macro” (that is,
macroeconomics, the study of big-picture issues like recessions), Olivier
Blanchard of M.I.T., now the chief economist at the International Monetary
Fund, declared that “the state of macro is good.” The battles of yesteryear, he
said, were over, and there had been a “broad convergence of vision.” And in
the real world, economists believed they had things under control: the
“central problem of depression-prevention has been solved,” declared Robert
Lucas of the University of Chicago in his 2003 presidential address to the
American Economic Association. In 2004, Ben Bernanke, a former Princeton
professor who is now the chairman of the Federal Reserve Board, celebrated
the Great Moderation in economic performance over the previous two
decades, which he attributed in part to improved economic policy making.
Last year, everything came apart.
Few economists saw our current crisis coming, but this predictive failure was
the least of the field’s problems. More important was the profession’s
blindness to the very possibility of catastrophic failures in a market economy.
During the golden years, financial economists came to believe that markets
were inherently stable — indeed, that stocks and other assets were always
priced just right. There was nothing in the prevailing models suggesting the
possibility of the kind of collapse that happened last year. Meanwhile,
macroeconomists were divided in their views. But the main division was
between those who insisted that free-market economies never go astray and
those who believed that economies may stray now and then but that any
major deviations from the path of prosperity could and would be corrected by
the all-powerful Fed. Neither side was prepared to cope with an economy that
went off the rails despite the Fed’s best efforts.
And in the wake of the crisis, the fault lines in the economics profession have
yawned wider than ever. Lucas says the Obama administration’s stimulus
plans are “schlock economics,” and his Chicago colleague John Cochrane says
they’re based on discredited “fairy tales.” In response, Brad DeLong of the
University of California, Berkeley, writes of the “intellectual collapse” of the
Chicago School, and I myself have written that comments from Chicago
economists are the product of a Dark Age of macroeconomics in which hard-won knowledge has been forgotten.
What happened to the economics profession? And where does it go from here?
As I see it, the economics profession went astray because economists, as a
group, mistook beauty, clad in impressive-looking mathematics, for truth.
Until the Great Depression, most economists clung to a vision of capitalism
as a perfect or nearly perfect system. That vision wasn’t sustainable in the
face of mass unemployment, but as memories of the Depression faded,
economists fell back in love with the old, idealized vision of an economy in
which rational individuals interact in perfect markets, this time gussied up
with fancy equations. The renewed romance with the idealized market was,
to be sure, partly a response to shifting political winds, partly a response to
financial incentives. But while sabbaticals at the Hoover Institution and job
opportunities on Wall Street are nothing to sneeze at, the central cause of the
profession’s failure was the desire for an all-encompassing, intellectually
elegant approach that also gave economists a chance to show off their
mathematical prowess.
Unfortunately, this romanticized and sanitized vision of the economy led
most economists to ignore all the things that can go wrong. They turned a
blind eye to the limitations of human rationality that often lead to bubbles
and busts; to the problems of institutions that run amok; to the imperfections
of markets — especially financial markets — that can cause the economy’s
operating system to undergo sudden, unpredictable crashes; and to the
dangers created when regulators don’t believe in regulation.
It’s much harder to say where the economics profession goes from here. But
what’s almost certain is that economists will have to learn to live with
messiness. That is, they will have to acknowledge the importance of
irrational and often unpredictable behavior, face up to the often idiosyncratic
imperfections of markets and accept that an elegant economic “theory of
everything” is a long way off. In practical terms, this will translate into more
cautious policy advice — and a reduced willingness to dismantle economic
safeguards in the faith that markets will solve all problems.
II. FROM SMITH TO KEYNES AND BACK
The birth of economics as a discipline is usually credited to Adam Smith, who
published “The Wealth of Nations” in 1776. Over the next 160 years an
extensive body of economic theory was developed, whose central message
was: Trust the market. Yes, economists admitted that there were cases in
which markets might fail, of which the most important was the case of
“externalities” — costs that people impose on others without paying the price,
like traffic congestion or pollution. But the basic presumption of
“neoclassical” economics (named after the late-19th-century theorists who
elaborated on the concepts of their “classical” predecessors) was that we
should have faith in the market system.
This faith was, however, shattered by the Great Depression. Actually, even in
the face of total collapse some economists insisted that whatever happens in
a market economy must be right: “Depressions are not simply evils,” declared
Joseph Schumpeter in 1934 — 1934! They are, he added, “forms of something
which has to be done.” But many, and eventually most, economists turned to
the insights of John Maynard Keynes for both an explanation of what had
happened and a solution to future depressions.
Keynes did not, despite what you may have heard, want the government to
run the economy. He described his analysis in his 1936 masterwork, “The
General Theory of Employment, Interest and Money,” as “moderately
conservative in its implications.” He wanted to fix capitalism, not replace it.
But he did challenge the notion that free-market economies can function
without a minder, expressing particular contempt for financial markets,
which he viewed as being dominated by short-term speculation with little
regard for fundamentals. And he called for active government intervention —
printing more money and, if necessary, spending heavily on public works —
to fight unemployment during slumps.
It’s important to understand that Keynes did much more than make bold
assertions. “The General Theory” is a work of profound, deep analysis —
analysis that persuaded the best young economists of the day. Yet the story of
economics over the past half century is, to a large degree, the story of a
retreat from Keynesianism and a return to neoclassicism. The neoclassical
revival was initially led by Milton Friedman of the University of Chicago,
who asserted as early as 1953 that neoclassical economics works well enough
as a description of the way the economy actually functions to be “both
extremely fruitful and deserving of much confidence.” But what about
depressions?
Friedman’s counterattack against Keynes began with the doctrine known as
monetarism. Monetarists didn’t disagree in principle with the idea that a
market economy needs deliberate stabilization. “We are all Keynesians now,”
Friedman once said, although he later claimed he was quoted out of context.
Monetarists asserted, however, that a very limited, circumscribed form of
government intervention — namely, instructing central banks to keep the
nation’s money supply, the sum of cash in circulation and bank deposits,
growing on a steady path — is all that’s required to prevent depressions.
Famously, Friedman and his collaborator, Anna Schwartz, argued that if the
Federal Reserve had done its job properly, the Great Depression would not
have happened. Later, Friedman made a compelling case against any
deliberate effort by government to push unemployment below its “natural”
level (currently thought to be about 4.8 percent in the United States):
excessively expansionary policies, he predicted, would lead to a combination
of inflation and high unemployment — a prediction that was borne out by the
stagflation of the 1970s, which greatly advanced the credibility of the anti-Keynesian movement.
Eventually, however, the anti-Keynesian counterrevolution went far beyond
Friedman’s position, which came to seem relatively moderate compared with
what his successors were saying. Among financial economists, Keynes’s
disparaging vision of financial markets as a “casino” was replaced by
“efficient market” theory, which asserted that financial markets always get
asset prices right given the available information. Meanwhile, many
macroeconomists completely rejected Keynes’s framework for understanding
economic slumps. Some returned to the view of Schumpeter and other
apologists for the Great Depression, viewing recessions as a good thing, part
of the economy’s adjustment to change. And even those not willing to go that
far argued that any attempt to fight an economic slump would do more harm
than good.
Not all macroeconomists were willing to go down this road: many became
self-described New Keynesians, who continued to believe in an active role for
the government. Yet even they mostly accepted the notion that investors and
consumers are rational and that markets generally get it right.
Of course, there were exceptions to these trends: a few economists challenged
the assumption of rational behavior, questioned the belief that financial
markets can be trusted and pointed to the long history of financial crises that
had devastating economic consequences. But they were swimming against
the tide, unable to make much headway against a pervasive and, in
retrospect, foolish complacency.
III. PANGLOSSIAN FINANCE
In the 1930s, financial markets, for obvious reasons, didn’t get much respect.
Keynes compared them to “those newspaper competitions in which the
competitors have to pick out the six prettiest faces from a hundred
photographs, the prize being awarded to the competitor whose choice most
nearly corresponds to the average preferences of the competitors as a whole;
so that each competitor has to pick, not those faces which he himself finds
prettiest, but those that he thinks likeliest to catch the fancy of the other
competitors.”
And Keynes considered it a very bad idea to let such markets, in which
speculators spent their time chasing one another’s tails, dictate important
business decisions: “When the capital development of a country becomes a byproduct of the activities of a casino, the job is likely to be ill-done.”
By 1970 or so, however, the study of financial markets seemed to have been
taken over by Voltaire’s Dr. Pangloss, who insisted that we live in the best of
all possible worlds. Discussion of investor irrationality, of bubbles, of
destructive speculation had virtually disappeared from academic discourse.
The field was dominated by the “efficient-market hypothesis,” promulgated
by Eugene Fama of the University of Chicago, which claims that financial
markets price assets precisely at their intrinsic worth given all publicly
available information. (The price of a company’s stock, for example, always
accurately reflects the company’s value given the information available on
the company’s earnings, its business prospects and so on.) And by the 1980s,
finance economists, notably Michael Jensen of the Harvard Business School,
were arguing that because financial markets always get prices right, the best
thing corporate chieftains can do, not just for themselves but for the sake of
the economy, is to maximize their stock prices. In other words, finance
economists believed that we should put the capital development of the nation
in the hands of what Keynes had called a “casino.”
It’s hard to argue that this transformation in the profession was driven by
events. True, the memory of 1929 was gradually receding, but there
continued to be bull markets, with widespread tales of speculative excess,
followed by bear markets. In 1973-4, for example, stocks lost 48 percent of
their value. And the 1987 stock crash, in which the Dow plunged nearly 23
percent in a day for no clear reason, should have raised at least a few doubts
about market rationality.
These events, however, which Keynes would have considered evidence of the
unreliability of markets, did little to blunt the force of a beautiful idea. The
theoretical model that finance economists developed by assuming that every
investor rationally balances risk against reward — the so-called Capital
Asset Pricing Model, or CAPM (pronounced cap-em) — is wonderfully
elegant. And if you accept its premises it’s also extremely useful. CAPM not
only tells you how to choose your portfolio — even more important from the
financial industry’s point of view, it tells you how to put a price on financial
derivatives, claims on claims. The elegance and apparent usefulness of the
new theory led to a string of Nobel prizes for its creators, and many of the
theory’s adepts also received more mundane rewards: Armed with their new
models and formidable math skills — the more arcane uses of CAPM require
physicist-level computations — mild-mannered business-school professors
could and did become Wall Street rocket scientists, earning Wall Street
paychecks.
To be fair, finance theorists didn’t accept the efficient-market hypothesis
merely because it was elegant, convenient and lucrative. They also produced
a great deal of statistical evidence, which at first seemed strongly supportive.
But this evidence was of an oddly limited form. Finance economists rarely
asked the seemingly obvious (though not easily answered) question of
whether asset prices made sense given real-world fundamentals like
earnings. Instead, they asked only whether asset prices made sense given
other asset prices. Larry Summers, now the top economic adviser in the
Obama administration, once mocked finance professors with a parable about
“ketchup economists” who “have shown that two-quart bottles of ketchup
invariably sell for exactly twice as much as one-quart bottles of ketchup,” and
conclude from this that the ketchup market is perfectly efficient.
But neither this mockery nor more polite critiques from economists like
Robert Shiller of Yale had much effect. Finance theorists continued to believe
that their models were essentially right, and so did many people making real-world decisions. Not least among these was Alan Greenspan, who was then
the Fed chairman and a long-time supporter of financial deregulation whose
rejection of calls to rein in subprime lending or address the ever-inflating
housing bubble rested in large part on the belief that modern financial
economics had everything under control. There was a telling moment in 2005,
at a conference held to honor Greenspan’s tenure at the Fed. One brave
attendee, Raghuram Rajan (of the University of Chicago, surprisingly),
presented a paper warning that the financial system was taking on
potentially dangerous levels of risk. He was mocked by almost all present —
including, by the way, Larry Summers, who dismissed his warnings as
“misguided.”
By October of last year, however, Greenspan was admitting that he was in a
state of “shocked disbelief,” because “the whole intellectual edifice” had
“collapsed.” Since this collapse of the intellectual edifice was also a collapse of
real-world markets, the result was a severe recession — the worst, by many
measures, since the Great Depression. What should policy makers do?
Unfortunately, macroeconomics, which should have been providing clear
guidance about how to address the slumping economy, was in its own state of
disarray.
IV. THE TROUBLE WITH MACRO
“We have involved ourselves in a colossal muddle, having blundered in the
control of a delicate machine, the working of which we do not understand.
The result is that our possibilities of wealth may run to waste for a time —
perhaps for a long time.” So wrote John Maynard Keynes in an essay titled
“The Great Slump of 1930,” in which he tried to explain the catastrophe then
overtaking the world. And the world’s possibilities of wealth did indeed run to
waste for a long time; it took World War II to bring the Great Depression to a
definitive end.
Why was Keynes’s diagnosis of the Great Depression as a “colossal muddle”
so compelling at first? And why did economics, circa 1975, divide into
opposing camps over the value of Keynes’s views?
I like to explain the essence of Keynesian economics with a true story that
also serves as a parable, a small-scale version of the messes that can afflict
entire economies. Consider the travails of the Capitol Hill Baby-Sitting Co-op.
This co-op, whose problems were recounted in a 1977 article in The Journal of
Money, Credit and Banking, was an association of about 150 young couples
who agreed to help one another by baby-sitting for one another’s children
when parents wanted a night out. To ensure that every couple did its fair
share of baby-sitting, the co-op introduced a form of scrip: coupons made out
of heavy pieces of paper, each entitling the bearer to one half-hour of sitting
time. Initially, members received 20 coupons on joining and were required to
return the same amount on departing the group.
Unfortunately, it turned out that the co-op’s members, on average, wanted to
hold a reserve of more than 20 coupons, perhaps, in case they should want to
go out several times in a row. As a result, relatively few people wanted to
spend their scrip and go out, while many wanted to baby-sit so they could add
to their hoard. But since baby-sitting opportunities arise only when someone
goes out for the night, this meant that baby-sitting jobs were hard to find,
which made members of the co-op even more reluctant to go out, making
baby-sitting jobs even scarcer. . . .
In short, the co-op fell into a recession.
O.K., what do you think of this story? Don’t dismiss it as silly and trivial:
economists have used small-scale examples to shed light on big questions
ever since Adam Smith saw the roots of economic progress in a pin factory,
and they’re right to do so. The question is whether this particular example, in
which a recession is a problem of inadequate demand — there isn’t enough
demand for baby-sitting to provide jobs for everyone who wants one — gets at
the essence of what happens in a recession.
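Krugman's parable can be made concrete with a small simulation. The sketch below is not part of his essay: the 150 couples and the 20 starting coupons come from the story, but the 50 percent chance that a couple wants a night out in a given week, the two desired-reserve figures, the convention that a night out costs a single coupon, and the random matching of spenders with sitters are all illustrative assumptions. What it reproduces is the feedback he describes: once members want to hold more scrip than actually exists, spending stops, and baby-sitting jobs disappear with it.

import random

# A toy model of the baby-sitting co-op, under the assumptions named above.
random.seed(1)

N_COUPLES = 150
coupons = [20] * N_COUPLES   # every couple starts with 20 half-hour coupons
WANT_OUT_PROB = 0.5          # assumed chance a couple wants a night out this week

def run_week(desired_reserve):
    """Match would-be spenders with sitters for one week; return nights out."""
    # A couple spends a coupon only if that still leaves it at or above its
    # desired reserve; every other couple is available to baby-sit.
    spenders = [i for i in range(N_COUPLES)
                if random.random() < WANT_OUT_PROB
                and coupons[i] > desired_reserve]
    sitters = [i for i in range(N_COUPLES) if i not in spenders]
    nights_out = min(len(spenders), len(sitters))
    for goer, sitter in zip(spenders[:nights_out], sitters[:nights_out]):
        coupons[goer] -= 1   # the night out costs one coupon
        coupons[sitter] += 1 # the sitter earns one coupon
    return nights_out

# Phase 1: members are content with a modest cushion, so scrip circulates freely.
for week in range(1, 11):
    print(f"week {week:2d} (reserve 15): nights out = {run_week(15)}")

# Phase 2: members now want to hold more than the 20 coupons per couple that
# exist in total, so spending, and with it baby-sitting work, dries up.
for week in range(11, 21):
    print(f"week {week:2d} (reserve 25): nights out = {run_week(25)}")

The sketch holds the value of a coupon fixed, which is where the freshwater rejoinder described a few paragraphs below comes in: in that view the coupon's price would simply adjust, and that disagreement is the subject of the rest of this section.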
Forty years ago most economists would have agreed with this interpretation.
But since then macroeconomics has divided into two great factions:
“saltwater” economists (mainly in coastal U.S. universities), who have a more
or less Keynesian vision of what recessions are all about; and “freshwater”
economists (mainly at inland schools), who consider that vision nonsense.
Freshwater economists are, essentially, neoclassical purists. They believe
that all worthwhile economic analysis starts from the premise that people are
rational and markets work, a premise violated by the story of the baby-sitting
co-op. As they see it, a general lack of sufficient demand isn’t possible,
because prices always move to match supply with demand. If people want
more baby-sitting coupons, the value of those coupons will rise, so that
they’re worth, say, 40 minutes of baby-sitting rather than half an hour — or,
equivalently, the cost of an hour's baby-sitting would fall from 2 coupons to 1.5.
1.5. And that would solve the problem: the purchasing power of the coupons
in circulation would have risen, so that people would feel no need to hoard
more, and there would be no recession.
But don’t recessions look like periods in which there just isn’t enough demand
to employ everyone willing to work? Appearances can be deceiving, say the
freshwater theorists. Sound economics, in their view, says that overall
failures of demand can’t happen — and that means that they don’t.
Keynesian economics has been “proved false,” Cochrane, of the University of
Chicago, says.
Yet recessions do happen. Why? In the 1970s the leading freshwater
macroeconomist, the Nobel laureate Robert Lucas, argued that recessions
were caused by temporary confusion: workers and companies had trouble
distinguishing overall changes in the level of prices because of inflation or
deflation from changes in their own particular business situation. And Lucas
warned that any attempt to fight the business cycle would be
counterproductive: activist policies, he argued, would just add to the
confusion.
By the 1980s, however, even this severely limited acceptance of the idea that
recessions are bad things had been rejected by many freshwater economists.
Instead, the new leaders of the movement, especially Edward Prescott, who
was then at the University of Minnesota (you can see where the freshwater
moniker comes from), argued that price fluctuations and changes in demand
actually had nothing to do with the business cycle. Rather, the business cycle
reflects fluctuations in the rate of technological progress, which are amplified
by the rational response of workers, who voluntarily work more when the
environment is favorable and less when it’s unfavorable. Unemployment is a
deliberate decision by workers to take time off.
Put baldly like that, this theory sounds foolish — was the Great Depression
really the Great Vacation? And to be honest, I think it really is silly. But the
basic premise of Prescott’s “real business cycle” theory was embedded in
ingeniously constructed mathematical models, which were mapped onto real
data using sophisticated statistical techniques, and the theory came to
dominate the teaching of macroeconomics in many university departments.
In 2004, reflecting the theory’s influence, Prescott shared a Nobel with Finn
Kydland of Carnegie Mellon University.
Meanwhile, saltwater economists balked. Where the freshwater economists
were purists, saltwater economists were pragmatists. While economists like
N. Gregory Mankiw at Harvard, Olivier Blanchard at M.I.T. and David
Romer at the University of California, Berkeley, acknowledged that it was
hard to reconcile a Keynesian demand-side view of recessions with
neoclassical theory, they found the evidence that recessions are, in fact,
demand-driven too compelling to reject. So they were willing to deviate from
the assumption of perfect markets or perfect rationality, or both, adding
enough imperfections to accommodate a more or less Keynesian view of
recessions. And in the saltwater view, active policy to fight recessions
remained desirable.
But the self-described New Keynesian economists weren’t immune to the
charms of rational individuals and perfect markets. They tried to keep their
deviations from neoclassical orthodoxy as limited as possible. This meant
that there was no room in the prevailing models for such things as bubbles
and banking-system collapse. The fact that such things continued to happen
in the real world — there was a terrible financial and macroeconomic crisis in
much of Asia in 1997-8 and a depression-level slump in Argentina in 2002 —
wasn’t reflected in the mainstream of New Keynesian thinking.
Even so, you might have thought that the differing worldviews of freshwater
and saltwater economists would have put them constantly at loggerheads
over economic policy. Somewhat surprisingly, however, between around 1985
and 2007 the disputes between freshwater and saltwater economists were
mainly about theory, not action. The reason, I believe, is that New
Keynesians, unlike the original Keynesians, didn’t think fiscal policy —
changes in government spending or taxes — was needed to fight recessions.
They believed that monetary policy, administered by the technocrats at the
Fed, could provide whatever remedies the economy needed. At a 90th
birthday celebration for Milton Friedman, Ben Bernanke, formerly a more or
less New Keynesian professor at Princeton, and by then a member of the
Fed’s governing board, declared of the Great Depression: “You’re right. We
did it. We’re very sorry. But thanks to you, it won’t happen again.” The clear
message was that all you need to avoid depressions is a smarter Fed.
And as long as macroeconomic policy was left in the hands of the maestro
Greenspan, without Keynesian-type stimulus programs, freshwater
economists found little to complain about. (They didn’t believe that monetary
policy did any good, but they didn’t believe it did any harm, either.)
It would take a crisis to reveal both how little common ground there was and
how Panglossian even New Keynesian economics had become.
V. NOBODY COULD HAVE PREDICTED . . .
In recent, rueful economics discussions, an all-purpose punch line has become
“nobody could have predicted. . . .” It’s what you say with regard to disasters
that could have been predicted, should have been predicted and actually were
predicted by a few economists who were scoffed at for their pains.
Take, for example, the precipitous rise and fall of housing prices. Some
economists, notably Robert Shiller, did identify the bubble and warn of
painful consequences if it were to burst. Yet key policy makers failed to see
the obvious. In 2004, Alan Greenspan dismissed talk of a housing bubble: “a
national severe price distortion," he declared, was "most unlikely." Home-price increases, Ben Bernanke said in 2005, "largely reflect strong economic
fundamentals.”
How did they miss the bubble? To be fair, interest rates were unusually low,
possibly explaining part of the price rise. It may be that Greenspan and
Bernanke also wanted to celebrate the Fed’s success in pulling the economy
out of the 2001 recession; conceding that much of that success rested on the
creation of a monstrous bubble would have placed a damper on the festivities.
But there was something else going on: a general belief that bubbles just
don’t happen. What’s striking, when you reread Greenspan’s assurances, is
that they weren’t based on evidence — they were based on the a priori
assertion that there simply can’t be a bubble in housing. And the finance
theorists were even more adamant on this point. In a 2007 interview, Eugene
Fama, the father of the efficient-market hypothesis, declared that “the word
‘bubble’ drives me nuts,” and went on to explain why we can trust the
housing market: “Housing markets are less liquid, but people are very careful
when they buy houses. It’s typically the biggest investment they’re going to
make, so they look around very carefully and they compare prices. The
bidding process is very detailed.”
Indeed, home buyers generally do carefully compare prices — that is, they
compare the price of their potential purchase with the prices of other houses.
But this says nothing about whether the overall price of houses is justified.
It’s ketchup economics, again: because a two-quart bottle of ketchup costs
twice as much as a one-quart bottle, finance theorists declare that the price of
ketchup must be right.
In short, the belief in efficient financial markets blinded many if not most
economists to the emergence of the biggest financial bubble in history. And
efficient-market theory also played a significant role in inflating that bubble
in the first place.
Now that the undiagnosed bubble has burst, the true riskiness of supposedly
safe assets has been revealed and the financial system has demonstrated its
fragility. U.S. households have seen $13 trillion in wealth evaporate. More
than six million jobs have been lost, and the unemployment rate appears
headed for its highest level since 1940. So what guidance does modern
economics have to offer in our current predicament? And should we trust it?
VI. THE STIMULUS SQUABBLE
Between 1985 and 2007 a false peace settled over the field of
macroeconomics. There hadn’t been any real convergence of views between
the saltwater and freshwater factions. But these were the years of the Great
Moderation — an extended period during which inflation was subdued and
recessions were relatively mild. Saltwater economists believed that the
Federal Reserve had everything under control. Freshwater economists didn't
think the Fed’s actions were actually beneficial, but they were willing to let
matters lie.
But the crisis ended the phony peace. Suddenly the narrow, technocratic
policies both sides were willing to accept were no longer sufficient — and the
need for a broader policy response brought the old conflicts out into the open,
fiercer than ever.
Why weren’t those narrow, technocratic policies sufficient? The answer, in a
word, is zero.
During a normal recession, the Fed responds by buying Treasury bills —
short-term government debt — from banks. This drives interest rates on
government debt down; investors seeking a higher rate of return move into
other assets, driving other interest rates down as well; and normally these
lower interest rates eventually lead to an economic bounceback. The Fed
dealt with the recession that began in 1990 by driving short-term interest
rates from 9 percent down to 3 percent. It dealt with the recession that began
in 2001 by driving rates from 6.5 percent to 1 percent. And it tried to deal
with the current recession by driving rates down from 5.25 percent to zero.
But zero, it turned out, isn’t low enough to end this recession. And the Fed
can’t push rates below zero, since at near-zero rates investors simply hoard
cash rather than lending it out. So by late 2008, with interest rates basically
at what macroeconomists call the “zero lower bound” even as the recession
continued to deepen, conventional monetary policy had lost all traction.
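To make the arithmetic of the zero lower bound concrete, here is a minimal sketch in Python (an illustration added here, not something from Krugman's essay). The stylized rate rule and its numbers are invented assumptions; the only point is that once the rate a central bank would like to set falls below zero, the rate it can actually set stops responding.

# Illustrative sketch only: a stylized policy rule clipped at zero.
# The starting rate and the response coefficient are invented for the
# example; they are not the Federal Reserve's actual rule.

def policy_rate(starting_rate, output_gap, response=1.5):
    """The desired rate falls as output falls, but cannot go below zero."""
    desired = starting_rate + response * output_gap   # output_gap < 0 in a slump
    return max(0.0, desired)

print(policy_rate(5.25, -1.0))   # 3.75: a mild recession leaves room to cut
print(policy_rate(5.25, -6.0))   # 0.0: a severe slump hits the zero lower bound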
Now what? This is the second time America has been up against the zero
lower bound, the previous occasion being the Great Depression. And it was
precisely the observation that there’s a lower bound to interest rates that led
Keynes to advocate higher government spending: when monetary policy is
ineffective and the private sector can’t be persuaded to spend more, the
public sector must take its place in supporting the economy. Fiscal stimulus
is the Keynesian answer to the kind of depression-type economic situation
we’re currently in.
Such Keynesian thinking underlies the Obama administration’s economic
policies — and the freshwater economists are furious. For 25 or so years they
tolerated the Fed’s efforts to manage the economy, but a full-blown
Keynesian resurgence was something entirely different. Back in 1980, Lucas,
of the University of Chicago, wrote that Keynesian economics was so
ludicrous that “at research seminars, people don’t take Keynesian theorizing
seriously anymore; the audience starts to whisper and giggle to one another.”
Admitting that Keynes was largely right, after all, would be too humiliating a
comedown.
And so Chicago’s Cochrane, outraged at the idea that government spending
could mitigate the latest recession, declared: “It’s not part of what anybody
has taught graduate students since the 1960s. They [Keynesian ideas] are
fairy tales that have been proved false. It is very comforting in times of stress
to go back to the fairy tales we heard as children, but it doesn’t make them
less false.” (It’s a mark of how deep the division between saltwater and
freshwater runs that Cochrane doesn’t believe that “anybody” teaches ideas
that are, in fact, taught in places like Princeton, M.I.T. and Harvard.)
Meanwhile, saltwater economists, who had comforted themselves with the
belief that the great divide in macroeconomics was narrowing, were shocked
to realize that freshwater economists hadn’t been listening at all. Freshwater
economists who inveighed against the stimulus didn’t sound like scholars
who had weighed Keynesian arguments and found them wanting. Rather,
they sounded like people who had no idea what Keynesian economics was
about, who were resurrecting pre-1930 fallacies in the belief that they were
saying something new and profound.
And it wasn’t just Keynes whose ideas seemed to have been forgotten. As
Brad DeLong of the University of California, Berkeley, has pointed out in his
laments about the Chicago school’s “intellectual collapse,” the school’s current
stance amounts to a wholesale rejection of Milton Friedman’s ideas, as well.
Friedman believed that Fed policy rather than changes in government
spending should be used to stabilize the economy, but he never asserted that
an increase in government spending cannot, under any circumstances,
increase employment. In fact, rereading Friedman’s 1970 summary of his
ideas, “A Theoretical Framework for Monetary Analysis,” what’s striking is
how Keynesian it seems.
And Friedman certainly never bought into the idea that mass unemployment
represents a voluntary reduction in work effort or the idea that recessions are
actually good for the economy. Yet the current generation of freshwater
economists has been making both arguments. Thus Chicago’s Casey Mulligan
suggests that unemployment is so high because many workers are choosing
not to take jobs: “Employees face financial incentives that encourage them
not to work . . . decreased employment is explained more by reductions in the
supply of labor (the willingness of people to work) and less by the demand for
labor (the number of workers that employers need to hire).” Mulligan has
suggested, in particular, that workers are choosing to remain unemployed
because that improves their odds of receiving mortgage relief. And Cochrane
declares that high unemployment is actually good: “We should have a
recession. People who spend their lives pounding nails in Nevada need
something else to do.”
Personally, I think this is crazy. Why should it take mass unemployment
across the whole nation to get carpenters to move out of Nevada? Can anyone
seriously claim that we’ve lost 6.7 million jobs because fewer Americans want
to work? But it was inevitable that freshwater economists would find
themselves trapped in this cul-de-sac: if you start from the assumption that
people are perfectly rational and markets are perfectly efficient, you have to
conclude that unemployment is voluntary and recessions are desirable.
Yet if the crisis has pushed freshwater economists into absurdity, it has also
created a lot of soul-searching among saltwater economists. Their framework,
unlike that of the Chicago School, both allows for the possibility of
involuntary unemployment and considers it a bad thing. But the New
Keynesian models that have come to dominate teaching and research assume
that people are perfectly rational and financial markets are perfectly
efficient. To get anything like the current slump into their models, New
Keynesians are forced to introduce some kind of fudge factor that for reasons
unspecified temporarily depresses private spending. (I’ve done exactly that in
some of my own work.) And if the analysis of where we are now rests on this
fudge factor, how much confidence can we have in the models’ predictions
about where we are going?
The state of macro, in short, is not good. So where does the profession go from
here?
VII. FLAWS AND FRICTIONS
Economics, as a field, got in trouble because economists were seduced by the
vision of a perfect, frictionless market system. If the profession is to redeem
itself, it will have to reconcile itself to a less alluring vision — that of a
market economy that has many virtues but that is also shot through with
flaws and frictions. The good news is that we don’t have to start from scratch.
Even during the heyday of perfect-market economics, there was a lot of work
done on the ways in which the real economy deviated from the theoretical
ideal. What’s probably going to happen now — in fact, it’s already happening
— is that flaws-and-frictions economics will move from the periphery of
economic analysis to its center.
There’s already a fairly well developed example of the kind of economics I
have in mind: the school of thought known as behavioral finance.
Practitioners of this approach emphasize two things. First, many real-world
investors bear little resemblance to the cool calculators of efficient-market
theory: they’re all too subject to herd behavior, to bouts of irrational
exuberance and unwarranted panic. Second, even those who try to base their
decisions on cool calculation often find that they can’t, that problems of trust,
credibility and limited collateral force them to run with the herd.
On the first point: even during the heyday of the efficient-market hypothesis,
it seemed obvious that many real-world investors aren’t as rational as the
prevailing models assumed. Larry Summers once began a paper on finance
by declaring: “THERE ARE IDIOTS. Look around.” But what kind of idiots
(the preferred term in the academic literature, actually, is “noise traders”)
are we talking about? Behavioral finance, drawing on the broader movement
known as behavioral economics, tries to answer that question by relating the
apparent irrationality of investors to known biases in human cognition, like
the tendency to care more about small losses than small gains or the
tendency to extrapolate too readily from small samples (e.g., assuming that
because home prices rose in the past few years, they’ll keep on rising).
Until the crisis, efficient-market advocates like Eugene Fama dismissed the
evidence produced on behalf of behavioral finance as a collection of “curiosity
items” of no real importance. That’s a much harder position to maintain now
that the collapse of a vast bubble — a bubble correctly diagnosed by
behavioral economists like Robert Shiller of Yale, who related it to past
episodes of “irrational exuberance” — has brought the world economy to its
knees.
On the second point: suppose that there are, indeed, idiots. How much do
they matter? Not much, argued Milton Friedman in an influential 1953
paper: smart investors will make money by buying when the idiots sell and
selling when they buy and will stabilize markets in the process. But the
second strand of behavioral finance says that Friedman was wrong, that
financial markets are sometimes highly unstable, and right now that view
seems hard to reject.
Probably the most influential paper in this vein was a 1997 publication by
Andrei Shleifer of Harvard and Robert Vishny of Chicago, which amounted to
a formalization of the old line that “the market can stay irrational longer
than you can stay solvent.” As they pointed out, arbitrageurs — the people
who are supposed to buy low and sell high — need capital to do their jobs.
And a severe plunge in asset prices, even if it makes no sense in terms of
fundamentals, tends to deplete that capital. As a result, the smart money is
forced out of the market, and prices may go into a downward spiral.
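As a rough illustration of this feedback loop (a toy sketch with invented numbers added here, not the Shleifer-Vishny model itself), the few lines of Python below follow a leveraged arbitrageur through a price drop that has nothing to do with fundamentals. Each round of losses shrinks the arbitrageur's capital, the forced sales push the price down further, and the initial drop is amplified rather than corrected.

# Toy sketch with invented parameters, not the Shleifer-Vishny model:
# a fundamentally unjustified price drop depletes a leveraged arbitrageur's
# capital, and the resulting forced sales push the price down further.
fundamental = 100.0
price = 100.0
capital = 10.0            # the arbitrageur's equity
leverage = 5.0            # position held = leverage * capital
shock = -8.0              # initial drop unrelated to fundamentals

price += shock
for round_number in range(1, 6):
    position = leverage * capital
    loss = position * (-shock / 100.0)   # mark-to-market loss on the drop
    capital = max(0.0, capital - loss)
    forced_sales = position - leverage * capital
    shock = -0.05 * forced_sales         # crude price-impact assumption
    price += shock
    print(round_number, round(price, 2), round(capital, 2))

print("price ends near", round(price, 2), "versus a fundamental value of", fundamental)
# Instead of buying the cheap asset back toward 100, the "smart money" is
# forced to shrink, and the unjustified drop gets larger, not smaller.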
The spread of the current financial crisis seemed almost like an object lesson
in the perils of financial instability. And the general ideas underlying models
of financial instability have proved highly relevant to economic policy: a focus
on the depleted capital of financial institutions helped guide policy actions
taken after the fall of Lehman, and it looks (cross your fingers) as if these
actions successfully headed off an even bigger financial collapse.
Meanwhile, what about macroeconomics? Recent events have pretty
decisively refuted the idea that recessions are an optimal response to
fluctuations in the rate of technological progress; a more or less Keynesian
view is the only plausible game in town. Yet standard New Keynesian models
left no room for a crisis like the one we’re having, because those models
generally accepted the efficient-market view of the financial sector.
There were some exceptions. One line of work, pioneered by none other than
Ben Bernanke working with Mark Gertler of New York University,
emphasized the way the lack of sufficient collateral can hinder the ability of
businesses to raise funds and pursue investment opportunities. A related line
of work, largely established by my Princeton colleague Nobuhiro Kiyotaki
and John Moore of the London School of Economics, argued that prices of
assets such as real estate can suffer self-reinforcing plunges that in turn
depress the economy as a whole. But until now the impact of dysfunctional
finance hasn’t been at the core even of Keynesian economics. Clearly, that
has to change.
VIII. RE-EMBRACING KEYNES
So here’s what I think economists have to do. First, they have to face up to
the inconvenient reality that financial markets fall far short of perfection,
that they are subject to extraordinary delusions and the madness of crowds.
Second, they have to admit — and this will be very hard for the people who
giggled and whispered over Keynes — that Keynesian economics remains the
best framework we have for making sense of recessions and depressions.
Third, they’ll have to do their best to incorporate the realities of finance into
macroeconomics.
Many economists will find these changes deeply disturbing. It will be a long
time, if ever, before the new, more realistic approaches to finance and
macroeconomics offer the same kind of clarity, completeness and sheer
beauty that characterizes the full neoclassical approach. To some economists
that will be a reason to cling to neoclassicism, despite its utter failure to
make sense of the greatest economic crisis in three generations. This seems,
however, like a good time to recall the words of H. L. Mencken: “There is
always an easy solution to every human problem — neat, plausible and
wrong.”
When it comes to the all-too-human problem of recessions and depressions,
economists need to abandon the neat but wrong solution of assuming that
everyone is rational and markets work perfectly. The vision that emerges as
the profession rethinks its foundations may not be all that clear; it certainly
won’t be neat; but we can hope that it will have the virtue of being at least
partly right.
THE EVOLUTION OF LIFE ON EARTH
By STEPHEN JAY GOULD
The history of life is not necessarily progressive; it is certainly not
predictable. The earth's creatures have evolved through a series of contingent
and fortuitous events.
Some creators announce their inventions with grand éclat. God proclaimed,
"Fiat lux," and then flooded his new universe with brightness. Others bring
forth great discoveries in a modest guise, as did Charles Darwin in defining
his new mechanism of evolutionary causality in 1859: "I have called this
principle, by which each slight variation, if useful, is preserved, by the term
Natural Selection."
Natural selection is an immensely powerful yet beautifully simple theory that
has held up remarkably well, under intense and unrelenting scrutiny and
testing, for 135 years. In essence, natural selection locates the mechanism of
evolutionary change in a "struggle" among organisms for reproductive
success, leading to improved fit of populations to changing environments.
(Struggle is often a metaphorical description and need not be viewed as overt
combat, guns blazing. Tactics for reproductive success include a variety of
non-martial activities such as earlier and more frequent mating or better
cooperation with partners in raising offspring.) Natural selection is therefore
a principle of local adaptation, not of general advance or progress.
SLAB CONTAINING SPECIMENS of Pteridinium from Namibia shows a
prominent organism from the earth's first multicellular fauna, called
Ediacaran, which appeared some 600 million years ago. The Ediacaran
animals died out before the Cambrian explosion of modern life. These thin,
quilted, sheetlike organisms may be ancestral to some modern forms but may
also represent a separate and ultimately failed experiment in multicellular
life. The history of life tends to move in quick and quirky episodes, rather
than by gradual improvement.
Yet powerful though the principle may be, natural selection is not the only
cause of evolutionary change (and may, in many cases, be overshadowed by
other forces). This point needs emphasis because the standard misapplication
of evolutionary theory assumes that biological explanation may be equated
with devising accounts, often speculative and conjectural in practice, about
the adaptive value of any given feature in its original environment (human
aggression as good for hunting, music and religion as good for tribal cohesion,
for example). Darwin himself strongly emphasized the multifactorial nature
of evolutionary change and warned against too exclusive a reliance on
natural selection, by placing the following statement in a maximally
conspicuous place at the very end of his introduction: "I am convinced that
Natural Selection has been the most important, but not the exclusive, means
of modification."
Natural selection is not fully sufficient to explain evolutionary change for two
major reasons. First, many other causes are powerful, particularly at levels of
biological organization both above and below the traditional Darwinian focus
on organisms and their struggles for reproductive success. At the lowest level
of substitution in individual base pairs of DNA, change is often effectively
neutral and therefore random. At higher levels, involving entire species or
faunas, punctuated equilibrium can produce evolutionary trends by selection
of species based on their rates of origin and extirpation, whereas mass
extinctions wipe out substantial parts of biotas for reasons unrelated to
adaptive struggles of constituent species in "normal" times between such
events.
Second, and the focus of this article, no matter how adequate our general
theory of evolutionary change, we also yearn to document and understand the
actual pathway of life's history. Theory, of course, is relevant to explaining
the pathway (nothing about the pathway can be inconsistent with good
theory, and theory can predict certain general aspects of life's geologic
pattern). But the actual pathway is strongly underdetermined by our general
theory of life's evolution. This point needs some belaboring as a central yet
widely misunderstood aspect of the world's complexity. Webs and chains of
historical events are so intricate, so imbued with random and chaotic
elements, so unrepeatable in encompassing such a multitude of unique (and
uniquely interacting) objects, that standard models of simple prediction and
replication do not apply. History can be explained, with satisfying rigor if
evidence be adequate, after a sequence of events unfolds, but it cannot be
predicted with any precision beforehand. Pierre-Simon Laplace, echoing the
growing and confident determinism of the late 18th century, once said that
he could specify all future states if he could know the position and motion of
all particles in the cosmos at any moment, but the nature of universal
complexity shatters this chimerical dream. History includes too much chaos,
or extremely sensitive dependence on minute and unmeasurable differences
in initial conditions, leading to massively divergent outcomes based on tiny
and unknowable disparities in starting points.
PROGRESS DOES NOT RULE (and is not even a primary thrust of) the
evolutionary process. For reasons of chemistry and physics, life arises next to
the "left wall" of its simplest conceivable and preservable complexity. This
style of life (bacterial) has remained most common and most successful. A few
creatures occasionally move to the right, thus extending the right tail in the
distribution of complexity. Many always move to the left, but they are
absorbed within space already occupied. Note that the bacterial mode has
never changed in position, but just grown higher.
And history includes too much contingency, or shaping of present results by
long chains of unpredictable antecedent states, rather than immediate
determination by timeless laws of nature.
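The "extremely sensitive dependence on minute and unmeasurable differences in initial conditions" that Gould describes can be seen in a few lines of code. The sketch below is an added illustration (Gould does not use this example): it iterates the logistic map, a standard textbook model of deterministic chaos, from two starting points that differ by one billionth, and within a few dozen steps the two histories have nothing in common.

# Illustrative sketch: deterministic chaos in the logistic map
# x -> r * x * (1 - x). The map is a standard example and is my choice
# of illustration; it does not appear in Gould's article.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)   # one starting point
b = logistic_trajectory(0.200000001)   # differs by one billionth

for step in (0, 10, 20, 30, 40, 50):
    print(step, round(a[step], 6), round(b[step], 6))
# The same deterministic rule, applied to two almost identical starting
# points, produces completely different values by around step 30.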
Homo sapiens did not appear on the earth, just a geologic second ago, because
evolutionary theory predicts such an outcome based on themes of progress
and increasing neural complexity. Humans arose, rather, as a fortuitous and
contingent outcome of thousands of linked events, any one of which could
have occurred differently and sent history on an alternative pathway that
would not have led to consciousness. To cite just four among a multitude: (1)
If our inconspicuous and fragile lineage had not been among the few
survivors of the initial radiation of multicellular animal life in the Cambrian
explosion 530 million years ago, then no vertebrates would have inhabited
the earth at all. (Only one member of our chordate phylum, the genus Pikaia,
has been found among these earliest fossils. This small and simple swimming
creature, showing its allegiance to us by possessing a notochord, or dorsal
stiffening rod, is among the rarest fossils of the Burgess Shale, our best
preserved Cambrian fauna.) (2) If a small and unpromising group of lobe-finned fishes had not evolved fin bones with a strong central axis capable of
bearing weight on land, then vertebrates might never have become
terrestrial. (3) If a large extraterrestrial body had not struck the earth 65
million years ago, then dinosaurs would still be dominant and mammals
insignificant (the situation that had prevailed for 100 million years
previously). (4) If a small lineage of primates had not evolved upright posture
on the drying African savannas just two to four million years ago, then our
ancestry might have ended in a line of apes that, like the chimpanzee and
gorilla today, would have become ecologically marginal and probably doomed
to extinction despite their remarkable behavioral complexity.
Therefore, to understand the events and generalities of life's pathway, we
must go beyond principles of evolutionary theory to a paleontological
examination of the contingent pattern of life's history on our planet - the
single actualized version among millions of plausible alternatives that
happened not to occur. Such a view of life's history is highly contrary both to
conventional deterministic models of Western science and to the deepest
social traditions and psychological hopes of Western culture for a history
culminating in humans as life's highest expression and intended planetary
steward.
Science can, and does, strive to grasp nature's factuality, but all science is
socially embedded, and all scientists record prevailing "certainties," however
hard they may be aiming for pure objectivity. Darwin himself, in the closing
lines of The Origin of Species, expressed Victorian social preference more
than nature's record in writing: "As natural selection works solely by and for
the good of each being, all corporeal and mental endowments will tend to
progress towards perfection."
Life's pathway certainly includes many features predictable from laws of
nature, but these aspects are too broad and general to provide the "rightness"
that we seek for validating evolution's particular results - roses, mushrooms,
people and so forth. Organisms adapt to, and are constrained by, physical
principles. It is, for example, scarcely surprising, given laws of gravity, that
the largest vertebrates in the sea (whales) exceed the heaviest animals on
land (elephants today, dinosaurs in the past), which, in turn, are far bulkier
than the largest vertebrate that ever flew (extinct pterosaurs of the Mesozoic
era).
Predictable ecological rules govern the structuring of communities by
principles of energy flow and thermodynamics (more biomass in prey than in
predators, for example). Evolutionary trends, once started, may have local
predictability ("arms races," in which both predators and prey hone their
defenses and weapons, for example - a pattern that Geerat J. Vermeij of the
University of California at Davis has called "escalation" and documented in
increasing strength of both crab claws and shells of their gastropod prey
through time). But laws of nature do not tell us why we have crabs and snails
at all, why insects rule the multicellular world and why vertebrates rather
than persistent algal mats exist as the most complex forms of life on the
earth.
Relative to the conventional view of life's history as an at least broadly
predictable process of gradually advancing complexity through time, three
features of the paleontological record stand out in opposition and shall
therefore serve as organizing themes for the rest of this article: the constancy
of modal complexity throughout life's history; the concentration of major
events in short bursts interspersed with long periods of relative stability; and
the role of external impositions, primarily mass extinctions, in disrupting
patterns of "normal" times. These three features, combined with more
general themes of chaos and contingency, require a new framework for
conceptualizing and drawing life's history, and this article therefore closes
with suggestions for a different iconography of evolution.
The primary paleontological fact about life's beginnings points to
predictability for the onset and very little for the particular pathways
thereafter. The earth is 4.6 billion years old, but the oldest rocks date to
about 3.9 billion years because the earth's surface became molten early in its
history, a result of bombardment by large amounts of cosmic debris during
the solar system's coalescence, and of heat generated by radioactive decay of
short-lived isotopes. These oldest rocks are too metamorphosed by
subsequent heat and pressure to preserve fossils (though some scientists
interpret the proportions of carbon isotopes in these rocks as signs of organic
production). The oldest rocks sufficiently unaltered to retain cellular fossils - African and Australian sediments dated to 3.5 billion years old - do preserve
prokaryotic cells (bacteria and cyanophytes) and stromatolites (mats of
sediment trapped and bound by these cells in shallow marine waters). Thus,
life on the earth evolved quickly and is as old as it could be. This fact alone
seems to indicate an inevitability, or at least a predictability, for life's origin
from the original chemical constituents of atmosphere and ocean.
No one can doubt that more complex creatures arose sequentially after this
prokaryotic beginning - first eukaryotic cells, perhaps about two billion years
ago, then multicellular animals about 600 million years ago, with a relay of
highest complexity among animals passing from invertebrates, to marine
vertebrates and, finally (if we wish, albeit parochially, to honor neural
architecture as a primary criterion), to reptiles, mammals and humans. This
is the conventional sequence represented in the old charts and texts as an
"age of invertebrates," followed by an "age of fishes," "age of reptiles," "age of
mammals," and "age of man" (to add the old gender bias to all the other
prejudices implied by this sequence).
I do not deny the facts of the preceding paragraph but wish to argue that our
conventional desire to view history as progressive, and to see humans as
predictably dominant, has grossly distorted our interpretation of life's
pathway by falsely placing in the center of things a relatively minor
phenomenon that arises only as a side consequence of a physically
constrained starting point. The most salient feature of life has been the
stability of its bacterial mode from the beginning of the fossil record until
today and, with little doubt, into all future time so long as the earth endures.
This is truly the "age of bacteria" - as it was in the beginning, is now and ever
shall be.
NEW ICONOGRAPHY OF LIFE'S TREE shows that maximal diversity in
anatomical forms (not in number of species) is reached very early in life's
multicellular history. Later times feature extinction of most of these initial
experiments and enormous success within surviving lines. This success is
measured in the proliferation of species but not in the development of new
anatomies. Today we have more species than ever before, although they are
restricted to fewer basic anatomies.
For reasons related to the chemistry of life's origin and the physics of self-organization, the first living things arose at the lower limit of life's
conceivable, preservable complexity. Call this lower limit the "left wall" for an
architecture of complexity. Since so little space exists between the left wall
and life's initial bacterial mode in the fossil record, only one direction for
future increment exists - toward greater complexity at the right. Thus, every
once in a while, a more complex creature evolves and extends the range of
life's diversity in the only available direction. In technical terms, the
distribution of complexity becomes more strongly right skewed through these
occasional additions.
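Gould's point here is statistical, and a short simulation makes it concrete. In the sketch below (an added illustration with arbitrary parameters, not a model from the article), thousands of lineages take an unbiased random walk in "complexity" but cannot cross a reflecting left wall. No step favors complexity, yet the resulting distribution is right skewed: most lineages stay near the wall while a few drift far to the right.

# Illustrative sketch (arbitrary parameters, not a model from Gould's
# article): an unbiased random walk in "complexity" with a reflecting
# left wall.
import random
from collections import Counter

random.seed(0)
LEFT_WALL = 1.0        # hypothetical minimal preservable complexity
LINEAGES = 10_000
STEPS = 200

complexities = []
for _ in range(LINEAGES):
    c = LEFT_WALL                         # every lineage starts at the wall
    for _ in range(STEPS):
        c += random.choice((-0.1, 0.1))   # no step favors greater complexity
        c = max(LEFT_WALL, c)             # the wall blocks movement to the left
    complexities.append(c)

# Crude histogram: the most common values stay at the wall, while a thin
# right tail of more complex lineages appears, giving a right-skewed
# distribution with no built-in drive toward complexity.
bands = Counter(int((c - LEFT_WALL) / 0.5) for c in complexities)
for band in sorted(bands):
    low = LEFT_WALL + 0.5 * band
    print(f"{low:4.1f} to {low + 0.5:4.1f}: {bands[band]}")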
But the additions are rare and episodic. They do not even constitute an
evolutionary series but form a motley sequence of distantly related taxa,
usually depicted as eukaryotic cell, jellyfish, trilobite, nautiloid, eurypterid (a
large relative of horseshoe crabs), fish, an amphibian such as Eryops, a
dinosaur, a mammal and a human being. This sequence cannot be construed
as the major thrust or trend of life's history. Think rather of an occasional
creature tumbling into the empty right region of complexity's space.
Throughout this entire time, the bacterial mode has grown in height and
remained constant in position. Bacteria represent the great success story of
life's pathway. They occupy a wider domain of environments and span a
broader range of biochemistries than any other group. They are adaptable,
indestructible and astoundingly diverse. We cannot even imagine how
anthropogenic intervention might threaten their extinction, although we
worry about our impact on nearly every other form of life. The number of
Escherichia coli cells in the gut of each human being exceeds the number of
humans that has ever lived on this planet.
One might grant that complexification for life as a whole represents a
pseudo-trend based on constraint at the left wall but still hold that evolution
within particular groups differentially favors complexity when the founding
lineage begins far enough from the left wall to permit movement in both
directions. Empirical tests of this interesting hypothesis are just beginning
(as concern for the subject mounts among paleontologists), and we do not yet
have enough cases to advance a generality. But the first two studies - by
Daniel W. McShea of the University of Michigan on mammalian vertebrae
and by George F. Boyajian of the University of Pennsylvania on ammonite
suture lines - show no evolutionary tendencies to favor increased complexity.
Moreover, when we consider that for each mode of life involving greater
complexity, there probably exists an equally advantageous style based on
greater simplicity of form (as often found in parasites, for example), then
preferential evolution toward complexity seems unlikely a priori. Our
impression that life evolves toward greater complexity is probably only a bias
inspired by parochial focus on ourselves, and consequent overattention to
complexifying creatures, while we ignore just as many lineages adapting
equally well by becoming simpler in form. The morphologically degenerate
parasite, safe within its host, has just as much prospect for evolutionary
success as its gorgeously elaborate relative coping with the slings and arrows
of outrageous fortune in a tough external world.
Even if complexity is only a drift away from a constraining left wall, we
might view trends in this direction as more predictable and characteristic of
life's pathway as a whole if increments of complexity accrued in a persistent
and gradually accumulating manner through time. But nothing about life's
history is more peculiar with respect to this common (and false) expectation
than the actual pattern of extended stability and rapid episodic movement, as
revealed by the fossil record.
Life remained almost exclusively unicellular for the first five sixths of its
history - from the first recorded fossils at 3.5 billion years to the first well-documented multicellular animals less than 600 million years ago. (Some
simple multicellular algae evolved more than a billion years ago, but these
organisms belong to the plant kingdom and have no genealogical connection
with animals.) This long period of unicellular life does include, to be sure, the
vitally important transition from simple prokaryotic cells without organelles
to eukaryotic cells with nuclei, mitochondria and other complexities of
intracellular architecture - but no recorded attainment of multicellular
animal organization for a full three billion years. If complexity is such a good
thing, and multicellularity represents its initial phase in our usual view, then
life certainly took its time in making this crucial step. Such delays speak
strongly against general progress as the major theme of life's history, even if
they can be plausibly explained by lack of sufficient atmospheric oxygen for
most of Precambrian time or by failure of unicellular life to achieve some
structural threshold acting as a prerequisite to multicellularity.
More curiously, all major stages in organizing animal life's multicellular
architecture then occurred in a short period beginning less than 600 million
years ago and ending by about 530 million years ago - and the steps within
this sequence are also discontinuous and episodic, not gradually
accumulative. The first fauna, called Ediacaran to honor the Australian
locality of its initial discovery but now known from rocks on all continents,
consists of highly flattened fronds, sheets and circlets composed of numerous
slender segments quilted together. The nature of the Ediacaran fauna is now
a subject of intense discussion. These creatures do not seem to be simple
precursors of later forms. They may constitute a separate and failed
experiment in animal life, or they may represent a full range of diploblastic
(two-layered) organization, of which the modern phylum Cnidaria (corals,
jellyfishes and their allies) remains as a small and much altered remnant.
In any case, they apparently died out well before the Cambrian biota evolved.
The Cambrian then began with an assemblage of bits and pieces,
frustratingly difficult to interpret, called the "small shelly fauna." The
subsequent main pulse, starting about 530 million years ago, constitutes the
famous Cambrian explosion, during which all but one modern phylum of
animal life made a first appearance in the fossil record. (Geologists had
previously allowed up to 40 million years for this event, but an elegant study,
published in 1993, clearly restricts this period of phyletic flowering to a mere
five million years.) The Bryozoa, a group of sessile and colonial marine
organisms, do not arise until the beginning of the subsequent, Ordovician
period, but this apparent delay may be an artifact of failure to discover
Cambrian representatives.
GREAT DIVERSITY quickly evolved at the dawn of multicellular animal life
during the Cambrian period (530 million years ago). The creatures shown
here are all found in the Middle Cambrian Burgess Shale fauna of Canada.
They include some familiar forms (sponges, brachiopods) that have survived.
But many creatures (such as the giant Anomalocaris at the lower right,
largest of all the Cambrian animals) did not live for long and are so
anatomically peculiar (relative to survivors) that we cannot classify them
among known phyla.
Although interesting and portentous events have occurred since, from the
flowering of dinosaurs to the origin of human consciousness, we do not
exaggerate greatly in stating that the subsequent history of animal life
amounts to little more than variations on anatomical themes established
during the Cambrian explosion within five million years. Three billion years
of unicellularity, followed by five million years of intense creativity and then
capped by more than 500 million years of variation on set anatomical themes
can scarcely be read as a predictable, inexorable or continuous trend toward
progress or increasing complexity.
We do not know why the Cambrian explosion could establish all major
anatomical designs so quickly. An "external" explanation based on ecology
seems attractive: the Cambrian explosion represents an initial filling of the
"ecological barrel" of niches for multicellular organisms, and any experiment
found a space. The barrel has never emptied since; even the great mass
extinctions left a few species in each principal role, and their occupation of
ecological space forecloses opportunity for fundamental novelties. But an
"internal" explanation based on genetics and development also seems
necessary as a complement: the earliest multicellular animals may have
maintained a flexibility for genetic change and embryological transformation
that became greatly reduced as organisms "locked in" to a set of stable and
successful designs.
In any case, this initial period of both internal and external flexibility yielded
a range of invertebrate anatomies that may have exceeded (in just a few
million years of production) the full scope of animal form in all the earth's
environments today (after more than 500 million years of additional time for
further expansion). Scientists are divided on this question. Some claim that
the anatomical range of this initial explosion exceeded that of modern life, as
many early experiments died out and no new phyla have ever arisen. But
scientists most strongly opposed to this view allow that Cambrian diversity
at least equaled the modern range - so even the most cautious opinion holds
that 500 million subsequent years of opportunity have not expanded the
Cambrian range, achieved in just five million years. The Cambrian explosion
was the most remarkable and puzzling event in the history of life.
Moreover, we do not know why most of the early experiments died, while a
few survived to become our modern phyla. It is tempting to say that the
victors won by virtue of greater anatomical complexity, better ecological fit or
some other predictable feature of conventional Darwinian struggle. But no
recognized traits unite the victors, and the radical alternative must be
entertained that each early experiment received little more than the
equivalent of a ticket in the largest lottery ever played out on our planet - and that each surviving lineage, including our own phylum of vertebrates,
inhabits the earth today more by the luck of the draw than by any predictable
struggle for existence. The history of multicellular animal life may be more a
story of great reduction in initial possibilities, with stabilization of lucky
survivors, than a conventional tale of steady ecological expansion and
morphological progress in complexity.
Finally, this pattern of long stasis, with change concentrated in rapid
episodes that establish new equilibria, may be quite general at several scales
of time and magnitude, forming a kind of fractal pattern in self-similarity.
According to the punctuated equilibrium model of speciation, trends within
lineages occur by accumulated episodes of geologically instantaneous
speciation, rather than by gradual change within continuous populations
(like climbing a staircase rather than rolling a ball up an inclined plane).
Even if evolutionary theory implied a potential internal direction for life's
pathway (although previous facts and arguments in this article cast doubt on
such a claim), the occasional imposition of a rapid and substantial, perhaps
even truly catastrophic, change in environment would have intervened to
stymie the pattern. These environmental changes trigger mass extinction of
a high percentage of the earth's species and may so derail any internal
direction and so reset the pathway that the net pattern of life's history looks
more capricious and concentrated in episodes than steady and directional.
Mass extinctions have been recognized since the dawn of paleontology; the
major divisions of the geologic time scale were established at boundaries
marked by such events. But until the revival of interest that began in the late
1970s, most paleontologists treated mass extinctions only as intensifications
of ordinary events, leading (at most) to a speeding up of tendencies that
pervaded normal times. In this gradualistic theory of mass extinction, these
events really took a few million years to unfold (with the appearance of
suddenness interpreted as an artifact of an imperfect fossil record), and they
only made the ordinary occur faster (more intense Darwinian competition in
tough times, for example, leading to even more efficient replacement of less
adapted by superior forms).
CLASSICAL REPRESENTATIONS OF LIFE'S HISTORY reveal the severe
biases of viewing evolution as embodying a central principle of progress and
complexification. In these paintings by Charles R. Knight from a 1942 issue
of National Geographic, the first panel shows invertebrates of the Burgess
Shale. But as soon as fishes evolve (panel 2), no subsequent scene ever shows
another invertebrate, although they did not go away or stop evolving. When
land vertebrates arise (panel 3), we never see another fish, even though
return of land vertebrate lineages to the sea may be depicted (panel 4). The
sequence always ends with mammals (panel 5) - even though fishes,
invertebrates and reptiles are still thriving - and, of course, humans (panel
6).
The reinterpretation of mass extinctions as central to life's pathway and
radically different in effect began with the presentation of data by Luis and
Walter Alvarez in 1979, indicating that the impact of a large extraterrestrial
object (they suggested an asteroid seven to 10 kilometers in diameter) set off
the last great extinction at the Cretaceous-Tertiary boundary 65 million
years ago. Although the Alvarez hypothesis initially received very skeptical
treatment from scientists (a proper approach to highly unconventional
explanations), the case now seems virtually proved by discovery of the
"smoking gun," a crater of appropriate size and age located off the Yucatan
peninsula in Mexico.
This reawakening of interest also inspired paleontologists to tabulate the
data of mass extinction more rigorously. Work by David M. Raup, J. J.
Sepkoski, Jr., and David Jablonski of the University of Chicago has
established that multicellular animal life experienced five major (end of
Ordovician, late Devonian, end of Permian, end of Triassic and end of
Cretaceous) and many minor mass extinctions during its 530-million-year
history. We have no clear evidence that any but the last of these events was
triggered by catastrophic impact, but such careful study leads to the general
conclusion that mass extinctions were more frequent, more rapid, more
extensive in magnitude and more different in effect than paleontologists had
previously realized. These four properties encompass the radical implications
of mass extinction for understanding life's pathway as more contingent and
chancy than predictable and directional.
Mass extinctions are not random in their impact on life. Some lineages
succumb and others survive - as sensible outcomes based on presence or
absence of evolved features. But especially if the triggering cause of
extinction be sudden and catastrophic, the reasons for life or death may be
random with respect to the original value of key features when first evolved
in Darwinian struggles of normal times. This "different rules" model of mass
extinction imparts a quirky and unpredictable character to life's pathway
based on the evident claim that lineages cannot anticipate future
contingencies of such magnitude and different operation.
To cite two examples from the impact-triggered Cretaceous-Tertiary
extinction 65 million years ago: First, an important study published in 1986
noted that diatoms survived the extinction far better than other single-celled
plankton (primarily coccoliths and radiolaria). This study found that many
diatoms had evolved a strategy of dormancy by encystment, perhaps to
survive through seasonal periods of unfavorable conditions (months of
darkness in polar species, otherwise fatal to these photosynthesizing cells;
sporadic availability of silica needed to construct their skeletons). Other
planktonic cells had not evolved any mechanisms for dormancy. If the
terminal Cretaceous impact produced a dust cloud that blocked light for
several months or longer (one popular idea for a "killing scenario" in the
extinction), then diatoms may have survived as a fortuitous result of
dormancy mechanisms evolved for the entirely different function of
weathering seasonal droughts in ordinary times. Diatoms are not superior to
radiolaria or other plankton that succumbed in far greater numbers; they
were simply fortunate to possess a favorable feature, evolved for other
reasons, that fostered passage through the impact and its sequelae.
Second, we all know that dinosaurs perished in the end Cretaceous event and
that mammals therefore rule the vertebrate world today. Most people assume
that mammals prevailed in these tough times for some reason of general
superiority over dinosaurs. But such a conclusion seems most unlikely.
Mammals and dinosaurs had coexisted for 100 million years, and mammals
had remained rat-sized or smaller, making no evolutionary "move" to oust
dinosaurs. No good argument for mammalian prevalence by general
superiority has ever been advanced, and fortuity seems far more likely. As
one plausible argument, mammals may have survived partly as a result of
their small size (with much larger, and therefore extinction-resistant,
populations as a consequence, and less ecological specialization with more
places to hide, so to speak). Small size may not have been a positive
mammalian adaptation at all, but more a sign of inability ever to penetrate
the dominant domain of dinosaurs. Yet this "negative" feature of normal
times may be the key reason for mammalian survival and a prerequisite to
my writing and your reading this article today.
Sigmund Freud often remarked that great revolutions in the history of
science have but one common, and ironic, feature: they knock human
arrogance off one pedestal after another of our previous conviction about our
own self-importance. In Freud's three examples, Copernicus moved our home
from center to periphery; Darwin then relegated us to "descent from an
animal world"; and, finally (in one of the least modest statements of
intellectual history), Freud himself discovered the unconscious and exploded
the myth of a fully rational mind. In this wise and crucial sense, the
Darwinian revolution remains woefully incomplete because, even though
thinking humanity accepts the fact of evolution, most of us are still unwilling
to abandon the comforting view that evolution means (or at least embodies a
central principle of) progress defined to render the appearance of something
like human consciousness either virtually inevitable or at least predictable.
The pedestal is not smashed until we abandon progress or complexification as
a central principle and come to entertain the strong possibility that H.
sapiens is but a tiny, late-arising twig on life's enormously arborescent bush - a small bud that would almost surely not appear a second time if we could
replant the bush from seed and let it grow again.
Primates are visual animals, and the pictures we draw betray our deepest
convictions and display our current conceptual limitations. Artists have
always painted the history of fossil life as a sequence from invertebrates, to
fishes, to early terrestrial amphibians and reptiles, to dinosaurs, to mammals
and, finally, to humans. There are no exceptions; all sequences painted since
the inception of this genre in the 1850s follow the convention.
Yet we never stop to recognize the almost absurd biases coded into this
universal mode. No scene ever shows another invertebrate after fishes
evolved, but invertebrates did not go away or stop evolving! After terrestrial
reptiles emerge, no subsequent scene ever shows a fish (later oceanic
tableaux depict only such returning reptiles as ichthyosaurs and plesiosaurs).
But fishes did not stop evolving after one small lineage managed to invade
the land. In fact, the major event in the evolution of fishes, the origin and rise
to dominance of the teleosts, or modern bony fishes, occurred during the time
of the dinosaurs and is therefore never shown at all in any of these sequences
- even though teleosts include more than half of all species of vertebrates.
Why should humans appear at the end of all sequences? Our order of
primates is ancient among mammals, and many other successful lineages
arose later than we did.
We will not smash Freud's pedestal and complete Darwin's revolution until
we find, grasp and accept another way of drawing life's history. J.B.S.
Haldane proclaimed nature "queerer than we can suppose," but these limits
may only be socially imposed conceptual locks rather than inherent
restrictions of our neurology. New icons might break the locks. Trees - or
rather copiously and luxuriantly branching bushes - rather than ladders and
sequences hold the key to this conceptual transition.
We must learn to depict the full range of variation, not just our parochial
perception of the tiny right tail of most complex creatures. We must recognize
that this tree may have contained a maximal number of branches near the
beginning of multicellular life and that subsequent history is for the most
part a process of elimination and lucky survivorship of a few, rather than
continuous flowering, progress and expansion of a growing multitude. We
must understand that little twigs are contingent nubbins, not predictable
goals of the massive bush beneath. We must remember the greatest of all
Biblical statements about wisdom: "She is a tree of life to them that lay hold
upon her; and happy is every one that retaineth her."