Demystifying Rationality
MICHAEL SCHEMITSCH
Ants, despite forming some of the most complex societies in the natural world, somehow manage to maintain their humility. Indeed, not one of
these meek creatures seeks any authority over its millions of siblings: no foreman oversees construction, no overseer supervises foraging, no general leads
armies. Even the so-called queen does little other than produce a seemingly
unending procession of future insectoid comrades. Such a lack of leadership
would end in bedlam in less civilized civilizations, yet ants have done rather
well for 140 million years or so by using a self-organizing system of control
that allows colonies to solve staggering problems through simple means.
This long-running success becomes even more impressive when we consider individual ants. Any amateur entomologist equipped with a strong sense of
curiosity and a boring summer day could tell you that lone ants are helpless;
as Deborah M. Gordon, a biologist at Stanford, says, “If you watch an ant try
to accomplish something, you’ll be impressed by how inept it is.”1 If you’ve
ever placed one on your palm and watched it plod tirelessly towards the fleshy
horizon of your continually rotating hand, then you understand.
How are these clueless insects able to solve engineering and organizational problems that would stump most humans? Peter Miller explains
“swarm intelligence” as a phenomenon in which members of a species pool
their efforts to achieve remarkable goals. “The collective abilities of such animals—none of which grasp the big picture—seems miraculous even to the
biologists that know them best,” Miller writes.1 Each ant cannot see past its
own microcosm, because it lacks the sensory and mental perspectives to do
so. How then, by obeying a few simple rules, can half a million individuals
with no concept of their collective goal form a hyperaware “superorganism”
capable of handling huge and complex challenges?
Take the solution to the seemingly insurmountable challenge of providing food and supplies for the colony, for example. Each morning, patroller
ants venture outside, looking for resources. When they find an interesting
source—say a nearby picnic—they return home. Idle foragers back at the
colony encounter returning patrollers. Ants communicate almost entirely
through touch and smell, constantly feeling and smelling one another with
their antennae. Patrollers smell different from ants that have been working in
the colony because the pheromones that usually cover their bodies have evaporated in the sunlight. If a forager encounters enough ants lacking this particular scent, it instinctively knows that it must be time to leave the colony
and look for supplies.
Through this system, the colony efficiently calculates the number of ants
required for foraging. If the patrollers have all found a worthwhile source,
they will all return and stimulate more foragers. Conversely, if few patrollers
return, foragers will stay put, and with good reason: few returning patrollers
could mean bad weather, a hungry predator, or a nearby amateur entomologist. This complex decision about delegating labor is resolved through a large
group of individual ants making simple decisions that are largely dependent
on localized stimuli. If an ant is sufficiently stimulated, it will take action.
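
To see how so simple a rule can regulate an entire colony's workforce, consider a minimal sketch in Python. The threshold, the memory window, and the encounter rates below are invented for illustration; real ants follow something like this logic, not these exact numbers.

    import random

    # Illustrative parameters (assumed, not measured from real colonies):
    ENCOUNTER_THRESHOLD = 4   # patroller meetings needed before a forager leaves
    WINDOW = 10               # how many recent meetings a forager "remembers"

    def forager_decides(recent_meetings):
        """recent_meetings: one True/False flag per ant met, True when the ant
        lacked the usual nest scent, i.e., smelled like a returning patroller."""
        return sum(recent_meetings[-WINDOW:]) >= ENCOUNTER_THRESHOLD

    # A good morning: patrolling went well, so most encounters are with
    # returning patrollers. A bad morning: few patrollers made it back.
    good_morning = [random.random() < 0.6 for _ in range(WINDOW)]
    bad_morning = [random.random() < 0.1 for _ in range(WINDOW)]

    print("Go forage after a good morning?", forager_decides(good_morning))
    print("Go forage after a bad morning? ", forager_decides(bad_morning))

Scaled up to half a million ants, these local threshold decisions become the colony-wide labor calculation described above, with no foreman required.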
This system is not so different from the one H. sapiens has become
famous for, if you think about it. Given sufficient stimuli, we will also act
accordingly. Granted, our vastly more powerful sensory tools and mental perceptions open us to a wider array of stimuli: language allows us to convey
more abstract ideas than excreting a pheromone can. Humanity’s ability to
process sophisticated stimuli and our ability to see the big—well, bigger—picture, unfathomable to the tiny ant, allow us to make more complex decisions
as individuals. But these complex behaviors arise from simple decisions carried out through basic functions of the nervous system, designed, in the end,
to maximize our benefit.
Game theory, which John Nash helped pioneer, can help determine which strategy will result in
the maximum benefit. Any sentient creature forced to make a decision will
invariably act to maximize its benefit, but humans and other social animals
must devote much brainpower to analyzing intricate social decisions due to
the complexity of their interactions. Daeyeol Lee analyzes the neurology
behind the decision making process, describing multiple games in which
players can reach a Nash equilibrium—a state in which “no individual players can increase their payoffs by changing their strategies unilaterally”—forcing players to compromise between risk and reward.2 Lee finds that
players in a game called “the prisoner’s dilemma” often forgo Nash equilibria in favor of strategies that involve cooperation, even at the risk of being
exploited by other players. In other words, participants are able to see past the
immediate and guaranteed reward for selfishness to recognize long-term
benefits.
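
A worked example makes this trade-off concrete. The sketch below uses conventional textbook payoffs (assumed here; Lee's experiments used their own stakes) and checks every strategy pair against the definition quoted above.

    from itertools import product

    # Conventional prisoner's dilemma payoffs (illustrative; higher is better):
    # payoff[(my_move, your_move)] = (my_payoff, your_payoff)
    C, D = "cooperate", "defect"
    payoff = {
        (C, C): (3, 3),  # mutual cooperation rewards both players
        (C, D): (0, 5),  # the lone cooperator is exploited
        (D, C): (5, 0),
        (D, D): (1, 1),  # mutual defection
    }

    def is_nash(a, b):
        """Neither player can raise their payoff by switching unilaterally."""
        mine, theirs = payoff[(a, b)]
        return (mine >= max(payoff[(alt, b)][0] for alt in (C, D)) and
                theirs >= max(payoff[(a, alt)][1] for alt in (C, D)))

    for a, b in product((C, D), repeat=2):
        if is_nash(a, b):
            print(a, "/", b, "is a Nash equilibrium")

Running the sketch shows that only defect / defect survives the unilateral-deviation test, even though mutual cooperation pays each player more; that gap is precisely the long-term benefit Lee's cooperative players reached for.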
Ernst Fehr and Urs Fischbacher provide a deeper look into the motives
behind this seemingly illogical behavior, exploring altruistic punishment as
well as reward.3 The players in Lee’s experiments were rewarded for risky,
nearly counterintuitive behavior that established trust, which led to future
cooperation and mutual benefit. Altruistic punishment is something different—specifically, “impos[ing] sanctions on others for norm violation,” or
punishing those who don’t play by the rules. Such behavior was evident in a
scenario where participants, at their own expense, imposed financial sanctions
on players who behaved selfishly. The scenario suggests that these players
believed their role as arbiters of fairness was more important than their personal wealth.
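
The material cost of that role is easy to spell out. In the sketch below the stakes are invented, not Fehr and Fischbacher's actual figures; the point is simply that the punisher always finishes poorer, so the sanction cannot be explained by self-interest alone.

    # One round of costly third-party punishment, with assumed stakes
    # (not the actual amounts from Fehr and Fischbacher's experiments).
    PUNISH_COST = 1   # what the punisher pays out of pocket
    FINE = 3          # what the selfish player forfeits

    def altruistic_punishment(punisher, selfish):
        """Return both players' wealth after a costly sanction."""
        return punisher - PUNISH_COST, selfish - FINE

    p, s = altruistic_punishment(punisher=10, selfish=15)
    print(p, s)  # 9 12: the norm is enforced, and the enforcer foots the bill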
Even more telling, most participants expect selfish players to be punished: they expect that, given the means, their peers will look out for their
best interests. Likewise, the players in the “prisoner’s dilemma” game expect
that their trust will be reciprocated. These expectations are crucial to establishing cooperation; we are much more willing to take a chance on strangers
if we believe they will be willing to reciprocate our cooperation. Fehr and
Fischbacher suggest that “when punishment of non-cooperators and non-punishers is possible, punishment evolves and cooperation in much larger
groups can be maintained.”3 When strangers fail to match our preformed
notions of responsibility, we generally respond with some form of altruistic
punishment to enforce what we believe to be reasonable expectations. Heroes
are honored while cowards are scorned; generosity is a virtue, avarice is a sin.
This innate tendency to punish and reward different behavior manifests
itself in social norms—widely held views of “acceptable” behavior—and
moral codes. Both evolved to help early societies survive hostile environments. Fehr and Fischbacher theorize that “if we randomly pick two human
strangers from a modern society and give them the chance to engage in
repeated anonymous exchanges in a laboratory experiment, there is a high
probability that reciprocally altruistic behavior will emerge spontaneously.”3
Such behavior is a “huge anomaly in the animal world,” because the “detailed
division of labour and cooperation between genetically unrelated individuals”
is almost non-existent. Altruistic cooperation is present among highly social
animals, such as bees, naked mole rats, and our friend the ant, but is largely
dependent on close genetic relatedness.
The fact that we cooperate with individuals genetically unrelated to us
reinforces Lee’s findings. We can refuse to cooperate with members outside
our gene line, preferring the immediate reward of ensuring our kin’s survival.
Yet despite our inability to see the evolutionary “big picture,” we have historically chosen to cooperate with unrelated individuals, producing a vastly more
complex matrix of cooperation built on principles other than genetic self-interest. The millions of tiny, nearly unconscious decisions we make on a day-to-day basis—like the simple, seemingly insignificant decisions made by individual ants—weave together to form the tapestry of our cooperative behavior—the grand scheme of our achievement and our survival. As in the prisoner’s dilemma, trust bestows benefits greater than immediate gratification. Our
predisposition for altruistic behavior—the cornerstone of morality—has
proven to be our greatest evolutionary strength.
But what determines our responses to these seemingly tiny questions?
The answer can be found by pushing morality to its limits through a simple
thought experiment.
You are standing at the switchboard for a train station with two tracks, A
and B. You realize that a train has lost control and is hurtling down track B,
but by throwing a switch, you can divert the train to track A. One person is
standing on track A and five people are standing on track B. There is no way to
warn them and there is no way to save all six: you can either throw the switch
to save five or you can do nothing and save one.
What do you do?
Regardless of what you choose, there is a good chance that you will consider your choice the logical one. We often view logic as a universal constant
that applies to every situation like some all-purpose rulebook, but under close
examination, logic appears to be mostly subjective. How couldn’t it be? We
generally define logic as the reasoning process we use to solve problems, yet
we each use a unique process, a set of conditions and parameters derived from
our own experiences and education, rather than a universal code of uncompromising wisdom. What seems reasonable to one person may seem outrageous to another. So how does the individual logic of morality arise?
Josh Greene, a philosopher and neuroscientist at Princeton, believes he
may have the answer, explaining his findings on a recent episode of WNYC’s
Radio Lab.4 Greene presented volunteers with the same train dilemma, but
added an interesting follow-up question. Once again, five people are standing
on a track, unaware of an oncoming train. You are standing on an overpass
directly over the train track; no lever is in sight, but standing next to you is a
very large individual. You realize that if you push this man onto the tracks, his
bulk will stop the train. In this case, would you sacrifice one to save five?
Chances are, your answer is different this time. Marc Hauser, a Harvard
professor also interviewed on the program, posed these dilemmas to hundreds of thousands of people over the Internet, finding that while nine out of
ten people would use the lever, nine out of ten people would not push the
man.4 He also found that, when asked why their answers differed, most
respondents had no idea.
After presenting volunteers with the two dilemmas, Greene scanned
their brains right at the moment they made their decisions. He found that
when they said they would pull the lever, parts of the brain lit up on the scan,
indicating activity in specific areas. But when asked if they would push the
large man, entirely different brain areas lit up on the scan. From this data,
Greene theorized that the brain does not function as a single, cohesive unit
but as a group of bickering factions.4 The part of the brain activated during
the first scenario may represent the brain’s moral accountant, easily choosing
between one and five. The part of the brain activated during the follow-up
scenario may represent the brain’s police officer, the part that simply cannot
allow a man to be pushed to his death.
To the accountant, the situations are more or less identical: save five by
killing one. But the process through which this exchange occurs differs
enough in the second scenario to stir the ire of the neural police officer.
Throwing a lever is much more of an abstraction than physically pushing a
man to his death—an action so visceral, so real that it cannot be justified
through numbers alone. When we are required to get our hands dirty, so to
speak, the accountant’s rational five-over-one argument falls flat, overruled by
an intense aversion to the violent nature of the act. Even though the situations are mathematically identical, subtle differences between the two produce entirely different responses in the brain. Human behavior, then, is not
the product of a unified system, but is determined by the part of the brain that
shouts the loudest.
We can learn more about the mechanism behind
human decision making by examining the disruption of this system. Many
organisms, parasites in particular, have evolved ingenious ways of altering
their host’s behavior to ensure their own propagation. James Owen describes
the distasteful life cycle of the hairworm parasite.5 A larva enters the host
grasshopper’s body, most likely through contaminated drinking water, and
soon grows to astonishing size, occupying most of the insect’s body cavity.
When it reaches maturity, the worm begins pumping proteins into the
grasshopper’s brain. These chemicals have a startling effect on the host’s
behavior, compelling it to throw itself into the nearest body of water, where
it quickly drowns. The hairworm, returned to its favored environment, wriggles out of its host and goes on to reproduce, beginning the life cycle anew.
Through purely chemical means, the hairworm is able not only to alter
its host’s conduct, but also to induce behavior adverse to its host’s instincts.
Higher animals, even humans, are subject to similar processes of chemical
coercion. The rabies virus can induce behavior in humans (aggression, salivation, increased propensity to bite) that increases its chances of propagation,
as do the pathogens that cause toxoplasmosis and sleeping sickness. The fact
that pathogens can manipulate their hosts’ behavior through chemicals alone
indicates that the brain functions through chemical means.
Such an assumption fits with the model of decision making Greene proposes, where the most persuasive part of the brain determines behavior. Lee’s
work corroborates this theory. Brain imaging indicates that the brain
“rewards” certain behaviors, such as cooperation, and activates negative emotional states when confronted with behaviors that warrant altruistic punishment. Greene’s research indicates that different parts of the brain compete to
produce and influence behavior, with the “winning” side manifesting itself in
brain imaging. Thus, when we punish someone for hoarding or acting selfishly, we can infer that the insula, a part of the brain involved in evaluating
negative emotional states, such as disgust, provides the strongest argument
about what to do. Disgusted by the greedy behavior, we see no other option
than to rectify the situation with an act of altruistic punishment.
Such a model, if correct, has profound implications. First, we must consider that this model is preexisting, free from outside stimuli that could alter
its outcome. The dilemmas, for instance, do not add anything to your perception of the world. They don’t plant any memories that aren’t already there,
teach you anything you didn’t already know, or carry any hidden agenda to
influence you either way; they merely access the hardware that is already in
place. Participants ultimately choose what they believe to be the most ethical
decision. In a way, a decision has already been made. Allow me to up the ante:
are you, dare I say, fated to flip that switch? Are you destined to let the fat man
live?
It is arguable that the sum of an individual’s experiences forms the unique
problem-solving mechanism that will produce a specific answer. A more calculating mind would see the bottom line of five-over-one, but another would
be consumed with guilt, unable to justify choosing one-over-five. Ultimately,
everyone asked this question has a lifetime of experience that will reason out
one answer over the other.
This decision making matrix within our brains, operating on a basic system of neural responses, ultimately dictates our behavior. As personal experience builds the framework for this construct, it has a plastic, malleable nature,
constantly adapting to new behaviors, circumstances, and experiences. But at
any given moment, this system is already in place, already up and running,
simply waiting for a problem to solve. At any given moment, your brain contains the answers to questions that just haven’t been asked.
If we have no control over our neural activity, how can we argue that we
have control over the resulting behavior? Arguably all human behavior, from
the greatest sacrifices to the foulest atrocities, arises from this system. This
notion of neurological destiny forces us to consider truly staggering implications.
Can morality exist without free will? Moral behavior is lauded because
we believe it to be a choice: people choose to benefit others over themselves,
even though they do not have to. The element of choice is what makes a sacrifice noble. But if a person only does something good because a part of his
brain “forced” him to do it, the nature of sacrifice must be reevaluated: would
charging into a burning building be as heroic if you were forced to do it at
gunpoint? If we remove the element of choice, can moral behavior still be
considered commendable? If our reaction to a scenario is determined by a
mechanism that we have no control over, what difference does it make if the
behavior is heroic or cowardly?
Equally challenging questions arise about the nature of immoral behavior. Can a person be blamed for a crime that results from a lifetime of damaging experiences or a fundamentally damaged moral compass? If immorality is the symptom of an unwell brain, should we really punish people for
behaviors that may not be entirely their fault?
Perhaps criminals deserve treatment rather than punishment. If the chemical machinery of human decision making is ever unraveled, it might be
possible to induce socially acceptable behavior through chemical control. But
the ability to do so does not justify the act. The concept of behavioral control
carries a wealth of moral reservations, many of which concern the nature of
free will. Of course, if we accept that free will is nonexistent, what keeps us
from substituting one “destiny” for another, dare I say, better one? Why
should faulty moral compasses and lives of desperation force unfortunate
individuals into criminal activity when a chemical modification could prevent
such anti-social behavior? Such ideas may cause a vague feeling of uneasiness,
and rightfully so.
Somewhere deep within our brains lies the conviction that we are the
masters of our fates, that we alone can determine what our futures hold. But
we also believe, in a place equally deep, that there is a plan for our lives, that
we are not alone in the world, and that our lives and actions have meaning.
I believe that science is as great a reassurance as any religion. I believe
that science can explain any natural phenomenon, that a rational explanation
for any mystery exists, simply waiting to be discovered. I also believe that for
too long, we have turned outward for answers to these questions—to the
skies, to the heavens. With every advance, these answers seem to push farther
and farther away. When the clouds and the endless azure prove to be empty,
the kingdom we imagine melts into abstraction, and no tower is tall enough
to reach it.
For the answers to the questions truly worth answering, we must turn
inward. I remember hearing that the human brain is one of the most complex
aggregates of matter in the known universe. Deep within its tangled web of
nerves, a compaction upon compaction upon compaction resulting in the
densest, most complicated structure known to humanity, lie the answers that
we seek, the explanations to the inexplicable: why we risk our lives to help
others, where our sense of what is good and just originates, how heroes are
able to rise above the ordinary. We cannot expect to find answers to human-created mysteries in a non-human world, for these enigmas are merely manifestations of the struggle to explain our own beliefs.
Science has the power to demystify these mysteries, to provide rational
explanations for phenomena we once accepted to be beyond rationality.
Predestination has long been considered a supernatural process—ordered by
the whims of capricious gods or the inscrutable machinations of an outside
force. If a rational, biological answer could replace these dated superstitions,
why stop there? Why not seek explanations to all the riddles this world holds?
Supernatural phenomena are merely mysteries we have not yet found a proper explanation for; it is my belief that most of these explanations lie within us,
encoded in our genes, hidden in dense neural jungles. Just as we marvel at the
world around us, we must learn to appreciate the world within. Only then will
we learn what truly makes us human.
REFERENCES
1. Miller, P. 2007. Swarm theory. National Geographic [Internet]. [cited 2009 June 4]; 212(1):127+. Available from: http://ngm.nationalgeographic.com/2007/07/swarms/miller-text.
2. Lee, D. 2008. Game theory and neural basis of social decision making. Nature Neuroscience [Internet]. [cited 2009 June 4]; 11:404-409. Available from: http://www.nature.com/neuro/journal/v11/n4/full/nn2065.html.
3. Fehr, E., Fischbacher, U. 2003. The nature of human altruism. Nature [Internet]. [cited 2009 June 4]; 425:785-791. Available from: http://www.nature.com/nature/journal/v425/n6960/full/nature02043.html.
4. NPR. 2009. Morality. Radio Lab. WNYC, New York, NY. 9 Feb. 2009.
5. Owen, J. 2005. Suicide grasshoppers brainwashed by parasite worms. National Geographic News [Internet]. [cited 2009 June 4]. 1 Sept. 2005. Available from: http://news.nationalgeographic.com/news/2005/09/0901_050901_wormparasite.html.