The Revolutionary Prospects of a Science of Morality
By Jeremy Le Veque
2009/10/13
In this brief piece, I want to outline my reasons for pursuing a science of morality, one capable of evaluating morality itself. Marx once argued that communism is the period of history in which people stop being merely the objects of history, to which things are done, and become also its subjects, those who shape it. Similarly, as much as groups are held down by a social order, social science has revealed how deeply societies influence individuals. If we wish to escape that influence, then, we must stop being mere targets of the social order and learn to master it.
How, though, could individuals come to master the social order? As a starting hypothesis, let’s draw an analogy from how Marx proposed that groups liberate themselves: he argued that a group is empowered once it comes to recognize its place in history. This echoes Francis Bacon’s (popularly repeated) idea that “knowledge is power”: once a group knows its place, it can achieve what it wants.
Drawing this analogy down to individual human beings, we hypothesize that self-knowledge is the key to self-advancement: when we know our predicament, we are likelier to escape it. However, the self is notoriously elusive to analyze, and some feel that introspection is a distraction. People often feel that morals cannot be scientifically examined at all, since science and morality occupy different spheres; on this view, any attempt to study morality scientifically is in error. The philosopher G. E. Moore called this error the “naturalistic fallacy”: we can never equate “pleasure” directly with “goodness,” since “goodness” is a property independent of “pleasure” in the same way that “delicious” is a property independent of “apple.”
The evolutionary psychiatrist Randolph Nesse extends this to say that “Is” statements cannot be translated into “Ought” statements, since no statement about moral motivation will ever suffice to capture the experience and fullness of morality. He says that a complete science of motivation and morality could not explain an “Ought” statement, no matter how sophisticated it became. This isn’t a new point, and Nesse notes that it is as old as David Hume, but it seems odd to hear from a scientist: why shouldn’t a complete, scientific explanation of behavior suffice to explain values? If I examine my morals under a microscope and catalogue the actions consistent with them, haven’t I just scientifically constructed “Ought” statements? What defines an “Ought” statement, then? Is it that making one changes me? What if I wrote an algorithm that took all the necessary data and produced a list of would-be “Ought” statements, but then hid them from me, so that I never learned what they were? In that case, I would have used science alone to produce a set of statements about what I “ought” to do, right?
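To make the thought experiment concrete, here is a toy sketch in Python. Every name and rule in it is hypothetical, invented only for illustration: it derives would-be “Ought” statements from purely descriptive data about my behavior, then withholds the list so I never read it.

```python
# Toy sketch of the thought experiment above; all names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class Observation:
    action: str       # an action I was observed to take
    consistent: bool  # whether it was consistent with my observed values

def derive_ought_statements(observations):
    """Turn purely descriptive ("Is") data into candidate "Ought" statements."""
    statements = []
    for obs in observations:
        if obs.consistent:
            statements.append(f"I ought to {obs.action}.")
        else:
            statements.append(f"I ought not to {obs.action}.")
    return statements

# Descriptive inputs only: what I in fact do.
data = [
    Observation("keep my promises", consistent=True),
    Observation("steal", consistent=False),
]

hidden = derive_ought_statements(data)  # the list now exists...
# ...but only its size is revealed, so I never learn what the statements say.
print(f"{len(hidden)} would-be 'Ought' statements were derived and hidden.")
```

If the mere existence of such a list counts for anything, then science alone has produced prescriptions; the open question is whether an unread prescription is an “Ought” statement at all.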
Still, that scientific process seems similar to other people making moral prescriptions. The only difference, it seems, is that other people’s prescriptions are less accurate than a complete neuroscientific approach would be. When people say “It’s immoral to steal,” what they mean is “I know what morality is, and therefore I know that you’d defy it by stealing,” right? That restatement reads like a description: morality, whatever it is, is something, and the statement describes the relationship between morality and stealing. We still haven’t found the point at which neuroscience fails to explain “Ought” statements.
Is it something special about the act of making the statement? Take the algorithm example from above, and suppose it produced the statement “I oughtn’t blow up orphanages for fun” before the thought had ever occurred to me. Then suppose someone asked me, “Should you blow up orphanages for fun?” I would have to decide the issue myself. The inputs and outputs would be the same as the algorithm’s, except that they wouldn’t be what an accurate model predicts; they would be the real deal. Is that the difference? Is it my actual emotions that make the difference between an “Is” statement and an “Ought” statement?
Well, it can’t just be that my emotions better represent my intentions: if so, the only difference between the algorithm and my decision would be that my decision is a better predictor of what my morals actually are. For an “Ought” statement to be truly different from an “Is” statement, it can’t merely correspond to my actual morals. But what else is different?
Perhaps making the statement itself changes something about me. Perhaps an “Ought” statement determines how I will behave. For instance, I may be under great stress, and it may occur to me one day that blowing up an orphanage would be fun. If I had not already decided that I shouldn’t blow up orphanages for fun, I might be less restrained when that decision actually arises. On a neuroscientific level, this might look like a change in the strength of the synapses involved in refraining from blowing up orphanages for fun.
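As a deliberately crude illustration (a cartoon, not a real neural model; the “restraint weight” and learning rate are invented for the example), the idea that an explicit decision strengthens future restraint might be sketched like this:

```python
# Cartoon model: each explicit "I ought not..." decision nudges a single
# "restraint weight" upward, biasing future choices toward restraint.
# The weight and learning rate are hypothetical, invented for illustration.

def reaffirm_decision(weight: float, learning_rate: float = 0.1) -> float:
    """Strengthen the restraint weight, saturating as it approaches 1.0."""
    return weight + learning_rate * (1.0 - weight)

restraint_weight = 0.5  # starting "synaptic strength" of restraint
for occasion in range(3):  # three occasions on which I make the decision
    restraint_weight = reaffirm_decision(restraint_weight)
    print(f"after occasion {occasion + 1}: restraint weight = {restraint_weight:.3f}")
```

On this picture, the distinctive work of an “Ought” statement would be causal: it changes the machinery that produces future behavior.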
Still, that seems beside the point: I doubt that causal self-modification was David Hume’s original concern, or anyone else’s. Besides, it would make the difference between “Is” and “Ought” a matter of degree: a statement could be 10% “Ought” if it only somewhat reminds me not to blow up orphanages for fun, and that seems entirely separate from what Hume or anyone else is suggesting.
Nesse offers an example of what happens when the naturalistic fallacy goes unobserved: when male students hear that males evolved to seek many sexual partners, their sexual behavior shifts to fit that expectation, on the thought that they are engineered to most enjoy many partners. That inference is clearly in error, but its being an error has nothing to do with whether science can influence morality. There are many reasons why the inference is wrong: other concerns may weigh more heavily, the ancestral environment may have contributed in ways the present one does not, and so on.
Still, none of these reasons is absolute: they are all empirical, and each could in principle be disproven. This leads to the worrisome conclusion that perhaps the basis of morality itself can be disproven. What would such a proof involve? We would first have to define morality. Perhaps morality is an umbrella term covering multiple processes. (For instance, Joshua Greene of Harvard suspects that utilitarian and Kantian moral judgments involve different processes in the brain.) We would have to specify what those components might look like, then see whether neuroscientific research can find processes that map neatly onto them. (As an example, moral disgust has been linked to an area of the brain involved in expelling toxic substances, which leads some researchers to view disgust as a way of avoiding immorality as if it were a disease.) After that, we would have to identify the factors that affect morality. Once we’ve identified them, we have to ask to what extent those factors can be reversed. If they can’t be, then we’re stuck with them. Otherwise, we have to ask whether it would be worthwhile to eliminate them, and the terms of worthiness itself can be settled by sufficient research.
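Schematically (and only schematically: every function below is a placeholder for an open empirical question, not a claim about how the research would actually go), the program might be outlined like this:

```python
# Schematic outline of the proposed research program.
# Every function is a stub standing in for an open empirical question.

def decompose_morality() -> list[str]:
    """Step 1: break "morality" into candidate component processes."""
    return ["utilitarian judgment", "Kantian judgment", "moral disgust"]

def neural_correlate(component: str) -> str:
    """Step 2: ask whether research finds a process matching the component."""
    return f"candidate neural process for {component!r} (to be discovered)"

def influencing_factors(component: str) -> list[str]:
    """Step 3: identify factors that affect the component."""
    return [f"unknown factor shaping {component!r}"]

def reversible(factor: str):
    """Step 4: can the factor be reversed? Unknown until the research is done."""
    return None  # an empirical question, not answerable by this sketch

# Step 5, whether reversal is *worthwhile*, is itself left to further research.
for component in decompose_morality():
    print(component, "->", neural_correlate(component))
    for factor in influencing_factors(component):
        print("   factor:", factor, "| reversible:", reversible(factor))
```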
This, I feel, is a practical approach, one that takes the skeptic’s concerns seriously and meets them with science. It is not friendly to accounts of morality that are idealistic or that rest on divine revelation, but it is friendly to accounts that focus on morality’s role in natural life. Martha Nussbaum, for instance, describes morality as a flower: something that natural circumstances can destroy and natural forces can shape. Aristotle, and more recently Philippa Foot, hold that ethics is good only insofar as it contributes to a flourishing life. These accounts still appeal to the goodness of ethics, but they leave room for empirical doubt, which makes morality a more scientific question.
And for those who feel that bringing morality to science misses the point: saying so merely attempts to unring a bell or shut Pandora’s box. The virtue of self-interest has been announced loudly by Hobbes, Nietzsche, Freud, and many other thinkers, and it has taken root in game theory, which guides corporate and state leaders. (Perhaps the only reason it couldn’t spread earlier is that materials were more expensive to distribute, and therefore easier to censor.) Even if self-interest is wrong, we must know why, so that we can give leaders and peers a compelling reason against it. If we want social groups to advance, we should inform those groups; the only thing that can inform a group is informed people, and the best way for people to become informed is through science.
Some may still feel that the goodness of morality is obvious and requires no proof. But while they maintain that certainty, those who think will keep arriving at this intellectual frontier, and the less guarded the frontier is, the more people we should expect to cross it. How could we stop them? We could accuse them of “overanalyzing” unimportant issues, but a culture that attacks analysis also poisons science, which has the most impressive track record of any social institution to date. Therefore, to protect our frontiers, we must first come to know them ourselves.