Transcript of RS 139, Eric Schwitzgebel on “Moral Hypocrisy: Why doesn’t knowing about ethics
make people more ethical?”
Julia Galef:
Welcome to Rationally Speaking, the podcast where we explore the
borderlands between reason and nonsense. I'm your host, Julia Galef,
and with me today is our guest, professor Eric Schwitzgebel.
Eric is a professor of philosophy at the University of California, Riverside. He's
the author of the books Perplexities of Consciousness and Describing
Inner Experience? Proponent Meets Skeptic. He's also the author of the
excellent philosophy blog The Splintered Mind, which I've been a fan of for
years.
Eric, welcome to the show.
Eric Schwitzgebel:
Hey. Thanks for having me on.
Julia Galef:
So great to have you.
One of the things that Eric is most famous for is his work studying the
moral behavior of moral philosophers, examining the question: Do
people whose job it is to study the question of how to behave morally, do
those people actually behave more morally than the average person? Or
than the average person in a comparable reference class, like other
professors, for example?
Hopefully it's not too much of a spoiler to say: No, they don't.
What we're going to explore in today's episode is, first of all, how did you
reach that conclusion, Eric? What was your methodology, what were you
measuring? And then: what should we conclude from that fact? What
does that tell us about moral philosophy, or human psychology?
Does that sound like a good place to jump in, just explain how you came
to this conclusion?
Eric Schwitzgebel:
Yeah, sure. Let's do it. Maybe I'll give just a little bit of background.
I've mostly done it by looking at a variety of empirical measures. But
before getting into the empirical measures of moral behavior, or
disputably moral behavior, maybe I'll say a little bit about how I got
interested in the topic.
Julia Galef:
Yes, great.
Eric Schwitzgebel:
The results might not be super encouraging. I was interested in classical
Chinese philosophy. I've been interested in classical Chinese philosophy
for a long time, and one of the philosophers I like best is Mencius. And he
says that if you stop and you think, and you reflect about morality, you'll
find yourself inclined to do what's morally right.
At the same time as I was teaching Mencius and his opponent in the
ancient Chinese tradition, Zhuangzi, about the power of philosophical
reflection to shape moral behavior, I was also reading about the
Holocaust, and teaching about the Holocaust and that sort of thing. And I
thought, "Well, if someone was in Nazi Germany, would they, by
stopping to reflect, discover what we now all think is obviously wrong
and then not participate in the evil of the regime?"
Looking at the empirical evidence from that period -- most famously
Heidegger, but also lots of other philosophy professors seemed to engage in
lots of political reflection, ethical reflection, moral reflection. And some
of them resisted the Nazis, but lots of them, including of course
Heidegger, went along with that.
Putting those two issues beside each other, I wondered whether Mencius
was right about the power of reflection -- especially philosophical style
moral reflection -- to lead us toward moral improvement.
Also, looking around just at my colleagues in philosophy, those who specialize in ethics didn't seem to behave much better or much worse than anyone else.
These kinds of reflections led me to think, "Well, has anyone empirically
looked at the moral behavior of ethicists?" No one had, so I started a
series of empirical studies, trying to get at the question of whether
ethicists actually do behave any differently than other types of
professors.
Julia Galef:
Great. How did you go about that?
Eric Schwitzgebel:
Most of this was done collaboratively with Joshua Rust. He's a philosophy
professor at Stetson University. The first study that we did was we went
to an American Philosophical Association meeting, and we just asked
passersby. We set up a table, put up a sign that said “Take a five-minute
questionnaire, get four Ghirardelli chocolate squares.” The chocolate was
sufficiently interesting to people that they were willing to answer a few
questions about their opinions about their colleagues.
Julia Galef:
I feel like chocolate companies, and especially Ghirardelli, are like the
unofficial sponsor of all psychology research.
Eric Schwitzgebel:
Yeah. We got many more participants than we thought, because
everyone wanted these chocolates. This is in San Francisco and there was
the Ghirardelli thing downstairs, and we're just running down getting
bags. It was great. Four chocolate squares cost us about a dollar, so we were getting high-level participants for a dollar, basically. That was nice.
Julia Galef:
What were the questions you asked them?
Eric Schwitzgebel:
I'm not going to get the phrasing exactly right, but -- there were two
versions of the survey. One asked about ethicists in general… This is an
American Philosophical Association meeting, so mostly respondents are
professional philosophers. "Think about the ethicists you've known from
your personal experience. On average, do you think they behave about
the same, morally better, or morally worse?"
There were two versions of the comparison group: one was philosophy professors not specializing in ethics, and the other was non-professors of similar socio-economic background, something like that. We gave them
response scales, 7-point response scales -- from “substantially morally
better,” through the midpoint, which was marked “about the same”, and
then the far point was marked “substantially morally worse”.
We found that professors were divided in how they responded to this
question. Some of them said that ethicists behaved somewhat morally
better; they clicked something on the morally better side. Some of them
said about the same. And a few said somewhat morally worse; they
clicked something on the morally worse side.
The ethicists themselves split about equally between “better” and “same,” and very few of the ethicists said “worse.” Whereas the non-ethicist respondents were split about equally between “better”, “same”, and “worse” for both of the comparison classes -- when ethicists were compared both with other professors and with non-academics of similar social background.
That was version 1. And version 2, we asked the same kinds of questions
except instead of asking about ethicists in general, we asked about the
ethicist in your department whose name comes next in alphabetical
order after yours, looping around from Z back to A if necessary.
Julia Galef:
That’s such a nice way to knock people off of whatever cached archetype
they have in their head, and get them to actually look at the data.
Eric Schwitzgebel:
That's right. That's what we thought, that people might respond based on
some general impression that might be misleading, based on a highly
salient but not representative example, or something like that. Yeah.
So we asked those two questions, and with that question we got similar results: the ethicist respondents skewed a bit towards saying the ethicists behave better than the comparison group, although lots of them said about the same, and the non-ethicist respondents split more equally across the spectrum. So that was just a kind of peer opinion study, yeah.
Julia Galef:
At any point, did you look at specific dimensions of morality, like specific
behaviors that ethicists were doing more or less of than other people?
Eric Schwitzgebel:
Yeah, most of our stuff has been on that. That study established in our
minds two things.
One is that there's no consensus among philosophers about how ethicists
behave. That's already an interesting thing to establish. Because a lot of
people seem to think it's obvious that ethicists will behave the same, or
better, or worse. But it's not obvious to everyone. People give different
answers when you actually ask them, without their knowing the data.
The other thing it established was that one’s peers’ opinions might have
some relation to reality, or they might not.
We've got now at this point 17 different behavioral measures of different
kinds of behaviors that are arguably moral. Now, there's lots of dispute
about what kinds of behaviors are moral, so none of the individual
measures are going to be convincing to everyone. But they tell a very
consistent story across the board when you look at them all.
Julia Galef:
What are some examples of individual measures?
Eric Schwitzgebel:
We looked at the rate at which ethics books were missing from academic
libraries. That was our second study. We found that ethics books were
actually more likely to be missing than comparison books in philosophy,
similar in age and popularity.
We looked at whether ethicists and political philosophers were more
likely to vote in public elections than other professors. Here we had
access to publicly available voter participation data in five US states.
Julia Galef:
Can you also look at whether ethicists’ self-reports of voting are
accurate? That seems like a separate measure.
Eric Schwitzgebel:
Yes, we did look at that actually. Probably our biggest study was a survey
sent to the same five US states for which we had the voting data. And we
asked ethicist respondents, a comparison group of non-ethicists in the
same philosophy departments, and another comparison group of
professors not in philosophy at the same universities. Three equal-sized groups of respondents.
We contacted about a thousand respondents in total. And we got about
200 responses from each group, so a pretty good response rate.
We asked these people, in the first part of the questionnaire, their
opinion about various moral issues. And then we asked them in the
second part of the questionnaire to self-report their own behavior on
those same issues. Then on some of the issues like the voting issue, we
also had, for the same participants, some direct measures of their
behavior. So those don't rely on self-report.
Although I should say that in the interests of participants' privacy, we converted everything into de-identified codes... So we're not able to draw inferences about particular individuals. All the data was
analyzed at the group level.
Julia Galef:
Got it. And the pattern you saw overall was…?
Eric Schwitzgebel:
Ethicist behavior was basically identical across the board to the other
groups of professors. There were some differences, but not very many,
and not very strong. And overall, when you put it together, and you
combine the data in various kinds of ways… It looks like there's no overall
trend toward better behavior.
Although we did find, when we asked their opinions about various moral
issues, that ethicists tended to have the most demanding opinions. They
thought more things were morally good and morally bad, and were less
likely to regard things as morally neutral, than were the other groups.
Julia Galef:
They just didn't act on those principles.
Eric Schwitzgebel:
They didn't seem to act on those principles.
The most striking example of this was our data on vegetarianism. We
didn't have any direct observational measures of this, but the self-report
measures are already quite interesting.
Most of the questions in the first part of the questionnaire were set up so
that we have these 9-point scales that people could respond on -- very
morally bad on one end, through morally neutral in the middle, to very
morally good on the other end. Then we had a bunch of prompts of types
of behavior that people could then rate on these scales.
One of the types of behavior was regularly eating the meat of mammals, such as beef or pork. In response to that prompt, 60% of the ethics professors rated it somewhere on the morally bad side; 45% of the non-ethicist philosophers; and I think it was somewhere in the high teens, 17% or 19%, something like that, for the non-philosophers. Big difference in moral opinion.
Then in the second part of the questionnaire, we asked, "Did you eat the
meat of a mammal, such as beef or pork, at your last evening meal, not
including snacks?" There we found no statistically detectable difference
among the groups. Big difference in expressed normative attitude about
meat eating; no detectable difference in self-reported meat eating
behavior.
Julia Galef:
Pretty interesting. I'm wondering whether this is a result of ethics
professors not really believing their ethical conclusions, like, having come
to these conclusions in the abstract?
You know how people might say that they believe they'll go to hell if they
do XYZ, but then they do XYZ. And you want to say, "I think you, on some
level, don't really believe that you're going to go to hell by doing those
things." I wonder if these conclusions are somewhat detached from their
everyday lives.
I was reminded of this anecdote I heard back when I was in the
economics department, about some famous econ professor who ... I
think he was famous for a decision-making algorithm or something. And
at one point in his career, he was facing a tough decision of whether or
not to leave his current department for a different department. He's
agonizing about this. And one of his colleagues says, "Well, Bob” -- I don't
know his name, let's call him Bob -- "Bob, why don't you use your
decision-making algorithm to tackle this?" And Bob says: "Oh, come on
now, this is serious!"
Anyway, I'm wondering if something like that's going on. Or if you think,
no, they really do believe these conclusions. They just don't care enough
to act on them.
Eric Schwitzgebel:
I'm very much inclined to think they believe them on an intellectual level,
at least. It sounds like the econ professor you're talking about regarded it
a little bit like a game. When I talked to philosophy professors about
things like donation to charity, which is another question we asked about,
or eating meat, they have intellectual opinions that I think are ... They
don't regard it just as a game. I think it's actually a pretty real moral
choice. Now some of them think it's perfectly fine, and some of them
think it's not, but I think they take it pretty seriously for the most part.
It'll be interesting to think about whether there's some way to measure
this. My inclination is to think that they take it pretty seriously at an
intellectual level, and then the trickier question is whether that
penetrates their overall belief structure.
Julia Galef:
One way you might detect this -- I don't know if this is actually
measurable, but in theory at least, you could look at how torn or guilty they feel about not living up to these standards.
Eric Schwitzgebel:
Yes.
Julia Galef:
If they don't feel guilty at all, then maybe it's more intellectual.
Eric Schwitzgebel:
Yeah. I think that's a nice first stab at it. This actually gets ... I work quite a
bit on the nature of belief and attitudes. This is another dimension of my
research. And so let me just give a little background on that.
There's a class of views which we might think of as intellectualism,
according to which, what you believe, what your attitudes are, are
basically what you would sincerely say they are. That's the view that I
would reject.
The alternative view, I think of as being a more broad-based view, on
which to believe something is to live your life in general as if it were the
case. Taking the example you used, of the person who talks about hell
but doesn't seem to behave accordingly: on an intellectualist view, that
person is sincere, that person really believes in hell, but they're being
irrational.
On the broader type view that I favor, to believe in hell is a combination
of things -- including among them, just sincere assertion. But also
including generally how you live your life. And what your values and
opinions are, as revealed by your behavioral choices, and your emotional
reactions, and your spontaneous inferences about situations, and that
sort of thing.
It’s just part of the human condition that we have a wide range of
attitudes where we're in-between and mixed-up. Often, religious
attitudes and moral attitudes are among them. Things like racism and
sexism are another rich vein of cases like this. Where you intellectually
say one thing -- intellectually say, for example, that women are just as
smart as men. But there are a lot of people who don't live that way;
despite their intellectual commitment to that view, they spontaneously
respond to men as being smarter, or something like that.
So you might think, okay, here we've got the case of the ethicist who says
that eating meat is wrong. If the person goes ahead and eats meat and
doesn't feel bad about it, then maybe that would be one of these in-between type cases -- where intellectually, they're sincere that it is
wrong. But somehow it hasn't penetrated their whole decision structure,
it's not reflected in their emotional responses, they don't react to it as
though it's something wrong. I think that definitely is a possibility.
Julia Galef:
It also seems to me that once you start thinking seriously about ethics,
you notice the scope of moral consideration widen so much beyond the
typical person's scope. You're looking at things like failing to prevent
harm, or causing indirect harm, or all of these things affecting future
generations, that normal people don't think about.
And at that point, the word “wrong” almost means something different.
Because clearly, it can't mean something you absolutely shouldn't do -- because it's just impossible to live up to that standard. You're in this different position.
For a normal person, behaving morally -- I mean, it's not that it's easy,
but it's more straightforward. There's this set of things: you're not
supposed to cheat, you're not supposed to lie, you're not supposed to
cause unnecessary violence, etc. and you can just follow those things. But
then once you're [studying ethics] -- especially if you're, for example, a utilitarian -- there's just no end to the things that you could in theory do,
to reduce suffering in the world.
Now the question becomes: where do I draw the line? Unless I want to
actually try to give as much of my time and resources and energy as I
possibly can, to reducing suffering -- unless I want to do that which
almost no one is willing to try to do -- then it just ... You’ve got this
question of: where do I plant my flag and say "this is where"? And there
doesn't seem to be any clear way to answer that question.
…I forget where I was going with this. But the point is, I think ethics
professors or anyone who's thought long and hard about ethics -- and I
would count myself in that category -- is just faced with a different
situation. Where you can't just decide to follow this pre-determined set
of rules to be moral.
Eric Schwitzgebel:
Yeah. I think that’s also a possibility, and stands a little bit in tension with
the first possibility we're saying, although maybe it can be reconciled
with it. Let me just say -- I think you said it very well. But let me just
restate it in some of my own terms, too.
I think for a lot of people, once they start thinking really seriously and
regularly about ethics, the world becomes more permeated with ethical
choices. Especially, as you point out, if you're a consequentialist or
utilitarian. If you think basically that doing right or good is about
maximizing happiness, or something like welfare here in the world -- then
every single thing you do… I think this is actually also true on other
ethical views as well, but it's really especially clear for consequentialism.
Every single thing you do, every time you choose to buy a cup of coffee,
you could've done something else. You're always short of the moral ideal.
You could've taken that $2 for the cup of coffee and donated it to Oxfam,
or whatever your favorite charity is. You've now done something that's
ethically short of the ideal.
There's some evidence from Joshua's and my work that ethicists do tend
to see the world as more ethically permeated, at least on our fairly short
list of questions. Ethicists tended to avoid saying that things are morally
neutral. They tended to say they were either good or bad, whereas the
non-ethicists were much more comfortable with describing things as
more neutral.
Once you see that the world is ethically permeated, then you have to face the
fact that you are doing things that are short of the ethical ideal all the
time. That basically everything you do is ethically ... I don't know if flawed
is the right word, although I think maybe flawed is okay. Anyway,
ethically non-ideal.
Then I think once you acknowledge that, you get put in this position of thinking, "Okay, how far short of the ideal am I comfortable being? Maybe it's okay to do things that in fact I think are somewhat bad or wrong sometimes. Because now that the world is just permeated with all these decisions, I can't avoid being bad and wrong."
This gives you another way of thinking about the person who
intellectually says it's wrong to eat meat and yet chooses to do so. They
might think something like, "Well, everything I do is so permeated with
choice. I want to do some things that are wrong. I'm not aiming to be a
saint. This is one of those wrong things that I'll just let myself do. It's not
maybe super wrong, it's not super bad, so I'm going to do it."
That's another way of seeing it. That, I think, makes it less like the person
who says she believes in hell and then acts ...
Julia Galef:
As if she doesn't.
Eric Schwitzgebel:
As if she doesn't. It makes it a little more rational than that.
Julia Galef:
To back up for a minute, have you heard of the Effective Altruist
Movement?
Eric Schwitzgebel:
Yes.
Julia Galef:
I’m at least partially in that crowd. I have a lot of ideological alignment
with them, and certainly plenty of social alignment with them. And I see
this problem a lot among people who want to be committed effective
altruists. They've heard Peter Singer's thought experiment about the
child drowning in the pool -- would you jump in and save him, even if it
costs you a thousand dollar suit? Well yes, of course. Okay fine, then why
don't you donate your thousand dollars to save the children who are
dying on the other side of the world? The only difference is that you can't
see those children -- but they're dying just the same.
People have really taken this argument to heart, and really do feel guilty -- not everyone, but a lot of people feel guilty when they buy that latte
instead of donating the money to an effective altruist charity.
Then the other common reaction that I see is this kind of justification of
deliberately aiming at mediocrity. Which is something like, "Well, if I
were to sacrifice that much, it would be bad for my motivation and
happiness. And it would ultimately make me a worse ... I would do a
worse job at helping the world. Because I would be so strung out and
deprived of things that are good for human motivation.”
And I get that, that makes some sense to me. But it just feels so
convenient. It just so happens that all the ethical lines that I want to draw
happen to be the ones that are best for maximizing my ethical impact
in the world. I just don't buy it. And yet I don't see how to ... I don't see a
way out of this dilemma, out of the Scylla and Charybdis, of hypocrisy and
madness.
It's a grim metaphor! I'm exaggerating here. There are plenty of people in
the Effective Altruism Movement who are totally well-adjusted and really
making a large, positive impact in the world. But I think for people
who really take ideas seriously, this is a challenge that they often face.
Eric Schwitzgebel:
I am with you there, and I can't quite see my way through that. I do think
that there's this powerful impulse to rationalize. When people are
confronted with the Singer argument, there's this powerful impulse to
rationalize to say, "Oh well, I'm okay because ..." and then you get these
stories.
I teach Peter Singer's argument in my giant class on evil. The same class I
mentioned earlier, where I talked about the Holocaust and that sort of
thing. Usually about 400 students. So I present Singer's argument and
then I do a vote, "How many people agree?" Only a minority agree.
Then I just say, "Okay, tell me why it's wrong." People are very good.
They come back with all these rationalizations. But I've done enough
background reading on it that I can bat down the kinds of rationalizations that students are able to come up with in intro classes after just a day.
And I can argue them on this question, but it doesn't change their minds,
they just reach for other rationalizations.
I think this is an interesting feature of human psychology. I think we have
these moral opinions and they are grounded emotionally in a certain
way. Then the reasoning you justify them with comes afterwards. And
among the moral opinions that most people have is “It's not morally
wrong for me to live a middle-class American or Western European
lifestyle.” They will then try to justify that.
Now I think one possible way to -- it doesn't completely avoid your
dilemma, but I think it avoids the madness of the dilemma -- is to become
a little more comfortable with seeing yourself as short of the moral ideal,
without giving up on progress and without falling into despair as a result of that. I
think we sometimes put ourselves in a little bit of a bind by being so hard
on ourselves for being immoral. We really don't want to see ourselves as
immoral.
If you become somewhat okay with seeing yourself as immoral, so that
you can acknowledge, "Hey look, you know what, I've got this racist and
sexist stuff in me, and that's wrong. And I can admit that about myself.
And I've got this selfish stuff in me that's willing to let that baby on the
other side of the world drown. And that's wrong,” but not so much that
you then collapse from it. You know what I mean?
Julia Galef:
Yeah. I've been playing with this related idea, of not trying to answer the
question of “How moral should I be?” in terms of states, like 75% moral
or 80% moral, whatever. But in terms of vectors.
Okay, here's a different way to state it that might be clearer. There's a
distinction you can make between ways of approaching utilitarianism, or
effective altruism, or maybe other ethical systems as well; I have less
experience with those. And I've heard the distinction referred to as
obligation versus opportunity.
Obligation is the more traditional one -- like, these are rules and I have an
obligation to follow them. Opportunity is more like looking around the
world for -- for lack of a better word, good bargains, ethically speaking.
Looking for things where, for not too much sacrifice or effort or cost on my part, I can prevent a lot of harm. Or do a lot of good.
It's a decision-making rule that doesn't necessarily imply a state that you
need to end up at, a percentage of morality. And it's a rule that, by
stipulation, feels not too effortful to follow. I do think there's a fair
amount of low-hanging fruit in the world where you’re like, "I actually do
like grilled cheese sandwiches just as much as burgers." This won't be
true for everyone, but if it were true for you, it's pretty easy to switch to
grilled cheese sandwiches for your daily lunch instead of burgers. And do
a lot of good that way, or something like that.
Or fill in your own example, where it's not that hard. Or take the money
you're currently donating to a charity that maybe doesn't have as good of
an empirical track record of doing good, and just switch that same
amount of money to a more effective, empirically validated charity.
That's a very good bargain, that costs you nothing. And it does more good
in expectation. It's not a full, principled answer to the question of where
to draw the line -- but it feels at least like an improvement over the two
options.
Eric Schwitzgebel:
It's a nice way of framing it. It also sounds a little like the philosophical
concept of supererogation.
Julia Galef:
Yes! Do you want to explain that?
Eric Schwitzgebel:
Sure. The idea of supererogation is that you're morally required to do a
certain minimal amount of stuff. There's other stuff that would be good
to do, but you're not required to do it. You might think, "Well look,
what's morally required is not to kill people and not to lie out of self-interest and not to take money from people, and stuff like that." And
then it's morally good, but it's not required, to give large amounts to
charity. Or something like that.
One of the things about Singer's argument, I think, that's interesting, is it
doesn't take advantage of that. It's not just good but not required to save
the child. It's actually required to save the child. There's something
comfortable about the supererogation move. I think it can be a
sophisticated or subtle way of doing rationalizations a little bit better. I'm
not sure it's fully satisfactory.
Julia Galef:
Yeah. Have you, by any chance, heard the term Schelling fence? Like a
Schelling point, but a Schelling fence?
Eric Schwitzgebel:
No, I don't know this one. This is from economics?
Julia Galef:
No, I think it's from a blog post by this really great blogger who's also a
friend of mine. He writes a blog called Slate Star Codex, which I've
plugged on the show before.
Anyway, he's talking about these slippery slope dilemmas, where, like, I
could always do more, but where do I draw the line? Or, the fetus
becomes a human or a person at some point, but where? It's not really
clear.
A Schelling point in economics, for people who haven't heard the term, is a point that is significant, not because of any inherent property of it,
but just because people have agreed that it's significant. They've just
chosen that point. Like if you were to try to meet someone in New York
without having arranged ahead of time where to meet them, for
whatever reason. A plurality of people would pick Grand Central Station,
as just sort of a meeting point. And the reason they pick it is because they
expect other people to expect them to pick it, etc.
So a Schelling fence is just an arbitrary threshold line that you draw on a
slippery slope, to keep yourself from sliding down the slope all the way,
basically.
Eric Schwitzgebel:
Okay, good.
Julia Galef:
That's what the supererogatory line feels like to me. That there doesn't
seem to be any inherent reason to call certain actions required, and other
actions optional. But we need some things to be optional, or else -- madness.
Eric Schwitzgebel:
Right. Let me connect it a little bit to some of the stuff we were saying
about belief. On this broad conception of what it is to believe something,
we have, I suspect, all kinds of elitist and racist and classist and ageist and
beauty-driven things, that aren't quite beliefs, that we would
intellectually reject. But they’re an important part of the way that we go
through the world reacting to people.
Those are all, I think, morally problematic. And some of the people who
favor more intellectualist approaches to belief defend that in part by
saying, "Well look, if you tell people, ‘Hey, you think that you believe that
women are just as smart as men, and that all the races are equal, and
that there's nothing wrong with poor people or ugly people, you think
you believe those things but you don't really fully believe them. You're
more in a mixed-up state’, then people will react negatively to that.
Because they feel like their authority and their moral character is being
challenged.”
And I think that's true, that people will react negatively to that. That
creates a problem. But I think part of the reason people react negatively
to that is that people are so invested in seeing themselves as morally
non-criticizable. If we can accommodate ourselves to accepting moral
criticism without going too far toward despair over it or feeling too bad about it -- feeling the right amount of bad about it -- then I think that creates ... That's a tool that can help us not need defense.
The thing that's attractive about supererogation, I think, or one of the
things that's attractive about it, is that you can say, "I'm morally flawless.
I might not be morally ideal, because I'm not doing all the good things,
but I haven't crossed the line to what's bad." This investment in seeing yourself as fully morally non-criticizable -- except maybe in certain acknowledged things that you regret about the past, or something like that -- is part of the pressure, I think, towards supererogation.
If we can pull away a little bit from that view, and allow more of a
gradated view across the spectrum, then you don't need to have this
fence, or draw this line. You might not be bright red but -- just to take an
example of a spectrum between, say, blue and red, right?
Julia Galef:
Right.
Eric Schwitzgebel:
You don't have to say, "Okay, here's the line where red starts. Here's the line below which you're morally criticizable, and above which you're fine, and only headed toward being closer to the ideal." You can
stick with a spectrum type view. Maybe a multidimensional spectrum.
You can stick with a spectrum type view, I think, if you're okay seeing
yourself as purple.
Julia Galef:
But am I bluish-purple or am I reddish-purple? Tell me, Eric!
Eric Schwitzgebel:
I think it's good to be not fully satisfied with being purple, and to try to be
moving toward the red.
Julia Galef:
We're almost out of time, but I want to throw one more idea at you to
see what you think.
This just occurred to me as a potential justification for there being a
difference in kind between supererogatory and non-supererogatory
behavior, rather than just a difference in degree. The difference in kind
might come down to: Does my current society basically agree that this
behavior is moral or not?
This actually came up in a debate with ... I have a lot of friends with very
strong opinions about ethics. And one divide between them is on the
issue of animal rights, and whether it's wrong to eat animals or animal
products at all.
The people in the "it's wrong to eat animals" camp are often pretty assertive about trying to pressure or shame the people who disagree with them into not eating animal products. This is causing some tension
among people who care about ethics. And it'd be better if we could all
get along and work together.
But the animal welfare side says, "Well, this is really important. There's all
the suffering happening and if we can prevent the suffering by putting on
a little bit of pressure, like sacrificing a little bit of group harmony, then
that seems like the right trade-off.”
It's a little bit hard to argue with that from their perspective. But one
argument that I've seen, that I liked, is: It seems appropriate to use
shaming or ostracization to discourage behaviors that are in violation of
what we all agree is morality. Like if someone is beating their girlfriend or
something. We don't want to say, "Well, live and let live.” We want to
discourage some behaviors.
But if we allow public shaming and harassment and ostracization for any
behavior that you think is wrong, even if the majority disagrees with you -- then if everyone follows that policy, we all end up shaming and
harassing each other. And then we have no society or harmony at all.
These two outcomes seem very different to me.
Eric Schwitzgebel:
Yeah. I'm more interested in the first-person basis, I guess -- how you think
about yourself. It's clearly important to hold other people to moral
standards, and that sort of thing. But I think dilemmas arise pretty vividly
also in the first person case -- and for me, at least, I'm sufficiently
skeptical about my own moral opinions that I don't want to try to inflict
them very much on others. Then it's not as clear that what you're saying
will work to help ...
Julia Galef:
… to help an individual decide for himself, or to draw the line.
Eric Schwitzgebel:
Yeah.
Julia Galef:
Yeah, fair. On that note…
Eric Schwitzgebel:
That depressing note! Hopefully not too depressing, just the right ...
What I want is the right amount of depressing.
Julia Galef:
All right. Let's wrap up this part of the podcast and we'll move on to the
Rationally Speaking pick.
[musical interlude]
Julia Galef:
Welcome back. Every episode of Rationally Speaking, we invite our guest
to recommend a pick of the episode. That's a book or a website or
anything that tickles his or her rational fantasy. Eric, what's your pick for
this episode?
Eric Schwitzgebel:
My pick is Jonathan Haidt's Righteous Mind.
Julia Galef:
Oh, excellent.
Eric Schwitzgebel:
You know that book?
Julia Galef:
I think I got it for my mom for Christmas.
Eric Schwitzgebel:
Sorry, I shouldn't ask you on the spot and put you in an awkward
position. But yeah, it's a wonderful book. Haidt has been very influential
in moral psychology, but it's also very accessible to general readers. It's a
perfect mix of accessible and field-pushing.
He talks a lot about rationalization and the emotional basis of our
morality. In this view, our moral judgments are grounded in a variety of
emotional reactions. Then after the fact, we come up with rational
justifications for those emotional reactions.
He also thinks, pretty interestingly, that there's not just one emotional
foundation, or intuitive foundation, of morality -- but several. And
different people differ in how much emphasis they've put on those. He
thinks that some people put a lot of emphasis on violations of care and
harm for people. And other people put a lot of emphasis on violations of
standards of loyalty, or standards of purity. Whereas other people don't
have much interest in loyalty and purity. And those differences in moral,
emotional reactions can explain political differences and that sort of
thing.
It's a fascinating book that I would recommend to podcast listeners.
Julia Galef:
I would definitely recommend it, too. I think it's really influenced the
public, or intellectual, debate and discussions about politics and about
psychology and morality. So it's great to be well-versed in it even just for
that.
Eric Schwitzgebel:
Yeah.
Julia Galef:
Cool. All right, well we are officially out of time now. Thank you so much
for joining us. I thought this was just a delightful, super fun discussion
from my perspective at least. I hope you've enjoyed it, too.
Eric Schwitzgebel:
Yeah, thanks.
Julia Galef:
Well, this concludes another episode of Rationally Speaking. Join us next
time for more explorations on the borderlands between reason and
nonsense.