Andrew Sepielli

Moral Uncertainty
Just as we may be uncertain about the non-moral facts, we may be uncertain about the
moral reasons to which those facts give rise. For example, I may be uncertain whether an action
will maximize utility, but I may also be uncertain whether maximizing utility is right in the first
place. This prompts the question of what a person should do in light of fundamental normative
uncertainty. A natural proposal is that one should act on the moral view that is most probably
true. But some have argued that, in certain cases, we should morally “hedge our bets” – act on
less probable moral theories if those theories have significant moral value at stake. However,
there are potential worries about this sort of moral hedging. It relies on comparing degrees of
value across moral views, but it is not clear that this is possible. Another worry is that it may
favor absolutist moral views, which arguably assign infinite value or disvalue to some actions.
There are also concerns about the moral uncertainty issue quite generally. First, suppose that
moral judgments are not belief states, but rather non-cognitive attitudes. Uncertainty consists of
having intermediate degrees of belief, so it would seem to follow that there is no such thing as
moral uncertainty. Second, just as we may be morally uncertain, we may be uncertain among
views about what to do under moral uncertainty. Does the presence of this higher-order
uncertainty affect what the agent should do under moral uncertainty? Does it imperil an agent's
ability to guide her behavior by norms? These are concerns that any theory of what to do under
moral uncertainty must contend with.
1. The Question Clarified, and Some Possible Answers
2. Objections to Moral Hedging
3. Concerns about the Debate Generally
1. The Question Clarified, and Some Possible Answers
Suppose you are uncertain whether utilitarianism, deontology, or some other moral theory is
true, and face a situation in which the theories disagree about what to do. You wonder, “What should I
do?” In some sense, the answer is clear: whatever the correct moral theory says. But that is clearly not
the sense of “should” we have in mind in asking the question. We have in mind a more “subjective”
sense of “should” – one that depends on the agent's beliefs, or the evidence to which the agent has
access, or some sort of (perhaps imprecise) probability (see Probability, interpretations of).
We might clarify the question further. There is a distinction, at least in principle, between what
it's morally best to do and what I'm morally obligated or required to do. I might coherently say that
giving the majority of one's income to charity is morally best but that one is not morally obligated to do
it (See Help and beneficence). We might, then, distinguish the question: “What is it subjectively best
for me to do when I don't know what to do?” from the question: “What am I subjectively required to do
when I don't know what to do?”. The first philosophers to write on this question were Catholic moral
theologians in the casuistical tradition; they focused on something like the second question. (See
Jonsen and Toulmin 1990). More recent writers on moral uncertainty have focused, though not
exclusively, on the first question, and so that shall be our focus in this article (See Ross 2006, Sepielli
2009, 2010).
So what is the answer to this question? It is often tempting to say, “Don't do anything.
Deliberate about what to do until you arrive at a conclusion you're sure of, and then act on that.” This
reaction seems mistaken in a few ways. For one thing, deliberating is not an alternative to acting. It is a
kind of acting, and as such it must compete with other actions for the agent's choice. Once we see that,
it becomes obvious that deliberating will not always be the right choice. Suppose a deep-sea fisherman
can either save five people whose boat capsized because of sheer bad luck, or seven people whose boat
capsized because of their own recklessness. Clearly the right thing to do is not for the fisherman to sit
around deliberating about the moral import of recklessness until he or she has got a settled view. By
that time, all twelve people will have drowned.
With that said, sometimes deliberating will be the subjectively best thing to do. There is an
argument, presented most famously by I.J. Good (1967), for the subjective rationality of basing one's
action on more evidence rather than less, and thus for the rationality of sometimes gathering more
evidence prior to action. Some philosophers have suggested that we apply Good's insight to the
question of whether and how much to deliberate before doing other actions under moral uncertainty.
(Sepielli 2010). But it is a live question whether Good's argument is truly helpful here, largely because
it is unclear how deliberating and evidence-gathering are related to one another.
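Good's result can be stated informally as follows (the notation is a standard reconstruction, not Good's own). If observing a piece of evidence E is cost-free, and the agent will maximize expected utility once she has observed it, then

\[ \mathbb{E}_E\!\left[\max_a \mathbb{E}[U(a) \mid E]\right] \;\geq\; \max_a \mathbb{E}[U(a)], \]

with strict inequality whenever some possible observation would change which act maximizes expected utility. Applying this to moral deliberation requires treating the deliverances of deliberation as evidence of this kind, and that is just what is in question.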
Some are attracted by the view that it is subjectively best to do whatever is most likely to be
objectively best. But upon reflection, this view seems implausible. To see why, first consider a case of
non-moral uncertainty. Suppose a doctor is unsure how a drug will affect a patient. The doctor thinks it
is more likely than not that the drug will cure the patient's minor and temporary sinus infection, but that
there's a decent chance the drug will kill the patient. Since the drug will more likely than not help the
patient, prescribing it is most likely best. And yet, it seems obvious that the doctor should not, in the
subjective sense, prescribe the drug.
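The structure of the case can be made explicit with a rough expected-value calculation; the numbers are stipulated purely for illustration. Suppose the chance of a cure is 0.9 and the chance of death is 0.1, and suppose that on some common scale curing the minor infection is worth +1 while killing the patient is worth -1000. Then

\[ EV(\text{prescribe}) = 0.9 \times 1 + 0.1 \times (-1000) = -99.1, \qquad EV(\text{withhold}) = 0. \]

Prescribing is most likely best, yet its expected value is far lower than that of withholding.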
It seems that there can be analogous cases of moral uncertainty. Suppose I think that retribution
is most likely a sufficient, though weak, ground for punishment. However, I think that if retribution is
no ground for punishment, then punishment that advances no other aim (e.g. rehabilitation, deterrence,
etc.) is horribly wrong. For in that case, such punishment would be seriously harming someone for no
good reason. It may be subjectively better, then, for me to abstain from inflicting solely retributive
punishment, even though I think that it is probably the objectively right thing to do.
The idea beginning to emerge is that, in deciding what to do under moral uncertainty, we should
care not only about the probability that action A is objectively morally better than action B; we should
also care about how the difference in moral value between the actions, if indeed A is better, compares to
the difference in moral value between them, if B is better. If the latter difference is larger than the
former one, then perhaps it is subjectively better to do B, even if A is more probably better. Let us
apply the label “moral hedging” to this sort of strategy.
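Schematically, and granting for the moment that value differences can be compared across moral views (an assumption examined in §2 below): let p be the probability that A is objectively better than B, let \Delta_A be how much better A is if it is better, and let \Delta_B be how much better B is if it is better. Moral hedging then recommends doing B just in case

\[ (1-p)\,\Delta_B \;>\; p\,\Delta_A. \]

In the punishment case, for instance, even if the probability that retribution justifies the punishment is 0.7, hedging favors abstaining whenever the wrong of unjustified punishment is more than 7/3 times as large as the weak retributive good: e.g. 0.3 \times 10 > 0.7 \times 1.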
2. Objections to Moral Hedging
Despite its intuitive appeal, moral hedging faces some rather serious objections.
Perhaps the most prominent among these is what we might call the Problem of Intertheoretic
Comparisons of Value (Lockhart 2000, Ross 2006, Sepielli 2009, 2010, forthcoming-b). Suppose that I
am uncertain whether utilitarianism or contractualism is the correct moral theory, and am faced with a
situation in which they disagree about what to do. In order to morally hedge, I will need to compare the
differences in moral value between the prospective actions, according to utilitarianism, with the
differences in value between them, according to contractualism. The problem is that neither
utilitarianism nor contractualism seems to give us the resources to make such comparisons. A moral
theory tells me how actions compare according to itself. It does not tell me how its own value
differences compare to the value differences of theories that, from its “perspective”, are wrong. This is
not primarily an epistemic problem. It is not simply that I do not know how the differences in value
according to utilitarianism compare to the differences according to contractualism. The Problem of
Intertheoretic Comparisons is deeper than that. It is that such comparisons may not even be meaningful.
Now, it is not entirely clear how seriously we ought to take this sort of skepticism about
meaningfulness. For it seems manifestly obvious that certain intertheoretic comparisons are
meaningful. When I say that the “upside” of eating meat, if eating meat is permissible, is smaller than
the “downside” of eating meat, if eating meat is impermissible, I am making at least a rough
intertheoretic comparison of value. Even those who disagree with this statement will typically grant
that it is at least intelligible.
But even if we reject skepticism about the meaningfulness of intertheoretic comparisons, we
might still wonder what grounds or makes true these comparisons. As was suggested above, it does not
seem that we can appeal to the structures of the moral theories themselves, taken separately. In any
event, there have been a few suggestions about how to ground intertheoretic comparisons of value, but
none have attracted universal assent. More work needs to be done.
Another problem for moral hedging is something we might call the Absolutism Problem (Ross
2006, Sepielli 2010). For suppose some moral theory says you may not violate some absolute
prohibition – against lying, against intentionally causing someone pain, etc. – no matter how much
good you can accomplish by doing so. Then it seems that, according to this theory, the disvalue of
violating that prohibition is infinite. But then it seems like we subjectively ought to obey this theory's
prohibition so long as we have even the slightest degree of belief in it. This should strike us as
implausible.
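In expected-value terms, the worry runs as follows. If I assign credence \varepsilon > 0 to a theory on which violating the prohibition has infinite disvalue, and every other theory I entertain assigns only finite values, then

\[ EV(\text{violate}) = \varepsilon \cdot (-\infty) + (1-\varepsilon) \cdot (\text{finite}) = -\infty, \]

so compliance beats violation no matter how small \varepsilon is, and no matter how much finite good the violation would secure.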
The moral hedger has a few responses available. First, they may deny that, simply because a
theory absolutely prohibits an action, it assigns infinite disvalue to that action. There may be other (and
better) ways to numerically represent absolutist theories – notably, by assigning a finite disvalue to the
violation of an absolute prohibition, and assigning value to the accomplishment of good ends through a
function bounded by that disvalue (Sepielli 2010). Second, the hedger may opt for a kind of moral
hedging that sometimes counsels us to perform actions with infinite expected disvalue. For example, the
hedger might adopt a theory according to which what's subjectively best depends only on the objective
value assignments of theories that clear a certain threshold of probability. On such an approach, the
hedger can safely ignore a theory that absolutely prohibits, say, lying, if her credence in that theory is below the
threshold. There may be more responses besides these.
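The first response can be illustrated with a simple construction; the particular function below is my own stipulation, offered only to show that such representations are available. Let violating the prohibition carry finite disvalue -D, and let the value of accomplishing an amount g of good be given by a function bounded above by D, for instance

\[ v(g) = D \cdot \frac{g}{g+1} < D \quad \text{for all } g \geq 0. \]

Then violating the prohibition to accomplish good g has total value -D + v(g) < 0, so no finite amount of good ever outweighs a violation. The theory's absolutist verdicts are preserved without any appeal to infinite disvalue, and the hedger's calculations are no longer swamped by arbitrarily small credences.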
There are other worries that might be raised for moral hedging: Does it in some way display a
lack of integrity to act in accordance with theories that one thinks are probably mistaken (See
Integrity)? In taking into account the sizes of value differences, should we treat the kind of value that
separates the supererogatory from the merely permitted differently than we treat the kind that separates
the merely permitted from the forbidden (See Supererogation)? And what about theories that do not
talk about value or value differences at all – that simply tell us what is required or permitted or
supererogatory; or theories that say there are different types of moral value that are all
incommensurable or incomparable (See Incommensurability in ethics)? How should we take these
theories into account in deciding what to do? These are all questions about which more needs to be
said.
3. Concerns about the Debate Generally
There are also concerns we might raise not for any particular view within the moral uncertainty
debate, but for the sensibility of the debate quite generally.
One is that, if non-cognitivism is true, then there is no such thing as moral uncertainty. Non-cognitivism is the view that moral judgments are not cognitive states like beliefs or degrees of belief
(See Moral judgment, §1). But uncertainty is defined as a state of having intermediate degrees of belief.
So it seems that non-cognitivism is incompatible with moral uncertainty.
The right response, it seems, is simply to admit this incompatibility, but to urge that there can
be something like moral uncertainty even if non-cognitivism is true. After all, just as there are degrees
of belief, there are degrees of non-cognitive states like desire. But as Michael Smith (2002) has argued,
it is not obvious that such a simple substitution will work. Consider that, on the cognitivist view, there
are two parts of a moral judgment that can come in degrees. First, there is the degree of the belief itself.
My degree of belief that genocide is wrong is higher than my degree of belief that abortion is. Second, there is the degree of
value represented in the belief. I might be certain both that genocide is wrong and that using racial slurs
is wrong, but I think the first is far worse than the second. So which of these two gradable features are
degrees of desire, on the non-cognitivist picture, supposed to correspond to? It does not seem like we
can say “both”. The answer for the non-cognitivist may be to offer a more complex non-cognitivist
theory, on which moral judgment has two gradable elements, just as it does on the cognitivist picture
(Sepielli forthcoming-a).
Perhaps the most fascinating problems arise from the fact that views about what to do under
moral uncertainty are themselves potential objects of belief or uncertainty (See Sepielli 2010).
One such problem concerns the guidance of action (See Action). The reason we asked what to
do under moral uncertainty in the first place was, presumably, that we wanted some answer with which
to guide our actions. But just as I might be uncertain between utilitarianism, deontology, and all the
rest, I might be uncertain among theories about what to do under moral uncertainty. In that case, I
would presumably need a theory about what to do under that uncertainty in order to guide my actions.
But of course, I might be uncertain regarding that sort of theory, too. You can see how this might
iterate. If, as is entirely plausible, I am morally uncertain at all “levels”, then it seems I will be unable
to guide my action. I will just have to take a “leap of faith”, as it were. But then our aim in asking what
to do under moral uncertainty is unfulfilled.
Another such problem is just what to say about the subjective rightness of the actions of people
who are either uncertain or outright mistaken about what to do under moral uncertainty. For suppose it
is correct, as suggested earlier, to morally hedge, but that I do not believe this. Instead, I believe that I
ought to do whatever is most likely to be objectively best. Is it subjectively right for me to morally
hedge? On one hand, it is tempting to say yes: That is what we had said was subjectively right, and my
disbelief does not change that. On the other hand, it is tempting to say no: Subjective rightness is
supposed to be relative to a person's perspective. That is just what it means for it to be subjective. And
from my perspective, it is a mistake to morally hedge. If neither answer is satisfactory, we might
suppose that this redounds to the discredit of the question, and indeed, to the moral uncertainty debate
generally.
ANDREW SEPIELLI
References and Further Reading
Good, I.J. (1967) 'On the Principle of Total Evidence,' British Journal for the Philosophy of Science 17:
319-321. (Proves that, under certain conditions, the expected value of gathering more evidence is
positive.)
Guerrero, A. (2007) 'Don’t Know, Don’t Kill: Moral Ignorance, Culpability and Caution,'
Philosophical Studies 136: 59-97. (Argues that one may be blameworthy for killing a being when one
is uncertain about that being's moral status.)
Jonsen, A. and Toulmin, S. (1990) The Abuse of Casuistry, Berkeley: University of California
Press. (A history of casuistical moral philosophy and theology, including that tradition's treatment of
moral uncertainty.)
Lockhart, T. (2000) Moral Uncertainty and its Consequences, Oxford: Oxford University Press. (A
comprehensive treatment of the problem of what to do under moral uncertainty, including an extensive
discussion of the Problem of Intertheoretical Comparisons of Value.)
Ross, J. (2006) Acceptance and Practical Reason, Rutgers University Ph.D. Dissertation.
(Distinguishes the question of when we ought to accept a normative theory from that of when we ought
to believe a moral theory, and offers an extended discussion of theory acceptance under uncertainty.)
Ross, J. (2006) 'Rejecting Ethical Deflationism,' Ethics 116: 742-68. (Argues that we ought not to
accept moral skepticism or moral theories on which the differences in value between actions are small;
also includes a discussion of the Problem of Intertheoretic Comparisons of Value.)
Sepielli, A. (2009) 'What to Do When You Don't Know What to Do,' Oxford Studies in Metaethics, Vol.
IV, Oxford: Oxford University Press: 5-28. (Presents a defense of moral hedging, and the outlines of a
solution to the Problem of Intertheoretic Comparisons of Value.)
Sepielli, A. (2010) 'Along an Imperfectly-Lighted Path': Practical Rationality and Normative
Uncertainty, Rutgers University Ph.D. Dissertation. (A general account of what to do under normative
uncertainty; discusses all of the issues mentioned in this article.)
Sepielli, A. (forthcoming-a) 'Normative Uncertainty for Non-Cognitivists,' Philosophical Studies. (An
argument that, if non-cognitivists can solve the Frege-Geach problem, they can accommodate the
existence of moral uncertainty.)
Sepielli, A. (forthcoming-b) 'Moral Uncertainty and the Principle of Equity among Moral Theories,'
Philosophy and Phenomenological Research. (A criticism of Lockhart's (2000) solution to the Problem
of Intertheoretic Comparisons of Value.)
Smith, M. (2002) 'Evaluation, Uncertainty and Motivation,' Ethical Theory and Moral Practice 5: 305-20. (Argues against non-cognitivism on the grounds that it cannot accommodate the existence of moral
uncertainty.)