Acting Without Choosing
Author(s): Hilary Bok
Source: Noûs, Vol. 30, No. 2 (Jun., 1996), pp. 174-196
Published by: Blackwell Publishing
Stable URL: http://www.jstor.org/stable/2216292
Noûs 30:2 (1996) 174-196
Acting Without Choosing*
HILARY BOK
Pomona College
We sometimes look back on regrettable episodes in our lives and find
our conduct simply inexplicable. How, we ask ourselves, could we have
done that? And why? We were not drugged, brainwashed, or hypnotized. No one held a gun to our heads and forced us to act as we did.
We knew, at the time, that what we were about to do was wrong; and
yet we did it. We might wish that we could describe ourselves as having
chosen to set aside our principles in favor of our own interests, or as
having yielded to an overwhelming desire to do what we did. For then
we would at least be able to identify some reason why we acted as we
did; and to console ourselves with the thought that, while we might
have sold our self-respect too cheaply, we did not simply watch as it
slipped away. But much as we might wish to believe that we had some
reason for acting wrongly, it sometimes seems more accurate to say that
we acted as we did because of a kind of inexplicable passivity: because
we failed to extricate ourselves from a course of action we knew to be
wrong; because, rather than take responsibility for our actions, we
simply allowed ourselves to drift; because we never really forced ourselves to confront the alternatives available to us, and to choose among
them.
I will argue that this intuitive description is in fact accurate: that we can
and do perform actions we know to be wrong simply because we fail to
decide what to do. I will then try to show that once we recognize this fact,
we can identify a character trait which any plausible moral theory which is
not strictly self-defeating must require that we develop. Finally, I will
sketch some implications of this argument for the role of virtue in moral
theory, and for the nature of moral objectivity.
© 1996 Blackwell Publishers Inc., 238 Main Street, Cambridge, MA 02142, USA, and 108 Cowley Road, Oxford OX4 1JF, UK.
I
The idea that we can knowingly act wrongly simply because we do not choose
among the alternatives available to us runs counter to a widely shared assumption: that our actions must reflect the balance of our various reasons,
desires, and motivations to perform or avoid the alternatives available to us;
or, in Donald Davidson's useful shorthand, of our pro- and con-attitudes
towards those alternatives. For instance, R. B. Brandt has written that "an
agent's tendency, at a given time, to perform a given act is a sum: the sum of
the intensity of his desire/aversion toward any consequence of that action . . . reduced by the subjective improbability of its occurring if the action
is performed, and reduced again by the lack of salience of that consequence
and its relation to action in the awareness of the agent - the sum over all the
anticipated consequences of the act. Then we can affirm that what an agent
actually does is adopt whichever option is the one he has, at that time, the
strongest tendency to adopt - for which this sum is greatest."1
One can hold this assumption and still allow that we often do things
which, in some sense, we do not really want to do, so long as one attributes
these lapses to some fault in the process whereby we weigh our various
attitudes against one another. For instance, some of our concerns might be
less salient to us than they should be; others might be such that further
reflection would lead us to abandon them entirely. We might weigh our
various pro- and con-attitudes on the wrong scale: for instance, by asking
not which gives us the strongest reason for action, but which has the most
psychological force. Our deliberative strategies might also be faulty: we
might, for instance, spend too much time searching for relevant considerations which have not yet occurred to us. However, recognizing that we can
come to the wrong conclusion about which of our alternatives is supported
by the balance of our pro- and con-attitudes leaves intact the assumption
that some such weighing of attitudes precedes each of our actions, and that
to explain why we do what we do is to explain why, when we weigh our pro- and con-attitudes towards our various alternatives against one another,
those supporting the action we actually perform tip the scale.
In what follows I will refer to this weighing of attitudes as a 'choice'. A
choice, in this sense, need not involve explicit deliberation, nor need it be
conscious. It need only involve some sort of selection among the actions
available to the agent, such that a request for the reasons why the agent
selected the action she did is in order.2 When an agent performs an action
because she chose to do so, we explain her action by explaining why she
selected it from among the alternatives available to her. When she performs an action because she did not choose among the alternatives available to her, we explain her action by explaining why she did not make such
a choice, and why her failure to do so led her to act as she did.3
Why should we assume that our actions must reflect such a choice? The
most obvious reason is that there must be some explanation of why an
agent performs one action rather than another, and that in the case of
voluntary actions this explanation must have something to do with the
agent's pro- and con-attitudes. But this does not imply that our actions
must reflect a choice among our alternatives unless we assume that we can
hold such attitudes only towards those alternatives, and not towards the act
of choosing itself.
This assumption is clearly false. We can and do regard the act of choosing among our alternatives itself as something we might want either to
engage in or to avoid. Sometimes we avoid making choices for reasons that
seem likely to withstand reflection. We may not bother to weigh our reasons for action when nothing important seems to be at stake; we can allow
ourselves to be guided by habit when we see no reason not to; and we can
defer difficult decisions because we think that the costs of evaluating our
alternatives are likely to outweigh the benefits of settling on the right one.
These sorts of moral satisficing are unlikely to lead us knowingly to act
wrongly, since we engage in them precisely because it seems either difficult
or unimportant to decide what to do. But we can also avoid making choices
because, in difficult situations, we are uncomfortable with the act of deciding among our alternatives itself.
For instance, when an agent has some reason not to perform any of the
actions available to her, when each seems to her to involve some real
sacrifice, or to require that she violate some (moral or non-moral) standard
of conduct to which she is committed, she might want to avoid accepting
responsibility for performing any of them. By avoiding a decision, she may
be able to maintain the illusion that she is still free to adopt any of her
alternatives; that she has not yet committed herself to any, and thus has not
yet made the sacrifice this would require. She may also be able to convince
herself that because she has not yet identified any of her alternatives as the
one she chooses to perform, her subsequent conduct is in some sense not
her doing, not really her fault. In this way she can avoid (in her own eyes)
accepting responsibility for her actions.
Moreover, an agent might feel uncomfortable with the very idea of
weighing her various reasons for action against one another. She might feel
that she did not have the right to assess the claims of conflicting standards
of behavior; that they ought to have a kind of unquestioned authority over
her; or that to choose between them would involve a kind of disloyalty or
rebellion. She could also feel uncomfortable with the implications of deciding among their various demands, preferring to see her life as guided by
external standards whose claims she was not entitled to evaluate. This, too,
would make her reluctant to decide which of those standards to violate.
For these and other reasons, some people might find the act of choosing
among undesirable alternatives to be itself undesirable. Some of them
presumably manage to overcome their aversion to making such choices.
But it is implausible to suppose that all of them must overcome it; that none
will ever respond to this reluctance by avoiding making such a choice. I
therefore conclude that the claim that our actions must always be preceded
by a choice among our alternatives is false.
Still, one might wonder how a failure to choose among her alternatives
could possibly lead an agent to perform one of them. This question seems
puzzling as long as one assumes that all actions begin, so to speak, 'from
rest': that if an agent does not choose among the alternatives available to
her, she will not act at all. But this assumption is false. When we fail to
decide among our alternatives, we do not simply freeze. Instead, we continue to do what we are already doing.
An agent usually has some idea of what she is doing, often cast in terms
of a social role (spending time with her friends), an activity (cleaning her
room, unifying field theory), or both (being an experimental subject). This
understanding need not, and often will not, be conscious or explicit; it is,
roughly, the description the agent would give of her course of action were
she asked to do so. She also has some idea of which kinds of behavior are
appropriate to this course of action: of how she relates to her friends, what
being an experimental subject involves, and so forth. Her description of
her course of action, together with her understanding of what it involves,
determine a range of behavior which she will see as continuations of her
current course of action. If, for instance, she is participating in a scientific
experiment, the actions she takes participation to involve (following the
experimenter's instructions, for example), and those of her habits which
bear on that course of action (her usual deference to authority) will determine what 'continuing to do what she has been doing' means to her.
Once an agent has adopted a given course of action, she does not have to
choose to perform each of the actions it involves. If, for example, she is
cleaning her room, she need not decide or will anew each time she escorts
another shirt to her closet. Instead, having adopted a given course of
action, she will continue to perform it until she decides to stop, or until that
course of action reaches what she takes to be its end (her room is clean, she
unifies field theory, her friends leave). Her existing course of action and the
kinds of behavior she understands it to involve determine what she will do
in the absence of a choice: she can decide to abandon that course of action
at any time, but unless she makes such a decision, she will continue to
perform it.
The same is true of broader courses of action. Consider, for instance, a
person who begins to suspect that she ought to give up her current job. If,
rather than confront the decision whether or not to do so, she simply banishes this disquieting thought from her mind, she will probably continue to
go to work, to carry out her various duties, and in general to perform the
various actions which constitute 'doing her job'. When one accepts a job,
one regards the question whether one will perform the various actions which
that job involves as settled. As long as one regards that question as settled,
one will continue to perform those actions. One can, of course, reopen that
question at any time by asking oneself whether or not one ought to give
notice. But unless one does so, one will continue to do one's job.
If an agent is unwilling to make choices among undesirable alternatives,
she may knowingly do something she thinks is wrong simply because she
does not make such a choice. An agent adopts a particular course of action
on the basis of the information available to her at the time. She might not
know exactly what it will involve, or she might be unaware of some fact in
whose light actions which had seemed to her innocent turn out to violate
her moral principles. Her information about her course of action might
change while she is performing it; in particular, she might learn that it
involves actions which she believes to be unethical. If this happens, she will
be faced with a decisional conflict: a situation in which she has reasons for
performing several different and incompatible courses of action.
An agent might respond to such a conflict by weighing her reasons for
action and deciding what to do. However, the existence of such a conflict
might also make her want to avoid making such a decision. If she does not
overcome this desire and choose among her alternatives, she will not
simply freeze. She will continue to perform that course of action even if
she knows that it violates her own moral principles, and even if she would
not have chosen to perform it had she chosen at all. Whether or not she
would have chosen to perform that course of action depends on the
strength of her reasons for and against doing so. These will not include
her desire not to decide among her alternatives, since it is not itself a
desire to perform or avoid any of the actions among which she must
choose. But whether or not she evaluates those reasons and makes such a
choice depends on whether or not she overcomes that desire, and not on
the strength of her reasons for and against continuing to perform her
existing course of action. If she does not choose among her alternatives
she will perform an action which she believes to be wrong simply because
she did not decide what to do, even though she might not have chosen to
perform that action had she chosen at all.
II
As an example of agents who violate their moral principles because they do
not choose among their alternatives, consider the obedient subjects in
Stanley Milgram's experiments on obedience.4 Milgram's experiment was
designed "to measure the strength of obedience and the conditions by
which it varies".5 He recognized that experimental subjects would probably
obey the experimenter's orders unless they had a clear reason not to do so.
To determine how strong a psychological force obedience was, he had to
design an experiment in which subjects were ordered to perform actions
which violated some deeply held motivation or principle. The 'counterforce' he chose to oppose to obedience was the subject's conscience.
Because he believed the principle that "one should not inflict suffering on a
helpless person who is neither harmful nor threatening to oneself" to be the
moral principle "that comes closest to being universally accepted",6 he set
up an experiment in which subjects were ordered to inflict increasing pain
on such a person. The purpose of the experiment was to see whether, and
to what extent, subjects would obey such orders.
In fact, the 'innocent fellow-subject' was an actor, and no pain was
actually inflicted on him. But Milgram's subjects did not know this. The
situation as they perceived it was this: having agreed to participate in a
study on memory and learning, they were seated at a shock generator while
another subject, the 'learner', was strapped into an electric chair. They
were asked to administer a memory test to the learner, and to shock him
each time he made a mistake. The shock-board had thirty switches, each
marked with a voltage level (15-450 volts) and a verbal designation. The
latter ranged from 'Slight Shock' through, among others, 'Very Strong
Shock', 'Extreme Intensity Shock', and 'Danger: Severe Shock' to the
ominously titled 'XXX'. Every time the learner made a mistake the subjects had to increase the voltage; when they reached 450 volts they were
told to continue to administer shocks at that level. After two more shocks
the experiment was discontinued.
Milgram did not expect many people to administer all 32 shocks. But
they did. And because one purpose of the experiment was to see which
types of people would disobey at what levels, Milgram had to keep redesigning the experiment, making it more and more ghastly in an attempt to
come up with the requisite 'spread'. In one version7 the learner mentioned
before the beginning of the experiment that he had a heart problem; and at
150 volts began to scream that his heart was bothering him. He protested
more and more vehemently until, at 330 volts, he responded as follows:
"(Intense and prolonged agonized scream.) Let me out of here. Let me out
of here. My heart's bothering me. Let me out, I tell you. (Hysterically) Let
me out of here. Let me out of here. You have no right to hold me here. Let
me out! Let me out! Let me out! Let me out of here! Let me out! Let me
out!"8 The learner was in another room: tJhesubjects could not see him, but
any questions, or give any sign of life. In this version, 65% of Milgram's
subjects delivered all 32 shocks.
Why, one might ask, did all those people deliver all those shocks? Surely
they did not all think that it was permissible to do so. They had no financial
incentive to obey: they had been paid on arrival, and told that the money
was theirs just for showing up; besides, it would be very odd if 65% of a
randomly chosen group of people could be persuaded to shock someone
into apparent heart failure for $4 plus carfare. Some of the obedient subjects might have felt that the importance of the experiment, or of keeping
their promise to participate in it, outweighed the learner's suffering and
possible death. However, it seems unlikely that all of them obeyed the
experimenter for this reason. Some of the obedient subjects tried to keep
the learner from being shocked by signaling the correct answer to the tests
they administered. As far as the experimental subjects knew, the purpose
of the experiment was to see whether people learn more quickly when they
are punished for making mistakes. To signal the correct answer to the tests
they administered was, as far as they knew, to sabotage the experiment.
Those subjects who were willing to do so in order to spare the learner pain
cannot be said to have obeyed the experimenter because they believed that
the value of the experiment, or of keeping their word, outweighed the
learner's suffering.
Nor can we maintain that most of the obedient subjects acted as they did
because they realized that the learner was not actually being shocked.
Milgram's subjects themselves deny that this was the case. When they were
interviewed after the experiment, the obedient subjects' average estimate
of the effects of the shocks on the learner was 'extremely painful'.9 Moreover, films of the experiment do not show the obedient subjects calmly
flipping switches while smiling knowing smiles. The obedient subjects are
clearly under enormous pressure. They burst into hysterical giggles for no
obvious reason, they sweat, they wring their hands and rub their faces; they
concentrate on the minute details of their task: enunciating clearly, pressing the switches just right. Subjects who delivered all 32 shocks ask questions like: "What if he's dead in there?",10 mutter "I'm shaking. I'm shaking",11 and ask the experimenter to terminate the experiment. Finally, the
experiment was designed to give the subjects unequivocal evidence that the
learner was in pain. They had received a real shock from the generator
before the experiment began. They could hear the learner's screams and
protests. Had they decided that the learner was not actually suffering, their
willingness to act on this belief without making any attempt to verify it, and
without any evidence that it was true, would itself require explanation.
I believe that many of Milgram's obedient subjects acted as they did
because they did not decide whether or not to obey the experimenter.
When they agreed to participate in the experiment, the subjects had been
told only that it was a study of memory and learning. They had no reason
to believe that their role in the experiment would be in any way morally
problematic. By the time they learned that they would have to administer
(what they believed to be) painful and possibly lethal shocks to an innocent fellow-subject, they had already agreed to participate. In so doing
they had undertaken a course of action; it was up to them to decide to
abandon it.
Milgram describes the process leading to disobedience as "a difficult
path, which only a minority of subjects are able to pursue to its conclusion.
Yet it is not a negative conclusion, but . . . an affirmative act, a deliberate
bucking of the tide. It is compliance that carries the passive connotation.
The act of disobedience requires a mobilization of inner resources, and
their transformation [into] action."12 It is, in particular, an act of self-assertion, which required that the subjects affirm their right to evaluate
their various alternatives for themselves, to arrive at their own conclusions
about what they should do, and to act on those judgments in the face of
disagreement and opposition.13 Some subjects were able thus to decide
what to do, and to act on their decision. But others seem to have viewed
their various reasons for action as isolated, conflicting forces between
which they were trapped, and not as considerations which they might evaluate and compare. They were, in a sense, paralyzed by their dilemma:
unwilling to look beyond the mere existence of a conflict to the possibility
that they might resolve it.
Such subjects believed that to inflict suffering on an innocent and
helpless person was wrong, and that they were inflicting such suffering.
They might also have believed that they had some obligation to the
experimenter; in any case, they did not want to defy him openly. Yet
they did not try to decide which of these considerations was most important, or which they ought to act on. Had they done so, they might have
decided to disobey the experimenter. Because they did not, they continued to obey him, despite the conflict between their actions and their
moral convictions.
I might support this claim in various ways: for instance, by citing the
acute levels of stress experienced by the obedient subjects or the contrast
between their verbal dissent and their actions. But two sets of considerations seem particularly striking. The first is the frequency with which those
obedient subjects who found shocking the learner morally problematic
claimed that their actions were involuntary: that they were "totally helpless
and caught up in a set of circumstances where I just couldn't deviate and I
couldn't try to help",14 or that "I went on with it, much against my will".15
Taken literally, these statements are obviously false. Presumably, what such
subjects meant by the claim that they acted against their wills was that they
did not want to shock the learner. Neither, however, did they want to
disobey the experimenter. They seem to have wanted to refrain from shocking the learner without confronting the experimenter openly; to be somehow removed from the whole situation without having to take the only step
by which they could remove themselves from it. Because they could identify 'their will' only with a course of action which it was not in their power
to bring about, they may have seen the experimental situation itself as
coercive: they could not act as they chose, since they could not choose to
perform any of the actions available to them. Unable to resolve their
dilemmas themselves, they continued to obey the experimenter's orders by
default, while protesting that it was not their will which had led them to do
so. The responsibility for the learner's suffering, they felt, was the experimenter's: for he was the only person who could have prevented them from
shocking the learner, and he had not done so.
The second consideration which suggests that some obedient subjects
made no decision as to whether or not to continue to obey the experimenter concerns the reasons they gave for acting as they did. Milgram's
subjects can be divided into three groups: those obedient subjects who did
not find shocking the learner morally problematic, those who did find
shocking the learner morally problematic, but who obeyed the experimenter's orders despite this fact, and those who refused to shock the
learner. When they were asked why they had acted as they did, those
obedient subjects who did not find shocking the learner morally problematic explained their actions in terms that suggest that they had weighed
the various reasons for action of which they were aware. For instance,
one such subject said, "I faithfully believed the man was dead until we
opened the door. When I saw him, I said, 'Great, this is great'. But it
didn't bother me even to find that he was dead. I did a job."16 Whatever
one might think of this subject's reasoning, he clearly acknowledges the
relevance to his decision of the fact that he believed that he had killed the
learner, but considers it less important than doing his job.17 Likewise, all
of the disobedient subjects quoted claimed that either the learner's suffering or his unwillingness to be shocked outweighed their obligations to the
experimenter. When breaking off the experiment, disobedient subjects
said things like "I don't understand why the experiment is placed above
this person's life",18 or "I thought I could help in a research project. But if
I have to hurt somebody to do that . . . I can't continue".19 Both groups
of subjects tried to decide what they had most reason to do, reached a
decision, and acted on it.
By contrast, none of the obedient subjects who found shocking the
learner morally problematic claimed that their reasons for obeying the
experimenter were either more or less important than the learner's suffering. When asked why they obeyed the experimenter, these obedient subjects simply restated their reasons for obeying the experimenter, saying
things like "It is an experiment. I'm here for a reason. So I had to do it."20
That they also thought that they had reasons to disobey him is clear from
their other comments.21 Yet none of these subjects said that the considerations they mentioned seemed to them more compelling than those supporting disobedience, or that they thought the 'reasons' for which they were
'here' more important than the learner's suffering. They said nothing to
indicate that they weighed the considerations which support obeying the
experimenter against those supporting disobedience, or that they tried to
determine which they should act on. Nor did any mention having made
such a decision during the postexperimental interview, when they were
asked why they obeyed the experimenter, and when one would have expected them to try to justify their actions to the experimenter and to
themselves.
Normally, when one is asked why one did something which one had
some fairly obvious reason not to do, one takes this question as a request
not for a list of considerations supporting what one did, but for an explanation of why those considerations seemed more compelling than one's
reasons for acting differently. All those subjects who disobeyed the experimenter, and all those who did not find shocking the learner morally
problematic, explained why they acted as they did by explaining why they
thought that, on balance, the considerations of which they were aware
supported the course of action they performed. Yet none of the subjects
who obeyed the experimenter even though they regarded shocking the
learner as morally problematic provided such an explanation. The best
way to account for this fact, I think, is to suppose that these subjects had
no such explanation to give. They could not explain why they actually
concluded that their reasons for obeying the experimenter outweighed
their reasons for disobeying him, since they had reached no such conclusion. Nor could they explain why, had they weighed their reasons for
action against one another, they would have concluded that their reasons
for obeying the experimenter outweighed their reasons for disobeying
him, since they might not have chosen to obey the experimenter had they
made such a choice. A list of the considerations supporting obedience is
not an adequate answer to the question why one obeyed the experimenter
in the face of obvious reasons not to do so. But if an obedient subject did
not believe that her reasons for obeying the experimenter outweighed her
reasons for disobeying him, this would be the only kind of answer she
could give.
The behavior of the obedient subjects in Milgram's experiments is a
particularly vivid example of the kinds of actions which we can perform if
we avoid making decisions in morally problematic situations. But we
should expect to find the same failure to choose in other, less dramatic
situations. If the conflict between one's actions and one's moral principles
were less striking, it would be all the easier not to admit the need to decide
what to do. Few people have the misfortune to have their faults exposed as
clearly as Milgram's obedient subjects. But this does not make those faults
less common, or less significant for moral theory.
III
I will now argue that any acceptable moral theory must hold that agents
who adopt it should try to become the kinds of persons who make choices
when confronted with decisional conflicts. My argument consists of five
simple premisses. The first is this: Any moral theory must include some set
of practical principles22 which it holds that agents who adopt it should
consistently follow. In the case of self-effacing moral theories this set of
practical principles is the one in favor of which the theory effaces itself,
since these are the principles that such theories hold that we should try to
live by.
In what follows I do not want to rely on any controversial assumptions
about the nature of morality. Therefore, when I speak of 'a moral theory', I
mean any conception of the kinds of actions one should perform, or the
kind of life one should lead, by which one might decide to live, excluding
only the view that any action, and any kind of life, is as good as any other.
In particular, I will not exclude theories because of their content. Someone
who thinks that she should take every opportunity to advance her own
interests, that she should do whatever it takes to make the Olympic badminton team, that she should always treat others as mere means and never as
ends, or that she should seek to maximize the sum of misery and degradation in the world holds what I will call a moral theory. I will not exclude
theories because of their scope: whether one believes that one's principles
apply universally or only to oneself is unimportant for the purposes of this
argument. Nor does it matter on what grounds one takes one's principles to
be justified, or even whether one takes them to have any justification at all,
so long as one believes that one should live by them. This use of the term
'moral theory' is undoubtedly too broad. But I will not consider the question whether, and how, it should be narrowed. For if the conclusion for
which I will argue holds for all the views I call 'moral', then it must hold for
that subset of those views for which the term 'moral' should be reserved.23
In claiming that any moral theory must hold that an agent who adopts it
should follow its principles consistently, I do not mean to imply that if, for
instance, some theory holds that stealing is wrong, then it must hold that no
agent who adopts it should ever steal, whatever the circumstances. It may
be permissible, according to that theory, to steal under certain conditions.
But, tautologically, that theory cannot hold that it is permissible to steal
even when it licenses no such exception. And this is what I mean when I say
that any moral theory must hold that an agent who adopts it should follow
its principles consistently.
Second premiss: she who wills the end wills those necessary means to
that end which are known to her, and which are in her power. Willing the
end and willing the means are a package deal: one can will both or neither,
but one cannot both set oneself to achieve some end and claim that one has
no reason to do something which one knows one must do if one is to
achieve it.
From these two premisses I infer that any moral theory must hold that we
have reason to do whatever we need to do if we are to follow its principles
consistently. If to do that thing were itself prohibited by, or inconsistent with,
that moral theory, then our reason to do it might not be overriding. But if
doing that thing is not inconsistent with that moral theory, then we should
conclude that that theory implies that we should do it.24
It seems clear that we can argue, on these grounds, that particular moral
theories give us reason to do particular things. For instance, one might
argue that a moral theory that holds that we should be kind must also hold
that we should train ourselves to judge the effects of our actions on others
as accurately as possible, since only thus will we be able consistently to
recognize what constitutes kindness.
More interestingly, however, this argument also implies that if there is
something that we must do in order to follow any set of principles consistently, then any moral theory must hold that we have reason to do that
something. But if any moral theory must hold that we have reason to do
something, then we can conclude that we have reason to do it without first
having to figure out which moral theory we have most reason to accept.
Likewise, if there is something which we must do in order consistently to
follow all but a few moral theories, then we can conclude that unless one of
those exceptions turns out to be the moral theory we have most reason to
accept, we have reason to do that something. If, in addition, we could show
that we have reason to reject those exceptions, then we could conclude that
any acceptable moral theory gives us reason to do that thing. In either case,
the argument made above would allow us to show that morality gives us
reason to do certain things, while sidestepping entirely the question what
else it might require of us.
Third premiss: If an agent is not the sort of person who makes choices in
decisional conflicts, then she will not consistently follow any set of moral
principles unless it is impossible to violate those principles by failing to
make such a choice. Such an agent might believe that she ought to follow
her moral principles, and she might sincerely want to do so. Her principles
might require nothing which it is impossible, or even difficult, to do. But as
long as she is not the sort of person who makes choices in decisional
conflicts, she will not follow them consistently.25
When one has adopted a course of action, one will continue to perform
it unless one chooses not to. If it is possible to violate one's moral principles
because one fails to make such a choice, then in some situations one might
recognize that it would be wrong to continue to perform one's existing
course of action. Whenever one also has some reason to continue to perform that course of action, one is faced with a decisional conflict. A person
who decides what to do in decisional conflicts can bring to bear any considerations that she thinks relevant to her decision, and decide which of the
alternatives available to her she has most reason to perform. She therefore
retains the capacity to respond to decisional conflicts in whatever way she
thinks best, and can use this capacity to ensure that she acts according to
her principles. By contrast, a person who avoids making decisions in such
situations relinquishes that capacity in precisely those situations in which
she needs it the most. Whenever she encounters a decisional conflict, she
will not try to decide what she has most reason to do, and to do it. Instead,
she will simply continue to do what she is already doing, even if she believes it to be unethical. Similarly, a person who makes such decisions only
when her resistance to making them is weak will violate her moral principles whenever she would have to resolve a decisional conflict of sufficient
severity in order to follow them.
A person who does not resolve decisional conflicts might follow her
moral principles in other situations. Whenever she encounters a decisional
conflict of sufficient severity, however, her actions will be determined not
by her moral principles, values, desires, or choices, but by her existing
course of action, whatever that course of action turns out to involve, and
however unethical she thinks it. Whether or not one has adopted a course
of action need have nothing to do with the rightness or wrongness of
continuing to perform it, or with the importance of the moral issues at
stake.26 Yet such a person allows this factor, rather than her moral principles, values, desires, or choices, to determine what she does in decisional
conflicts of sufficient severity. If, in such a conflict, her existing course of
action is unethical, she will perform actions which she believes to be,
according to her principles, wrong. She will therefore predictably fail to
follow her moral principles, unless it is impossible to violate those principles by failing to decide what to do.
Fourth premiss: There are only two types of moral theory which we
cannot violate as a result of failing to decide what to do in a decisional
conflict. First, some moral theories might hold that their principles apply
only to actions performed by agents who choose to perform them as
opposed to their alternatives; that any action is permissible if it is not
performed as a result of such a choice; and that we have no reason to try
to ensure that we make such choices. Such a theory might hold, for
instance, that it is wrong to kill someone because one has chosen to do so;
that it is permissible knowingly and voluntarily to kill someone if one fails
to confront the question whether or not one should do so; and that one
has no reason to try to ensure that one will confront such questions when
they arise. Since such moral theories hold that any action which we do not
choose to perform is permissible, we cannot violate such theories because
we fail to make such choices in decisional conflicts.
Second, some moral theories might hold either that we should try never
to choose among our alternatives in decisional conflicts, or that we should
try to develop those character traits which would prevent us from making
such choices.27 If such a theory held that these principles could be overridden by other considerations, then it would be possible to violate that theory
by failing to decide what to do in a decisional conflict, since to hold that
these principles can be overridden is to hold that there are some decisional
conflicts in which we should decide what to do. But if some moral theory
held that those principles were always overriding, then we could never
violate that theory because we did not decide what to do in a decisional
conflict.
If some moral theory does not hold that its principles govern only those
actions which we choose to perform, then it must hold that those actions
which we perform as a result of a failure to choose among our alternatives
are subject to moral assessment. If that moral theory does not hold that it is
always right not to choose among our alternatives in a decisional conflict,
whatever that failure to choose might lead us to do, then it must hold that
in some situations a failure to make such a choice can lead us to act
wrongly. Thus, any moral theory which holds neither of the two positions
just discussed is one which we can violate by failing to decide what to do in
decisional conflicts.
Fifth premiss: We have reason to reject any moral theory which we
cannot violate by failing to decide what to do in decisional conflicts. I will
not attempt to provide conclusive arguments for this premiss. For the only
moral theories which we cannot violate in this way are the two types of
moral theory just described, and both are plainly absurd. However, since I
do wish to claim that one can show that we should become the sorts of
persons who decide what to do in decisional conflicts without relying on
moral intuition, I will briefly indicate how one might argue that we have
reason to reject moral theories of these two types.
Those moral theories whose principles govern only actions which we
choose to perform, and which hold that we have no reason to try to ensure
that we make choices in decisional conflicts, are vulnerable to the following
objection: There are two ways to support the claim that we should not
choose to perform some action. First, one might claim that it is wrong to
perform that action. This would imply that we should not knowingly and
voluntarily perform it, whether or not we choose to do so. Second, one
might claim that it is wrong to bring that action about by making a choice.
This would imply that if we ever try to decide whether or not we should try
to ensure that we make choices in decisional conflicts, we should resolve
that question in the affirmative. For if we chose not to do so, we would
make a choice which we knew would lead us to perform actions which,
according to that theory, we should not bring about through our choices. In
either case, our reasons for thinking that there are some actions which we
should not choose to perform would imply that we have reason to try to
ensure that we make choices in decisional conflicts.
If one wishes to hold a moral theory of this type without conceding that
one has reason to try to ensure that one decides what to do in decisional
conflicts, one must maintain that while it is wrong to choose to perform
certain actions, there is nothing wrong with performing those actions per
se, and nothing wrong with bringing them about as a result of one's
choices. But it is hard to imagine on what other grounds one might think it
wrong to choose to perform those actions. If the fact that we cannot provide any coherent justification for the principles espoused by some moral
theory is a reason to reject that theory, then we have reason to reject any
theory which holds that we should not choose to perform certain kinds of
actions, but that we have no reason to try to ensure that we decide what to
do in decisional conflicts.
Those moral theories which hold that our overriding end should be the
destruction of our ability to act on the basis of reasons in decisional conflicts arguably involve a contradiction between their status as moral theories and their substantive aims. As moral theories, they tell us what we
have most reason to do, how we should try to lead our lives, what ends we
should aim to achieve, and how we should direct our wills. But what they
tell us is that we should attempt to eliminate our ability to determine what
we have most reason to do, to live the lives we think we should, to pursue
our ends effectively, and to use our wills to guide our conduct. If the fact
that some moral theory involves such a conflict gives us reason to reject
that theory, then we have reason to reject any theory which holds that we
always have most reason not to choose in decisional conflicts.
I have argued that if there is something which we must do in order to
follow some moral theory consistently, then that moral theory must hold
that we have reason to do that thing; and that we must become the sorts of
persons who choose among our alternatives in decisional conflicts if we are
to follow consistently any moral theory which we can violate by failing to
make such a choice. I conclude that any such moral theory must hold that
we have reason to try to become such persons. If the only moral theories
which we cannot violate in this way are the two kinds of moral theories just
described, and if we have reason to reject those moral theories, then any
acceptable moral theory must hold that we have reason not only to adopt
its principles and to intend to live by them, but to try to become the sorts of
persons who choose among our alternatives when confronted with decisional conflicts.28
This conclusion is fairly weak. For it leaves open the possibility that we
might have most reason to accept a moral theory which holds either that it
is wrong to try to become such persons, or that it is wrong to do the kinds of
things which are involved in trying to become such persons. If so, then our
reasons for trying to become such persons might always, and necessarily, be
outweighed by the requirement that we not do so.
However, the arguments made above imply a stronger conclusion if we
accept a sixth claim: that we have reason to reject any moral theory which
holds that it is wrong to do something which we must do if we are to follow
its principles consistently. This claim follows from my second premiss: that
to will some end is to will those necessary means to that end which are
known to us, and which are in our power. To adopt some moral theory is to
set ourselves the end of following its principles consistently. If to will the
end is to will the means, then in adopting a moral theory we must set
ourselves to do those things which we must do if we are to follow its
principles consistently. But if one of those things is prohibited by that moral
theory, then in adopting it we must also set ourselves not to do that thing.
For this reason, we cannot coherently will to live by any moral theory
which holds that it is wrong to do something which we must do if we are to
follow its principles consistently. If we are not the sorts of persons who
decide what to do in decisional conflicts, then we will not consistently
follow any acceptable moral theory. For this reason, we have reason to
reject any moral theory which holds that we cannot permissibly become
such persons.
This does not imply that we have reason to reject any moral theory
which holds that our reasons for trying to become the sorts of persons who
decide what to do in decisional conflicts are sometimes outweighed by
other considerations. For such a theory might nonetheless allow that some
sufficient means of becoming such a person are permissible, and therefore
it need not be self-defeating. But a moral theory is self-defeating if it holds
either that it is wrong to try to become the sort of person who makes
decisions in decisional conflicts, or that it is wrong to do the kinds of things
which constitute trying to become such a person. For such a moral theory
holds not only that some of the things which might enable us to become
such persons are wrong, but that all of them are. It therefore forbids us to
do something which we must do if we are to follow its principles consistently. If we have reason to reject any moral theory which is self-defeating,
then we have reason to reject any moral theory which holds that it is wrong
to try to become the sort of person who decides what to do in decisional
conflicts, or to do the kinds of things which are involved in trying to
become such a person.
Moreover, if we have reason to reject any such theory, then any acceptable moral theory will hold not only that we have some reason to become
the sorts of persons who decide what to do in decisional conflicts, but that,
other things being equal, we should try to become such persons. For if any
acceptable moral theory must hold that we always have some reason to try
to become the sorts of persons who resolve decisional conflicts, and if no
acceptable moral theory will hold that we have any general reason not to do
so, then any acceptable moral theory must hold that, while our reasons for
trying to become such persons might sometimes be outweighed by other
considerations, in the absence of such a conflict the balance of our reasons
will favor our trying to become such persons.
If we have reason to reject any moral theory which holds that it is wrong
to do something which we must do if we are to follow its principles consistently, then we can conclude that, other things being equal, we should try
to become the sorts of persons who decide what to do in decisional conflicts; and that we have reason to reject any moral theory which holds that
it is wrong to try to become such persons. We can, that is, show that certain
moral theories should be rejected altogether, and establish some claims
about the content of the moral theory which we will ultimately decide has
the best claim on our allegiance.
IV
In conclusion, I would like to draw out three corollaries of my argument.
First, I have tried to show that any acceptable moral theory must hold that
we should become the sorts of persons who make choices in decisional
conflicts, since only such persons will follow their moral principles consistently. Becoming such persons must involve more than adding to the set of
principles which we do not follow consistently. Our principles guide our
deliberation. Once we have asked ourselves what we ought to do, our
principles can help us to answer that question. But because they come into
play only when we have already begun to deliberate, they cannot ensure
that we will do so.
To become the sort of person who resolves decisional conflicts must
therefore involve not simply adopting some principle, but developing certain character traits. Which character traits these are is, in the most straightforward sense, an empirical question, and one which I have not attempted
to answer in this article. My arguments imply that any acceptable moral
theory must require that we cultivate those character traits. If a moral
virtue is a character trait which morality holds that we should cultivate, my
arguments imply that any acceptable moral theory must hold that there is
at least one moral virtue; and therefore that no acceptable moral theory
can be an ethic of principles alone. However, since those arguments are
based on the claim that we must develop that virtue if we are to follow our
moral principles consistently, they do not imply any criticism of principle-based moral theories.
The second corollary concerns the role of virtue in moral theory. Recent
writers on virtue ethics have asked whether we should adopt an ethic of
virtues or of principles. The idea that virtues and principles are in conflict
with one another-that we can enthrone one in our moral theories only by
deposing the other-seems plausible if we restrict our attention to those
virtues which guide our deliberation: for instance, the virtues of benevolence and honesty. In such cases we can formulate principles of benevolence and honesty which seem to do the same job as the analogous virtues;
and therefore the question whether we want an ethic of virtues or of
principles seems to come down to the question whether virtues or principles make better guides.
Once the question is framed in this way, a defender of an ethic of
principles can argue that virtues can at best play a role analogous to Hare's
rules of thumb. We finite beings must cultivate virtues like benevolence or
honesty in order to ensure that we will in general be disposed to act rightly,
and to allow us to reserve deliberation for the really tough cases. But an
archangel, capable of working out the implications of her principles with
perfect accuracy and at infinite speed, would have no need for such deliberative shortcuts. A defender of an ethic of principles might therefore
argue that the virtues are mere concessions to our finitude, necessary only
insofar as they allow us to approximate the results of perfect deliberation.
I have argued that any acceptable moral theory must hold that we should
cultivate at least one virtue. This virtue does not help us to approximate the
results of perfect deliberation; it is a precondition of engaging in deliberation at all. It cannot be replaced by any principle, since it performs a
function which no principle could possibly perform. If Hare's archangel did
not have the virtue I have discussed, her deliberative abilities would not
ensure that she followed her principles consistently, since whenever she
encountered a decisional conflict of sufficient severity, she would not deliberate at all. In this case, at least, we should regard virtues and principles
not as rivals, but as playing distinct and complementary roles.
The third corollary concerns moral objectivity. Both moral realists and
anti-realists often assume that if moral claims are to be objective, they must
be objective "in the way that (claims in) other disciplines, such as the
natural or social sciences, . . . can be."29 This assumption is unexceptionable if it means only that moral claims should be as fully objective as
scientific claims, judged by whatever standards of objectivity are appropriate to them. But authors on both sides of the moral realism debate often
take it to mean, in addition, that moral claims must be as successful as
scientific claims in meeting scientific criteria of objectivity: that is, that
moral claims can be objective only if they describe some mind-independent
reality, are based on some sort of observation, or play a role in our best
explanation of our experience.
One might argue that the assumption that moral claims should be
assessed by the same criteria as scientific claims overlooks a crucial difference between moral and scientific reasoning. When we engage in scientific reasoning, we attempt to describe and explain our experience. We
can therefore require of any scientific claim that it contribute in some way
to the formulation and justification of a correct description and explanation of our experience, whether by describing it, by explaining it, or by
serving as a presupposition of such an explanation. If a scientific claim
does not contribute to our best explanation of our experience, it is not
performing its function. When we engage in moral reasoning, by contrast,
we are not primarily trying to describe and explain our experience, but to
determine what we should do. We can therefore require of a moral claim
that it meet whatever standards of justification are appropriate to claims
about what we should do. It may be that only scientific claims can be
objective, or that the justification of moral claims does essentially involve
showing that they play a role in our best explanation of our experience. In
either case the criteria of objectivity appropriate to scientific claims would
be the only criteria of objectivity there are; and if moral claims could not
meet those criteria, they could not be objective. But one must argue for
either claim; and in the absence of such an argument, one cannot show
that moral claims are not objective by showing that they do not meet the
criteria of objectivity appropriate to scientific claims: by imagining that
they aspire to the status of description or explanation, and showing that
they fail to achieve it.
If these arguments are correct, then we cannot simply assume that moral
claims must meet the criteria of objectivity appropriate to scientific claims
if they are to be objective at all. However, the bare possibility that there
might be a distinctively practical form of objectivity does not help us to see
how this possibility might be realized: how any claim which does not
describe mind-independent facts correctly, record observations accurately,
or play a role in our best explanation of our experience could nevertheless
be said to be objective, and not just an expression of the attitudes of some
individual or group.30 If writers on moral realism believe that moral claims
can be objective only if they meet scientific criteria of objectivity in part
because it is not clear to them that there is in fact any other way in which
claims can be shown to be objective, the idea that there might in principle
be a practical conception of objectivity will not convince them to change
their minds. They would require some evidence that moral claims might
actually be objective even if they did not describe mind-independent facts
or contribute to our best explanation of our experience. One way to pro-
ACTING WITHOUT CHOOSING
193
vide that evidence would be to produce an instance of a moral claim that
seems to meet these conditions.
I have argued that anyone who has any views at all about how she should
live her life should try to become the sort of person who makes choices in
decisional conflicts. If these arguments are sound, this conclusion meets
several important criteria of objectivity. It is a claim that any agent who
asks herself what she should do has reason to accept, regardless of the
particular standards or principles by which she seeks to govern her life. It is
supported by a justification which allows us to attach a clear sense to the
claim that it is correct, and that those who do not believe that they have
reason to accept it are mistaken. And this justification does not depend on
our belief that this claim is true, on our particular views about the content
of morality, or on the desires we happen to have.
However, the claim for which I have argued does not seem to meet the
criteria of objectivity to which writers on moral realism typically appeal.
Nothing I have said gives us reason to believe that it accurately describes
some mind-independent moral reality. Our reasons for believing it are not
based on any moral perceptions, observations, or intuitions. And while the
knowledge that we can violate our principles by failing to choose among
our alternatives might help us to explain our behavior, the claim that we
ought to become the sorts of persons who do make such choices in decisional conflicts does not.
It might nonetheless be objected that the fact that a claim is justified
does not suffice to show that it is objective. But this objection, if valid, calls
into question not the adequacy of this claim, but our reasons for caring
whether or not moral claims are objective. If to show that a moral claim is
objective requires only that we show that anyone who is at all concerned
with the conduct of her life should accept it, my arguments imply that, in
this case at least,31 the objectivity of moral claims requires neither a basis in
observation nor a role in the best explanation of our experience. If, on the
other hand, showing that a moral claim is objective requires more than
this, then arguably its additional requirements are irrelevant to ethics. As
ethicists, we try to understand what we should do, and why we should do it.
If moral objectivity requires more than this, it requires more than we need.
Notes
*I would like to thank Jay Atlas, Robert Audi, Alyssa Bernstein, Sissela Bok, Sarah Buss,
John Cooper, Bill Haines, Paul Hurley, Elijah Millgram, Julius Moravcsik, John Rawls, T. M.
Scanlon, Dion Scott-Kakures, Frederick Sontag and Darryl Wright for their comments on
various versions of this paper.
1R. B. Brandt, 'The Structure of Virtue', Midwest Studies in Philosophy, vol. XIII (1988).
2By 'selection' I mean any process whereby an agent who can perform any of several
different courses of action comes to perform one of them. The distinction on which my use of
the term 'choice' relies is that between those processes of selection which render the question
why I selected the alternative I did appropriate and those which do not. Thus, for instance, if I
order my favorite entree in a restaurant without bothering to think, I have in this sense chosen
to order it, since it makes sense to ask for the reasons why I ordered that entree rather than
the others on the menu even if I did not reflect on those reasons before ordering. By contrast,
if I determine where I will spend my summer vacation by throwing a dart at a map, I have not
in this sense chosen among destinations, since it will not make sense for anyone who knows
how I arrived at my vacation plans to ask for what reason I chose Tulsa over Tahiti. The cases
on which I will focus in this paper differ from this one in that they involve not a deliberate
adoption of an alternative procedure of selection, but a simple refusal to choose among the
available courses of action.
3Note that the difference between these two kinds of actions is not that only the first are
explained by citing the agent's reasons; but that we appeal to the agent's reasons to explain
different things. If the claim that some action is intentional requires that we explain it by citing
the agent's reasons, actions of both kinds are intentional.
4Milgram describes his experiments in Obedience to Authority (New York: Harper & Row, 1974).
5ibid., p. 13. Milgram describes his experimental procedure in chapter 2 of Obedience to
Authority.
6ibid., p. 13
7This version is described in Obedience to Authority, pp. 55-57.
8ibid., p. 57
9ibid., p. 171. Interestingly, there were no significant differences between obedient and
disobedient subjects' estimates of the learner's pain. Milgram reports the average estimates of
the learner's pain given by obedient and disobedient subjects for nine versions of the experiment (ibid., p. 171). In three of the nine versions, the obedient subjects' average estimate of
the learner's pain was greater than that of the disobedient subjects. In all nine versions, the
average estimates of both obedient and disobedient subjects fell in the 'extremely painful'
section of the scale.
10ibid., p. 76
11ibid., p. 80
12ibid., p. 163
13One can believe that one has the right to act on one's own considered judgments, in this
sense, and still believe that there are situations in which one should obey orders one thinks
misguided. For instance, a soldier who believes that she should obey the orders of her commanding officers whether or not she agrees with their overall military strategy acts on the basis of her
own considered judgment when she obeys an order which she thinks ill-advised. To think that
one does not have the right to act on the basis of one's considered judgments is to think that
one's judgments in general, including one's judgments about when one should defer to others,
lack the authority to determine one's conduct in the face of opposition or disagreement.
14ibid., p. 54
15ibid., p. 83
16ibid., p. 88
17Another subject, described on pp. 45-7 of Obedience to Authority, does not seem to
have weighed the learner's suffering against his reasons for obeying the experimenter. However, his failure to do so seems to reflect not his unwillingness to weigh his reasons for action
and to decide among them, but the fact that he did not regard the learner's suffering as
relevant to his decision at all.
18ibid., p. 48
19ibid., p. 51
20ibid., p. 83
21For instance, when the subject just quoted met the learner after the experiment, she said
"Every time I pressed the button I died. Did you see me shaking . . . Oh, my God, what he
(the experimenter) did to me. (Sic!) I'm exhausted. I didn't want to go on with it. You don't
know what I went through here. A person like me hurting you, my God. I didn't want to do it
to you. Forgive me, please. I can't get over this. My face is beet red. I wouldn't hurt a fly . .
(ibid., p. 82; parenthetical comment mine.)
22 In what follows I use 'principle' to refer to any general claim about the sorts of things one
should do. For this reason I do not think that my claim that moral theories involve principles
excludes either virtue-based theories or theories that hold that we should act in accordance
with the values of our traditions rather than a set of abstract rules. Such theories will involve
principles-for instance, those directing us to cultivate and display the relevant virtues, or to
act in the ways our tradition holds that we should. My point is simply that any moral theory
which is not vacuous must tell us to perform or avoid certain types of actions in certain
circumstances.
23Any conclusion which holds for all moral theories, in my sense of that term, must hold
for any set of practical principles at all. For this reason, one cannot call into question one's
reasons for accepting such a conclusion simply by asking, in the usual sense, what reason one
has to be moral. For given the sense of 'moral' defined above, that question concerns not
one's reasons for placing those concerns normally regarded as moral ahead of one's own
interests or projects, but one's reasons for holding any views at all about the sort of life one
should lead, rather than drifting like a piece of plankton through the ocean of life.
24One might object to this claim on the grounds that it implies absurd conclusions. For
instance, if utilitarianism requires that we always act so as to maximize happiness, then (by my
arguments) it implies that we have reason to do whatever we must do if we are consistently to
act so as to maximize happiness. If we perform some action that turns out, unexpectedly, to
produce catastrophic results, we have not acted so as to maximize utility. Utilitarianism
therefore gives us reason to try to predict and avoid all the catastrophes which might conceivably result from our actions, however unlikely they might be. But surely, one might think, the
claim that utilitarianism implies that we have reason to do this is absurd.
In fact, I think that this reasoning shows that utilitarianism is self-defeating if it is taken to
require that we always perform that action of those available to us which actually maximizes
utility. For, so construed, utilitarianism requires that we perform a series of actions the
identification of any one of which is so time-consuming as to make it impossible to perform
some of the others. But utilitarianism is not self-defeating in this way if it is taken to require
only that we make the general happiness our end. So construed, utilitarianism would require
that we inform ourselves about the consequences of our actions only when we believe that
doing so would, on balance, promote the general happiness. It would not, directly or indirectly, give us reason to try to predict the results of our actions even when to do so would
interfere with our attempts to maximize happiness; and therefore it would not imply the
absurd claim that we have reason to inform ourselves about all the possible consequences of
our actions.
25Some virtue ethicists might object to this claim. A truly virtuous person, they might say,
is one in whom the disposition to act rightly is so firmly settled as to have become instinctive.
Virtue comes so naturally to her that she never wonders what to do, never feels perplexed,
and therefore never needs to make choices at all. Moreover, they might argue, such an agent
might actually be morally harmed by making choices in decisional conflicts. For if she reflects
on her reasons for acting virtuously, she will have to distance herself from her virtuous
dispositions; and her instinctive identification with those dispositions will of necessity be
weakened.
This objection must assume a conception of choice which is different from that which I use
in this article. One might argue (wrongly, I think) that a truly virtuous person will never need
to make a conscious choice, since she will never be at a loss as to what to do; or that to
entertain doubts about what to do would alienate her from her virtuous dispositions. But I
have used the word 'choice' to refer not to an explicit or conscious decision, but to any
process, conscious or unconscious, whereby an agent selects one action from among the
alternatives available to her as the action she will perform, so long as that process is such that a
request for the reasons why she selected the action she did is in order. A person for whom
virtuous action has become a kind of second nature is one who always chooses, in this sense,
to act as virtue requires, but who no longer needs to make such choices consciously.
26Of course, the fact that one has already adopted a course of action can provide one with a
reason for action, for instance by leading others to expect that one will do so. But this does not
affect my present point. For unless it is impossible for an agent to violate her moral principles
by failing to choose among her alternatives, there will be some situations in which her principles imply that she should abandon her existing course of action. If she does not decide among
her alternatives when she encounters such situations, she will continue to do what she is
already doing, and will thereby violate her moral principles.
27I assume, for the sake of argument, that the best way to cultivate any character trait is to
act as one would if one had it; and therefore that if one wants to cultivate those character traits
which would lead one not to decide what to do in a decisional conflict, not making such a
decision is always a better way of achieving that end than performing any alternative one
might have chosen. If this assumption is false, then we can violate a moral theory which holds
that we should cultivate those character traits which would lead us not to make such choices by
failing to make one.
28Consequentialists might object to this claim on the grounds that it might turn out that the
most effective way to promote the good is not to make choices at all. If so, then even though
not making choices might at times lead us to perform some action which does not produce the
best consequences, we should nonetheless adopt a policy of not choosing among our alternatives in decisional conflicts.
Recall that I defined a choice as any process, conscious or unconscious, whereby an agent
selects one action from among the alternatives available to her as the action she will perform,
so long as that process is such that a request for the reasons why she selected the action she did
is in order. Those consequentialists who hold that the best way to promote the good is to
internalize and follow some simple moral 'rules of thumb', rather than weighing the consequences of one's actions, should therefore hold that we should, in this sense, choose to follow
those rules of thumb in decisional conflicts. Likewise, those consequentialists who maintain
that we would sometimes promote the good most effectively by not making choices would still
have reason to try to become the sorts of persons who are able to make choices in decisional
conflicts. For if they, like Milgram's obedient subjects, are unable to compare their various
reasons for action in decisional conflicts, they will be unable to determine whether or not any
given decisional conflict is one in which they should decide what to do.
Only a consequentialist who believed that we would most effectively promote the good if
we never made choices in decisional conflicts, even when (as in Milgram's experiment) the
question which of one's alternatives would have the best consequences seems easy to answer,
can claim that her moral theory does not give her reason to try to make choices in decisional
conflicts. Such a pessimistic view of our ability to achieve the goals we set ourselves by trying
to determine what we can do to promote them seems, at best, in need of argument. And it is
unclear that any consequentialist who held this view could consistently act on it. For the claim
that one will most effectively promote the good if one never decides what to do in decisional
conflicts must rely on the idea that one's views about what would most effectively promote the
good are likely to be so misguided that it would be counterproductive to try to act on them. It
is unclear why someone who believed this to be true of herself would regard her belief that she
would promote the good most effectively by disabling her ability to act on the basis of reasons
in decisional conflicts as itself above suspicion.
29Brink, Moral Realism and the Foundations of Ethics, p. 5. Similar claims can be found in,
for instance, Harman, The Nature of Morality, (p. 12) and Richard Boyd, 'How To Be A
Moral Realist', (in Sayre-McCord, Essays on Moral Realism, p. 183); and the assumption that
moral claims can be objective only if they meet the criteria to which it is appropriate to hold
scientific claims arguably underlies much of the literature on whether or not moral claims play
a role in our best explanation of our experience. Thus, for instance, on p. 235 of his 'Moral
Explanations' (in Sayre-McCord, Essays on Moral Realism), Nicholas Sturgeon describes as a
"retreat" the claim that moral claims might be true even if they had no such role.
30See, for instance, Boyd, 'How To Be A Moral Realist', p. 183.
31While I think that some other character traits (e.g., strength of will) could be shown to be
moral virtues by arguments similar to those I have made here, it seems clear that most moral
claims cannot be justified in this way. But the limited reach of this form of argument does not,
I think, affect my present point. If writers on moral realism have identified objectivity generally with the form of objectivity appropriate to scientific claims because they believe that there
is no other form of objectivity worthy of the name, I need only a single counterexample to
show that this belief is mistaken.