Intention and Commitment
Sergio Tenenbaum
1 Introduction
We can distinguish between three different functions of a future directed intention (FDI):
[SETTLE] FDI settles on a future action.1
Example: I must decide between investing in bonds or mutual funds by next
week. I find it difficult to weigh the advantages and disadvantages of each
option. In order to avoid spending too much time considering the benefits of one option or the other, I form a firm intention one way or the other, and (assuming certain background conditions obtain) don’t reconsider the issue.2
[LONG] FDI can allow us to execute long-term plans that require coordination between actions at different points in time.
Example: If I ever want to go to Paris, I need to make sure that I buy tickets, make hotel reservations, etc. Forming the intention to go to Paris next summer enables me to coherently carry out a long-term plan.
1 Bratman.
2 “Never” is obviously too strong…
[RESOLVE] A future directed intention can ensure that we don’t change our mind in
the face of temptation or temporary preference shifts.
Example: If I intend to quit smoking, my intention can see me through those
difficult moments when someone offers me a much-craved cigarette.
The rationality of forming and carrying out intentions in cases [SETTLE] and [LONG] is
quite obvious. Given that we’re limited beings, we must plan for the future and stop
deliberating at some point.
It’s much harder to justify the rationality of forming and carrying out intentions that
serve the function of [RESOLVE], but at least some philosophers3 argue that the reasons
that apply to [LONG] and [SETTLE] extend to cases under [RESOLVE]. I argue, however, that
this extension can’t work. When we look closer at the reasons for carrying out intentions in
[LONG] and [SETTLE], we should conclude that they cannot generalize. In fact, I argue that
we can reach a stronger conclusion. I argue that future directed intentions (FDIs) have no
autonomous rational function; they create no new reasons and give rise to no new rational
requirements. We can call a view that claims that intentions do generate such autonomous
requirements an “Intentional Autonomy View”, and a view that claims that these
requirements extend to cases that fall under [RESOLVE], an “extension view”. I argue that
we must reject both views. There are two basic cases in which FDIs might be thought to give
rise to new reasons and requirements: cases involving simple intentions to perform a particular action in the future, and cases that fall under what Bratman calls a “general intention” (plans, policies, commitments, etc.).4 Showing that there are no such requirements in cases of the first kind is relatively straightforward; for the second kind, however, we will need to introduce a new theory about the nature of policies, which I call the “Policy as Action Model”, or PAM for short. According to PAM, policies are best understood as ordinary actions, and the rational requirements that apply to such policies are no different from the rational requirements that apply to ordinary actions.

3 Holton and, to some extent, McIntyre.
2 Why is [RESOLVE] different?
Let us distinguish between two cases of [RESOLVE]:5
A.
Cases in which I predict a preference shift due to an overall change in my
evaluative judgments.
Example: I am confident that my startup company is going to make a lot of money in a few years’ time. I would now prefer that, when my startup company goes public, I donate half my money to charity. However, I know that people like myself are rarely willing to give up half their money when they become rich. So I form now an intention to donate the money to charity when I become rich.6
B.
Cases in which I expect a temporary preference reversal (a temporary change in evaluative judgment).
Example: It is always true that I prefer always flossing to never flossing. This general preference (evaluative judgment) never changes. However, as it comes close to flossing time, I find myself preferring not to floss over flossing. Since I know that following these immediate preferences would result in my never flossing, I form an intention to floss always (which at any point in time I prefer over never flossing).

4 For simplicity’s sake, I’ll just talk about policies.
5 There are more, but I can only cover two here.
6 This example is from Parfit (?)
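The structure of case (B) is the one standardly modeled by hyperbolic discounting, which comes up again below: a smaller immediate reward overtakes a larger delayed one as the moment of choice approaches. Here is a minimal sketch; the discount function and all numbers are illustrative assumptions, not anything argued for in the text.

```python
# Preference reversal in case (B), modeled with hyperbolic discounting.
# All quantities below are illustrative assumptions.

def discounted(value, delay, k=0.1):
    """Hyperbolically discounted present value of a reward `delay` hours away."""
    return value / (1 + k * delay)

HEALTH_BENEFIT = 10.0  # assumed delayed payoff of flossing, arriving a day later
COMFORT_NOW = 4.0      # assumed immediate payoff of skipping

def prefers_flossing(hours_until_bedtime):
    # The benefit of flossing arrives roughly a day (24 hours) after the act;
    # the comfort of skipping arrives at bedtime itself.
    floss = discounted(HEALTH_BENEFIT, hours_until_bedtime + 24)
    skip = discounted(COMFORT_NOW, hours_until_bedtime)
    return floss > skip

# In the morning (12 hours from bedtime) flossing wins; at bedtime itself the
# immediate comfort wins, although the underlying values never changed.
```

At a distance the agent prefers flossing; up close the proximate preference reverses, which is exactly the pattern (B) describes.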
I’ll assume that in all these cases there are no reasons for actions that don’t flow from my preferences (except, of course, those that the formation of an intention generates), but nothing
in my argument depends on this.7 At any rate, it is not clear that we can simply extend the
norms from the standard cases to (A)-(B). Let us start with (A). In cases like the ones in
[LONG] and [SETTLE], FDIs can ensure that the long term plans of the agent are not
undermined by excessive reconsideration and change of course, so that planning agents can
succeed in carrying out projects that they deem important at any time, including the time at which they need to execute the intention. However, (A) clearly does not fall into this category, since even though I
consider the project important now, I will not consider it important when it comes time to
carry out my intention.8 The argument about coordination and planning should have no
effect on my later self, and should not prevent my later self from reconsidering. (B) seems
somewhat different since it involves a preference that I will continue to have through time;
namely, the preference for always flossing over never flossing. However, it seems that my choice situation now is essentially different from my choice situation when I formed the intention to floss always, since now I do have the preference not to floss, so it seems quite reasonable to reconsider my intention now.9

7 Extension views might differ on whether the autonomous normative requirements apply to both or only one of these cases, but I’ll ignore these differences here.
8 Gauthier gives the example of a boy who now finds girls yucky and forms the intention not to date later, when he knows he will be gaga over girls. It seems absurd to think that this earlier intention should require anything from his later self. See Gauthier, ”Resolute Choice”.
These arguments aren’t decisive. One could claim that the kind of coordination and planning involved in the standard cases also applies to coordination and planning in light of the fact that human beings constantly change their preferences, especially in the kind of cases in which we seem to be prone to hyperbolic discounting.10 Bratman could be understood as arguing that the policies generated by my intentions have some special kind of agential authority, given their stability, that underwrites norms of reconsideration similar to those we find in the standard cases.11 We might even want to say that the special norms generated by intentions in the standard case can generate some kind of motivational ammunition that will allow us to overcome our irrational tendencies in cases such as (B).12 Settling between these arguments is difficult, and depends on how we understand the role that pragmatic considerations should or should not play in determining the norms of rationality.13 I propose to take a different tack. I want to examine more closely the argument for Intentional Autonomy. I will argue that any plausible version of this view will generate rational norms that could not possibly play the role that they are supposed to play
in accordance with the extension view. This will clear the path for a view on which intention stability plays a much more limited role in the assessment of rational agents but, on the other hand, the scope of what counts as action, or rational activity, gets significantly expanded. With this view in hand, we get a better understanding of the rational requirements governing FDIs and policies, or so I’ll argue. The two results are independent, however: one could accept my argument for rejecting the Extension View without accepting the later argument for rejecting the Intentional Autonomy View.

9 It is important to avoid a source of misunderstanding here. Obviously, when I formed the intention I could have anticipated that my preferences would shift in this manner. So it is true even later, when it comes time to floss, that there are no unanticipated circumstances relevant to reconsidering my intention. However, the fact that I did anticipate that I would form this evaluative judgment does not mean that I must now stick to intentions formed on the basis of my earlier evaluations; after all, these earlier evaluations are not ones that I now endorse. In this way, (B) is no different from (A), since in (A) too, when my company finally goes public, there are no circumstances that I did not anticipate when forming the intention.
10 Ainslie.
11 Bratman himself does not put the point in precisely this way, but...
12 MacIntyre?
13 For a discussion of this issue, see Velleman, ”Deciding How to Decide”.
3 Intentional Autonomy Reconsidered; Extension Rejected
How do planning and coordination give rise to rational norms of reconsideration? I can’t here summarize the whole literature on the topic or canvass all the arguments. What I’ll do
instead is look into two seemingly very intuitive considerations, and try to see what kind of
demands they generate, if any.
1. The Need for Stability
Were we to change our plans and intentions all the time, we would not be
able to carry out our long-term projects. Consequently, we would find engaging in them pointless. This would result in a much worse life than one in
which our intentions are stable.14
2. The Need to Decide Without Constant Deliberation
We cannot deliberate in every single circumstance in which we need to act. Consequently, in certain circumstances, it is rational for limited beings like us not to reconsider our intentions, plans, and policies.15

14 Bratman (around p. 16).
These seem quite intuitive reasons to accept that it would be rational to have limits on reconsideration. But what exactly are these limits? Let us look first at what sorts of demands (2) makes on us. Of course, sometimes, in exigent circumstances, deliberation is certainly a bad idea. If I know that my banker will come at 9:00 AM to write down my decisions about how to invest the money I inherited, and she’ll leave at 9:01 and invest my money however I instructed her to (or she’ll simply put it under her mattress if I give her no
instructions), I better do most of my deliberation before 9AM. And once I come up with a
decision, I better stick to it, so that my money does not end up at the bottom of my banker’s
mattress.
In this case, reconsidering an intention, insofar as it involves deliberation, is obviously a
bad idea. So sticking to my intention to, say, put all my money in index funds seems like a
reliable way to arrive at a better outcome than deliberation would. But why is it reliable?
Presumably because I have good grounds to think that my previous decision is likely to
have been a good one. If I think that my calculating skills are no better than chance, there’s
no reason for me to stick to this intention, rather than take a new, fresh guess.16
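The point can be put as a toy expected-value comparison; the payoffs, the reliability figure, and the number of investment options are all illustrative assumptions.

```python
# Why sticking to a past decision can beat a fresh guess: it is a good proxy
# only if the past deliberation was more reliable than chance. Numbers are
# illustrative assumptions.

def value_of_sticking(p_past_right, payoff_right=100.0, payoff_wrong=0.0):
    """Expected payoff of executing the earlier decision unreconsidered."""
    return p_past_right * payoff_right + (1 - p_past_right) * payoff_wrong

def value_of_fresh_guess(n_options=4, payoff_right=100.0, payoff_wrong=0.0):
    """Expected payoff of picking at random among the options."""
    p = 1 / n_options
    return p * payoff_right + (1 - p) * payoff_wrong

# A reasonably careful past deliberation (say, 70% reliable) beats guessing;
# calculating skills "just as bad as random" (25% among four options) do not.
assert value_of_sticking(0.7) > value_of_fresh_guess()
assert value_of_sticking(0.25) == value_of_fresh_guess()
```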
This might lead us to suspect that what I have a reason for is to prefer cheap,
reasonably reliable information over expensive, perhaps slightly more reliable information.
15 Same as above.
16 Of course, Bratman allows that new evidence will give us reason to reconsider the intention. However, this case...
In fact, suppose we have a similar setup, except that I am not aware that I’ll have only one minute to make up my mind; I actually think that my banker will sit with me and talk to me about all my options, give me further opportunity to think about them, and be there with me for as long as I want. I spend the day before thinking about the options and, so far, I think the best option is to go for index funds. However, I don’t form any intention; I decide it is better to discuss my views with the banker. I arrive at the bank at 9:00 AM without having thought more about the issue; as I step into the bank, my banker announces that I will have only one minute to make my decision. It seems that in this situation I have just as much reason to follow the judgment that I arrived at the previous day as I had not to reconsider my intention in the case in which I did form an intention.
Suppose on the other hand that I form an intention, and there are no exigent
circumstances making deliberation too costly. I am waiting for my banker, and I expect that
I’ll have at least half an hour before she can see me. Would it be irrational for me to reopen
the issue? There’s really no new information and no unexpected circumstances (suppose I know
that I generally have to wait about half an hour before I meet my banker); but I am bored, I
don’t know what to do, and reconsidering my intention seems to be a perfectly good way to
spend this half an hour. It couldn’t possibly be a failure of rationality to reconsider my
intention in these circumstances.
Of course, it would be a horrible fate to spend one’s life reconsidering one’s decisions;
it would be absurd if a theory of rationality required us to use all our leisure time, let alone
all our time, in the service of better deliberation. Thus any theory of rationality should allow
that in certain circumstances, we are permitted not to reconsider an intention. However,
this need not amount to a requirement not to reconsider in any particular circumstance.
Moreover, this is a completely general truth about any attitude that it makes sense to “reconsider”. Take belief, for instance. It would also be a sad kind of life that was spent reconsidering our beliefs all the time. There is thus no rational requirement to reconsider our beliefs at
every single opportunity. But there is also no general requirement not to reconsider them
unless there is enough change in information, etc. Bratman says that intention is a kind of
mental state that involves a disposition to resist reconsideration. As a normative claim, this
is misleading at best; although there are general permissions not to reconsider intentions,
there are no rational requirements not to reconsider them.
In sum, requirements not to reconsider kick in only in exigent circumstances; otherwise
all we have are permissions. Neither the permission nor the requirement is unique to
intention. This is all that is warranted by (2).
If we want to be capable of coordinating with ourselves and others, we need to be
sufficiently consistent and predictable. Moreover, we have been thinking about (2) as
generating permissions for particular situations in which it might not be irrational to avoid
reconsideration. However, were we not to exercise these permissions, we would end up
reconsidering our intentions at every single opportunity, and this would be plausibly
described as a failure of rationality. There must be, thus, a more general requirement that
prevents us from becoming “reconsideration monsters”. Here it seems that one has simply
identified an activity that is incompatible with many of the goals we have. What we need is
to make sure that we don’t reconsider so much as to prevent our engaging in other
important activities. As long as I succeed in doing this, it is not clear how I could have
violated any requirement of rationality.17 Doubtless reconsidering has a problematic
17 More on that later.
structure. At any given time, reconsidering is not too costly; but the cumulative effect of
reconsidering on too many occasions can be devastating; were I to reconsider my
intentions at every opportunity, I would never do anything else. However, this problem is
an instance of a much more general problem. First, the same goes for reconsidering beliefs and, in fact, for any mental state that it makes sense to reconsider. But, more importantly, we can give endless examples of activities such that, at least apparently, in many (or even all) particular instances it would be rational to engage in the activity, but it would be disastrous to engage in it in every instance in which it is rational to do so.
[examples] I have argued that, given the general nature of the problem, we should not expect that the best solution for the case of intention will come from requirements that apply only to intention. However, I have not presented an alternative, so the case against intentional autonomy has not been conclusively made. More importantly, perhaps, I have not ruled out that the general solution to the problem will simply be a generalized version of the requirements that authors who subscribe to the intentional autonomy view take to hold of intentions. That’s what I’ll endeavour to do when I introduce PAM.
But I think we already have the materials to show that the extension view is false.
[RESOLVE] needs a requirement to stick by our intentions, not just a permission. So the
various reasons to allow for nonreconsideration will not produce the requirements that the
extension view demands; the extension view needs to prohibit certain kinds of
reconsideration, not just permit nonreconsideration. Moreover, to the extent that there are rational requirements that prohibit too much reconsideration, these rational requirements are conditional on the end they serve. That is, if the need for coordination in (1) generates a rational requirement to limit how much one revises one’s intentions, it does so only to the extent that such revision would be incompatible with the general aim of allowing for
interpersonal or intrapersonal coordination. But the extension view needs rational
requirements that forbid actions that would undermine the specific intentions in question.
Succumbing to temptation, especially if I do this in a predictable manner, will not prevent
me from coordinating with myself or others; moreover, restricting reconsideration to the
cases in which I am in danger of succumbing to temptation does not begin to threaten to
make me into a reconsideration monster.
4 Policies as Actions
Let us distinguish between two kinds of actions: “gappy” and “gapless” ones. Gapless actions are such that any part of the action is either a constitutive or an instrumental means to the larger action. Swimming from A to B might be like this: while I am swimming, everything I do is a means to swimming from A to B. But the same is not true of taking a walk. Suppose I go for a walk, and as I am engaged in this activity, I stop and look at a beautiful suit on display in a store. I am still taking a walk when this happens, even if stopping to see the display is not a means to my taking a walk. Once we introduce the notion that actions can be gappy, there is no principled reason to limit how wide the gaps between the proper means to the actions are. So we might think that the peripatetic philosopher is engaged in an action much like taking a walk; it is in fact the action of taking
a walk with much wider gaps, stretched over a much longer time.18 My suggestion is that any kind of policy, long-term action, commitment, etc. can be understood in these terms: as a continuous (though “gappy”) action. However, I am not concerned here with the metaphysics of actions or policies; all I want to argue is that we best conceive of the reasons that policies19 give rise to if we treat them as ordinary actions. More particularly, if we treat a commitment as an ordinary action, we can think of the cases in which one acts in accordance with one’s policies simply as parts of this action that are instrumental or constitutive means to the action.

If I am baking a cake, then in order to count the various parts of my action as rational, all that needs to be present, ceteris paribus, is the following structure:

18 Accompanied, of course, by some contemplation.
19 When I talk about policies, I mean to include also long-term projects, plans, commitments, etc. But it would tire the patience of the reader/audience to keep repeating those.
Baking a cake is gappy; not all my actions will be means to baking the cake. I’ll also stop to pay attention to some piece of news on the radio, look out the window to see if the neighbour’s cat is destroying my plants, etc. Baking the cake can be represented thus:

Obviously, there is a difference between the rational relation of the larger action to its means and the relation of the larger action to its other parts. After all, my reason for measuring the flour is that I am baking a cake, but my reason for checking on the cat is certainly not that I am baking a cake. But we can say that the larger action controls these other actions insofar as the amount of time and effort that I spend on these other activities, when I am acting rationally, is partly determined by my interest in performing the larger action: I should listen to the radio only as long as it does not interfere with my baking the cake. So if we adopt the view that a policy is an ordinary action, we would accept that the same
rational control holds between the commitment and its instances as between the larger action and its parts. Let us take, for instance, a policy of exercising regularly. The structure that this view would posit would be one of direct control of the policy over the actions, as in the following:
POLICY AS ACTION MODEL (PAM)
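The control structure that PAM posits can be sketched as follows; the particular activities and the interference test are illustrative assumptions.

```python
# A larger action (or, on PAM, a policy) whose parts are either means to it
# or "gap" activities that it merely controls. The classification and the
# control rule are illustrative assumptions.

parts_of_baking = [
    ("measuring the flour", "means"),   # my reason: I am baking a cake
    ("mixing the batter", "means"),
    ("listening to the radio", "gap"),  # my reason is not that I am baking a cake
    ("checking on the cat", "gap"),
]

def controlled_ok(part_kind, interferes_with_larger_action):
    """Rational control: gap activities are permissible only so long as they
    do not interfere with performing the larger action."""
    if part_kind == "means":
        return True
    return not interferes_with_larger_action

# With no interference, every part of the gappy action is in order.
assert all(controlled_ok(kind, False) for _, kind in parts_of_baking)
# Listening to the radio briefly is fine; listening until the cake burns is not.
assert not controlled_ok("gap", interferes_with_larger_action=True)
```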
Now a view committed to intentional autonomy will claim that the normative demands that general intentions make on actions are sui generis and cannot be represented simply as instances of the demands of instrumental rationality. Given time constraints, I’ll discuss here only a particular “two-tier model” (TTM) (not Bratman’s),20 a model according to which the rationality of particular actions that comply with or violate a policy is to be evaluated in two stages. First, we evaluate whether an agent should have reconsidered the policy by examining the habits, dispositions, or principles of reconsideration that it would be rational for the agent to have. It is rational for an agent to reconsider21 a policy if and only if these habits, dispositions, or principles call for reconsideration. If the agent does reconsider the policy, a second tier evaluates the rationality of her decision on the basis of the reasons for and against blocking or rejecting this policy on this occasion.
Let us distinguish between two kinds of policy: vague and strict.22 Strict policies are
such that they call for the performance of every token of a certain type of action. A vague
policy allows for not precisely specified "exceptions". So, for instance, for many people a
policy of loyalty to their partner is a strict policy. On the other hand, one can have a more
Leonard Cohen-like fidelity policy, which requires you only to be “faithful, ah, give or take a night or two”. Now TTM and PAM cannot give different verdicts on strict policies, for each
must classify every action that fails to conform to the general policy as irrational. It is in the
case of vague policies that the potential for disagreement comes up. So let us focus on these
and see what kind of argument can be made in favour of TTM. Let us take the policy of
having fruit for dessert instead of chocolate. Suppose my reason for adopting this policy is
health related: I like chocolate better, but I am concerned about my health. The best policy
20 I defend an extension of the argument here for Bratman’s version in the longer version of the paper. The two-tier model here is really just a toy model to help present the argument in a succinct manner.
21 “Reconsidering” here includes both examining whether the policy should be rejected and whether it should be blocked. For the distinction, see Bratman…
22 More precisely, we should distinguish between loose and strict policies and, among the loose policies, vague and non-vague. This is a simplified version.
in this case then would be a vague one; this is a policy whose application a rational being
would block in certain circumstances. Now on PAM, as long as my actions exhibit the overall
pattern in which I don’t eat too much chocolate, there is no further question about the rationality of my conduct. On TTM, on the other hand, in at least some cases of my eating chocolate, I might have failed to act in accordance with rational habits and dispositions of reconsideration. Suppose, for instance, my habit is to reconsider my intention not to eat chocolate for dessert only if there’s something unusual in the situation: this is an especially good chocolate, or the chocolate goes particularly well with the liqueur being offered.
Suppose now this is not one of these evenings,23 but I reconsider anyway and, in light of my current preference for eating chocolate, I simply do it. On TTM, this is a case of irrational action, while on PAM there’s nothing irrational in this action; only the general pattern of activity might turn out to be irrational (if there were many instances like this one).
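The contrast between the two verdicts on this evening can be rendered schematically; the reconsideration habit and the pattern threshold below are illustrative assumptions, not part of either view.

```python
# TTM and PAM as toy evaluation functions for the chocolate case.
# The habit ("reconsider only if something is unusual") and the acceptable
# chocolate rate are assumed for illustration.

def ttm_verdict(unusual_evening, reconsidered):
    """First tier: was reconsideration licensed by the rational habit?"""
    if reconsidered and not unusual_evening:
        return "irrational"  # reconsidered against the rational habit
    return "rational"

def pam_verdict(dessert_history, max_chocolate_rate=0.3):
    """PAM: only the overall pattern is assessed, never the single act."""
    rate = sum(dessert_history) / len(dessert_history)  # fraction of chocolate desserts
    return "rational" if rate <= max_chocolate_rate else "irrational"

# An ordinary evening; I reconsider anyway and eat the chocolate.
# TTM faults the act; PAM looks only at the pattern, which here is fine.
assert ttm_verdict(unusual_evening=False, reconsidered=True) == "irrational"
assert pam_verdict([True] + [False] * 9) == "rational"
```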
Since TTM makes an extra demand, it is fair to ask what the grounds for this further demand are. Here I can only go into one such argument, adapted from Gauthier's
argument for resolute choice. In discussing whether my current preference for chocolate now over fruit now could warrant a choice of chocolate over fruit, he says:
In considering my choices between a chocolate and a fruit, I am supposing
that if it were rational for me to deliberate on some basis for any such choice,
it would be rational for me to deliberate on that basis for every such choice.
23 And there are no other relevant differences.
For there is nothing to distinguish one choice from another... [Thus] choosing
on the basis of one's proximate preferences is not rational.24
Now this kind of consideration seems to lend support to TTM. After all, if we don’t rely on habits and dispositions that can ensure a certain regularity across the various choice situations, one might choose the same way in each of the repeated choice situations (in this case, chocolate in all of them) and end up with a disastrous outcome. However, further reflection lets us see that this example gives us a very simple argument in favour of PAM.
Let us define a notion of "top-down independent" irrationality as follows:

The rationality of a policy P is top-down independent if and only if, for an agent who (rightly) adopts P, there is a set of momentary choices S for the agent such that:
(a) no particular choice in S would be irrational in relation to P; and
(b) if the agent were to make all the choices in S, she would be irrational by violating P.
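A toy instance of the definition, anticipating the novel-writing example below; the thirty-day horizon and the 20% cutoff are illustrative assumptions standing in for the vague range of acceptable choice sets.

```python
# Top-down independence illustrated: no single day off violates the vague
# policy of writing "about five hours a day", but skipping every day does.
# The threshold is an assumed stand-in for the penumbral range.

DAYS = 30

def choice_ok(day_choices, day):
    """(a) No particular choice to skip is, by itself, irrational relative to P."""
    return True  # any single day off is permissible under the vague policy

def pattern_ok(day_choices):
    """(b) But the overall set of choices must stay within the acceptable range."""
    skipped = sum(1 for wrote in day_choices if not wrote)
    return skipped <= 0.2 * DAYS  # assumed cutoff standing in for the penumbra

all_skips = [False] * DAYS  # wrote on no day at all
assert all(choice_ok(all_skips, d) for d in range(DAYS))  # (a) holds choice by choice
assert not pattern_ok(all_skips)                          # (b) the pattern violates P
```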
The basic idea is that some vague policies can be violated by an irrational pattern of activity rather than by any particular irrational act. Suppose, for instance, I am writing a novel, and I decide to commit myself to a policy of writing five hours a day. Now this is a vague policy, since I might take certain days off, or be too busy on certain days for it to make sense for me to
24 "Resolute Choice and Rational Deliberation", p. 21.
write for five hours, etc. We might ask what would count as a successful execution of this policy. It seems quite clear that there is no single, precise set of momentary choices that constitutes the successful execution of the policy. It is more accurate to say that there will be a range of acceptable choice sets. Given the vague nature of what counts as success in the policy, there’ll be clearly acceptable choice sets, clearly unacceptable ones, and a certain penumbral area. Because of the penumbral area, there’ll be no point at which it would be right to say that I have reached the maximum number of violations allowed; there is no set of acceptable choices such that one more violation of the policy would place the agent within the range of unacceptable choices. So in this case, at each momentary choice node, there might be no
decisive reason for me not to violate the policy, but obviously if I always choose to violate
the policy I will not have successfully executed the policy. Of course, the existence of this
pattern by itself does not show that the rationality of the policy is top-down independent.
After all, one could argue that it is exactly the existence of such patterns that motivates the two-tier view. For a rational agent,25 the successful execution of the policy is guaranteed by the existence of dispositions that pass muster in the first tier of evaluation. If the irrationality
of violating the policy too much can always be reduced to the irrationality of the habits and
dispositions of reflection and reconsideration then there are no cases in which the
irrationality of the policy is genuinely top-down independent. However, I want to argue that
there must be genuinely top-down independent policies. And here we can simply hijack
Gauthier's argument for our own purposes.
Suppose there is a set S of choice situations such that each member of S includes as a possible choice a token of an action type T, and a range of alternative policies P0 through Pn, such that P0 calls for never performing actions of type T and Pn calls for always performing actions of type T. Now let us say that an agent facing this choice scenario knows of no relevant or salient features26 that distinguish any particular choice situation in S from all the other choice situations in S. There would then be no specific habits or dispositions that could be formed that would guarantee that the agent chooses an acceptable set of choices. In fact, if there really are no salient features of the situation that are accessible to the agent, and the choice situations in S could extend indefinitely through time, the only more specific habits or dispositions that the agent could form would be to choose always, or never, actions of type T. But it would be absurd to infer that rationality demands in that case that one either adopt P0 or Pn, but nothing in between; that is, that rationality requires that one could only be a teetotaler or a complete drunk, but nothing in between.

25 Limited rational agent.
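The closing argument can be sketched as follows; the "acceptable range" for performing T is an illustrative assumption standing in for the intermediate policies between P0 and Pn.

```python
# If nothing distinguishes the choice situations in S, the only dispositions
# keyed to features of the situation are "never T" (P0) and "always T" (Pn);
# yet an intermediate pattern can be perfectly acceptable. The acceptability
# band is an illustrative assumption.

def disposition_never(situation):
    return False  # P0: never perform actions of type T

def disposition_always(situation):
    return True   # Pn: always perform actions of type T

def acceptable(pattern, low=0.2, high=0.5):
    """Assumed acceptable range for how often T is performed."""
    rate = sum(pattern) / len(pattern)
    return low <= rate <= high

occasions = range(100)  # indistinguishable choice situations in S

assert not acceptable([disposition_always(s) for s in occasions])  # the complete drunk
assert not acceptable([disposition_never(s) for s in occasions])   # the teetotaler

# An agent who performs T on roughly a third of occasions, with no feature
# of the situations to guide her, still falls within the acceptable range.
mixed = [i % 3 == 0 for i in occasions]
assert acceptable(mixed)
```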
Obviously a more conclusive rejection of intentional autonomy would require us to
look at other possible functions of intentions, at other possible versions of TTM, and other
possible models for the autonomous requirements that intentions supposedly create.
Unfortunately, given our allotted time, these issues must be postponed at least until the question period.
26 Or even reliably expects that there will be no such features.