Chapter 3: Simple Moral Choice

People always act with the intention of bringing about a better world. This is very close to being
a truism. We suspect we are dealing with a strange person who says “I tried my best to get a
particular result because at the time I really wanted something else.” Preferred actions might be
blocked, or fuzzy thinking and misinformation can cause surprises and disappointments. But the
actions we can take now are always and only done to realize our vision of what is best. When
others make a case that someone should have acted otherwise, they do this by bringing in
supplemental conditions that are different from those of the decision maker. We might say, “If he
had known more about the situation, he would have acted differently” or “If I were in her place I
wouldn’t have done it that way” or “If they just wait, they will see how they could avoid going
wrong here.” None of these can stand in the place of the decision maker. There is a huge
literature on decision theory, 1 but all agree that we solve the problem we face, not others' problems
or hypothetical ones. Choice is point-blank action, action with an eye toward the future. 2
Moral choices are a special case of acting to bring about a better world that also involve
other people. My choice regarding where I live, the TV shows I watch, or the BMI I am
comfortable with would not normally be thought of as moral matters. Your approval or
disapproval of these would not count as moral either, at least not until you took action on them in
a way that mattered to me. Morality begins when we take action to make the world better by our
standards and that action has an effect on the future others enjoy.
I will use the term “engagement” to identify any moral situation, any situation where the
choice of one individual has an effect on the future of others. This and the following three
chapters are about the structure of moral engagements. In chapters 3 and 4 the goal is to find the
best decision rule for managing moral engagements. Altruism has been a popular favorite, but
various forms of self-interest and self-protection are surprisingly effective. Philosophers have
proposed special decision rules that offer advantages in some cases, and I will look very briefly
at two of the best. There is something to be learned by considering “cheating” decision rules for
moral engagements such as COERCION, DECEPTION, and RENEGING. 3 In the end, it will turn out
that none of these is the best and none is the worst. The overall best approach will be revealed in
Chapter 4, and that will provide the framework for the rest of the book. Chapters 5 and 6 explore
the nature of various engagements. Moral choice is difficult and applying the same rule the same
way in every situation is a blunt approach. Multiple small cases will be used to illustrate that the
“conditions on the ground” matter in deciding which action is best for making my world better
when others are involved.
Moral Choice Rules
Somewhere in my youth I heard a campfire story about a contest for who is the smartest and
most powerful person in the group -- lessons in alpha maleness for eight-year-olds. The story has
stayed with me for decades, and it continues to yield new insights. In barest outline, here is the
situation. A bully challenges the hero-in-training to solve a problem. He says “I have a bird in
my hands behind my back. To see how smart you are, just tell everyone here whether the bird is
alive or dead. Simple choice; speak loud enough for everyone to hear you.”
At a certain level, this is a problem in estimating probabilities by reading human nature
and the circumstances. Here is a brute who would have no compunction about bringing a dead bird
into camp to enhance his status. But there is no way to see any movement behind the guy's back.
He may just have a firm grip on a small, live bird. His scornful face seems to reflect more
disdain for the hero than it reveals about what is in his hands. What has been the case in similar
situations? Is the bad actor a known bluffer? As structured here, this is a purely instrumental
problem, like forecasting a change in the stock market.
But a little reflection shows this is a trick question. The bully has the power to prove the
hero wrong whatever the guess. Perhaps that is the point of the contemptuous challenge. The bird
is almost certainly alive now, so the bully can let it fly away while laughing in derision if the
hero guesses “dead.” If the hero guesses “alive,” the brute can easily arrange for that answer to
be wrong as well. This is a fool’s gamble, and the guesser is damned either way.
In recent years I have come to a more pragmatic understanding of this sort of issue. The
question is not about the facts of the matter, nor is it about being right. It involves bringing about
the kind of world heroes prefer to live in. The hero has the power to save the bird’s life. And that
is what he or she should do by guessing “dead” on the reasonable supposition that the bully is
desperate to “be right.” It is about risking being thought dumb or powerless for the reasonable
chance of “doing good” when possible.
Moral choice is not a description or a justification: it is action in an engagement that
bends the future. There is more than one way to play every game. What is best will only emerge
in the future, and it will involve the reciprocal hopes of more than one person. Neither the hero
nor the bully is morally independent of each other or the circumstances.
Perhaps the camp counselor who told this story many years ago drew exactly this lesson
at the time. I do not remember. It has taken me many years of living to learn in my own way that
life is not about “getting it right.” It is about “making it better” given the circumstances and other
people’s intentions.
A Paradigmatic Engagement
In July 2009, three Americans were hiking in what they believed was Iraqi Kurdistan.
They were arrested by Iranian officials and placed on trial for espionage. In September 2010,
Sarah Shourd was released for health reasons, after posting $500,000 in nonrefundable bail. A
year later Shane Bauer and Josh Fattal were released on “humanitarian” grounds after another $1
million was paid.
The American, the Iranian, and the world press have each told various versions of the
incident, and in a recent book (Bauer, Shourd, and Fattal 2014), the three imprisoned Americans
recounted their own story. There is almost no overlap between the various accounts, with charges
of spying, prisoner diplomacy, interference in the sovereignty of another country, and hostage
taking for money and prestige flying back and forth. Analyzing this situation in classical ethical
terms will generate parallel counter narratives, largely satisfactory only to those who share
common prejudices. 4 The best we could hope for would be high-tone finger pointing, with no
prospect of the rhetoric influencing anyone’s behavior. The incident seems mainly to have served
to exaggerate differences of principle across cultures.
It must not be forgotten, however, that a mutual action was eventually agreed on. Both sides
chose a common way forward that, apparently, neither felt could be improved on. Principles
aside, we can still ask whether the action eventually taken contributed to the best possible world.
This is a stereotypical moral engagement, Engagement #17 in the Appendix where all
possible moral engagements are listed. This particular case will be analyzed extensively in this
and the next chapter to demonstrate how agents can approach a shared situation in different
ways. Iran could release the Americans in exchange for monetary and diplomatic credit, or it
could hold them for an indefinite period and uncertain purposes. The United States could pay the
ransom for their release or refuse to pay on principles such as the hikers’ innocence and
abhorrence of state bribery. Thus there are four paths forward, as shown in Figure 3.1.
                              United States
                          “Deal”        “Free”

          “Deal”          [4  3]        [3  2]
Iran
          “No ransom”     [2  1]        [1  4]
Figure 3.1: Moral engagement matrix for the Iran Hiker Hostage situation.
Such schematic representations of moral engagements involving two agents, each with
two strategies, will be a regular feature in this book, so a little explanation is in order here.
helpful here. In this moral matrix, the numbers in [square brackets] represent rank preferences of
outcomes for each agent, with [4] being the most preferred future. The first number in each cell
stands for the preferences of the Row agent, Iran in this case; the second number in each pair is
for the rank preference of the Column agent, the United States. In the joint strategy that was
eventually settled on, “Deal” and “Deal,” Iran got the best outcome it could hope for [4] and the
United States settled for second best [3].
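This reading convention can be sketched in a few lines of code. The following Python fragment is my own illustration, not part of the book’s apparatus; it stores the Figure 3.1 matrix as a dictionary keyed by joint strategies, with the names ENGAGEMENT and outcome chosen here for convenience.

```python
# The moral engagement matrix of Figure 3.1 as a dictionary.
# Keys: (Row strategy, Column strategy); values: (Row rank, Column rank),
# with [4] the most preferred future and [1] the least.
ENGAGEMENT = {
    ("Deal", "Deal"):      (4, 3),
    ("Deal", "Free"):      (3, 2),
    ("No ransom", "Deal"): (2, 1),
    ("No ransom", "Free"): (1, 4),
}

def outcome(row_choice, col_choice):
    """Return (Row rank, Column rank) for a pair of chosen strategies."""
    return ENGAGEMENT[(row_choice, col_choice)]

# The joint strategy eventually settled on: "Deal" and "Deal".
iran_rank, us_rank = outcome("Deal", "Deal")
print(iran_rank, us_rank)  # 4 3 -- Iran's best outcome, the US's second best
```
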
It needs to be borne in mind that the international community sponsors such
engagements. In this case other nations kept a low profile because both sides made plausible
claims that could not be verified and intervention, say by the United Nations, was judged to carry
risks of destabilizing the region. The facts of the matter after the arrests were open to varying
interpretations. The state departments in both countries knew exactly how to spin the situation
for their audiences.
This sort of case is further complicated by the time dimension. Initially the joint
outcomes of Iran’s wanting to hold with no deal or the United States wanting free release were
both live options. But interest in each country and worldwide faded fast, and it cost each country
real dollars to hold prisoners and to mount negotiations. The engagement had a life of its own
and rank preferences changed over time. 5 The resolution in a moral engagement may have to
wait until its time arrives, and solutions that were sound at one time unravel as circumstances
develop.
For purposes of this example, I have assumed that the United States wants to be seen as
both innocent of spying and supporting the importance of each of its citizens’ liberties. I have
further assumed that the Iranian government derives little value from holding Americans that the
world had largely forgotten, and that they made their point in the first weeks by claiming that the
United States was sending spies across their borders.
The best outcome under the circumstances from the American perspective would have
been for the Iranians to release the three young hikers, tacitly admitting that they had made a
mistake -- the “No ransom” and “Free” combination of strategies [1 4]. In that future world the
Americans would have gotten their first choice and the Iranians their least desired future state.
The worst American outcome would be to fail to recognize that Iranians would lose face unless
some sort of deal is struck, “No ransom” and “Deal” [2 1], since the Americans would have
rotted in Tehran. Both remaining alternatives involve the United States making concessions. The
better of these two joint actions would be for a negotiated settlement to be reached involving
payment of bail-ransom for the hikers’ release [4 3]. It would be slightly worse if the United
States were willing to negotiate, but the Iranians interpreted this as a sign of weakness, leading to
protracted talks and escalating demands [3 2].
But it is appropriate as well to perform the analysis from the Iranian perspective. The
best combination would be a negotiated release which has the benefits of cash in hand, removal
of the burden of “caring for” three foreigners, and a degree of recognition of Iranian sovereignty
-- “Deal” and “Deal” [4 3] looks very good. Net-net, the outcome where Iran pushes too hard
for what it most wants but the United States is unwilling to deal is the least attractive outcome [1
4]. The United States would be saying in effect “you must live with the problem you created.”
The rank order of the middle outcomes probably favors protracted negotiations with a willing
United States [3 2] being better than releasing the hikers without compensation [2 1].
Picking Strategies or Using Choice Rules
It can be seen immediately that there is no Win-Win solution in this engagement. We
would be stuck if we demanded that the parties first agree on a common ethical principle since
their value structures do not overlap sufficiently. Any imposed solution, say by a commentator
who believes he or she has insight into the “true” nature of the issue, would amount to
solving the intervener’s problem, not the one facing the two nations. Such solutions from on high
are likely only to get as far as the analysis stage without being implementable short of third-party
police action. It is also apparent that changing others’ positions as a precondition to agreement is
a remote prospect here.
Many moral engagements have these characteristics, yet we intuitively sense that some
among the less-than-perfect joint actions are better than others. In order to do the right thing in
complex moral engagements it is often necessary to look at the problem from the other’s point of
view, and the decision sometimes comes down to the arrangement of the next best alternatives.
The framing described above in the case of the three American hikers is just one possible
way to look at the situation. It is quite possibly incomplete because both sides distorted the facts
and likely withheld information about the actual settlement. The rank ordering of preferences
could certainly be disputed, for example by radical patriots in either country who would obsess
over the value of some outcomes and deny the validity of others. We do know for certain that the
joint action of release on bail-ransom was the eventual outcome. We also know, from the
perspective being developed in this book, that that was exactly the right moral choice (see
Chapter 4).
Accepting the framing of the engagement as described in the moral matrix does not
automatically lead to a unique solution. Facts alone and even the interpretation of facts are not
enough to choose wisely among moral alternatives. Equally, values alone are not enough to get
the job done. Each agent wants to pick the best strategy, the most promising action, but there are
multiple ways to do this.
For example, the United States could have used the rule of “stick by our guns,” insisting
on a self-interested principle that “Free” is the only alternative in play. “What’s right’s right.”
“My way or no way!” Such a strategy appears attractive because one of its outcomes is the best
the United States could be hoping for – but only if Iran caved. The United States’ self-interest
above all approach might also backfire if Iran got its back up as well. Besides, Iran had exactly
the same rights to use a pure self-interest strategy. If both had insisted on being right there would
have been an indefinite standoff.
Another way of solving the problem would have been to search for strategies that
inflicted the maximal embarrassment on the other party. If both the United States and Iran had
picked strategies with this end in mind, the hikers would still be in prison. DECEPTION and
COERCION are also possible choice rules. Each side certainly attempted to limit the options
available to the other, and we can safely assume that there was a little faking and manipulation of
public opinion going on in hopes of throwing the other side off balance. There are still more
ways to decide how to manage moral engagements. Each engagement permits multiple,
justifiable alternatives for deciding how to make the right choice. We want to know which of
these choice rules works best, not in the sense of which rule sounds right but in terms of which
rule produces the kind of future world we most value.
“Dancing with the devil” cases are among the most challenging in ethics. There are
readers who will find this case disgusting from the outset. Any outcome that fails to give them
the result they want is inherently wrong. It must be borne in mind that radicals in Iran habitually
refer to America as “the devil.” They, and their American counterparts, would reject the
negotiated solution because they are actually addressing a problem that is different from the fate
of three hikers. They frame the problem as one of national interests or personal honor. There will
be four chapters – 7 through 10 – devoted to the impact of alternative framing or ways of
setting up moral engagements before we attempt to resolve them.
We would also have to look at the problem differently in the event of repeated hostage
taking. During the early decades of the nineteenth century, the Barbary States of North Africa
regularly kidnapped English, French, and sometimes American sailors and sold them back to
their countries. The European powers performed roughly the same analysis as that presented
above and negotiated treaties with standing agreements that Tripoli would not kidnap Europeans,
and in exchange they would receive rather handsome annual foreign aid payments. Protection
money was viewed as a more efficient arrangement all around. The impact on the newly
independent America was devastating, as the Barbary States simply shifted their business to a
new supplier. 6 During the presidency of Thomas Jefferson, approximately 20% of the cash funds
of the federal government (a puny total budget by today’s standards) were spent in ransoming
American sailors (Oren 2007). This eventually reached a point where it was less expensive for
the United States to impose a COERCIVE military solution.
Naming the Moving Parts
All of the elements of this approach to moral action are well established in the literature, except
for choice rules themselves. To avoid stumbling over terms, this brief section is a straight-up
glossary establishing the conventions that will be used for all of the moving parts of moral
engagements discussed in this book.
Engagement: A structured arrangement involving two moral agents, each with his or her own
view of the preferred world and the second best alternative, and a set of circumstances on the
ground is called an engagement. The term “game” is typically used in the literature, but
engagement carries less baggage. The selection of one choice for each agent produces an
intersection at one of the four possible cells representing future worlds and that determines what
will happen when action is taken.
Moral Agents: The individuals or groups involved in engagements are those that interpret or
frame the engagement, rank order preferences over outcomes, decide on choice rules, and take
action. Although moral analysis is performed from the perspective of one agent (ourselves), it is
always recognized that the other agent is morally interchangeable with us – they have the same
moral status, but not necessarily the same circumstances and preferences, and the approach they
take to the engagement could be the same as the one we take or different. The roles of the agents
are conventionally represented by the neutral labels “Row” and “Column” when discussing
engagements and by the assumption that each agent has an equal chance in the long run of being
either Row or Column in such engagements.
A moral object, in contrast to an agent, is any individual or group that is affected by
actions but does not have the opportunity to influence the outcome of the engagement. A person
who receives charity or justice is an object. Usually children and incompetent individuals are
objects. Individuals before the law are objects. Any action that fails to consider others’ potential
for moral choice turns them into objects. The framing of an engagement involving moral objects
only has two cells – the two options available to the agent.
Framing: Agents respond to the world as they interpret it. This may differ from the framing that
is given by a third-party and may even differ from the framing that would be selected by the
agent in hindsight or when it is recognized that circumstances have changed. Frames are neither
right nor wrong – they are the momentary givens on which one builds moral choice.
Strategies: Each agent has the option of several courses of action. In this book, the analysis has
been limited to two agents and two strategies. More complex analyses are possible, but the
convention is used here that the two agents are “me” and “the most significant other individual or
group agent,” and the strategies are “the most preferred” and “the next best.” The convention
will be followed of designating the names of “strategies” in quotation marks when they are
mentioned in the text or shown in matrices.
Outcomes: An outcome is whatever future world comes about as a result of both agents choosing
a decision rule and acting on it. Outcomes are not existing norms; they are conditional states of
the world. They are “how things would be if Row did this at the same time that Column did
that.” There are thus eight outcomes to consider in the standard moral engagement: the
combination of two strategies for each agent multiplied by two strategies for the other agent
producing effects for each of Row and Column – 2 x 2 x 2 = 8. By convention the outcomes are
arranged in a two-by-two moral engagement matrix with four cells representing the possible
combinations of strategies for agents and two outcome rankings in each cell – one for each agent.
For theoretical and descriptive purposes, a great deal of detail may be needed to characterize
outcomes. Most of the literature in academic ethical philosophy consists of elaborate descriptions
and analyses of outcomes that are then criticized or defended. For practical purposes of moral
action, all that is needed are the rank preferences for each agent. If we will pay a dollar for a cup
of coffee, we will also take the cup at 90 cents.
Outcome preferences will always be shown in [square brackets] with the rank for Row
being the left-hand number and the rank for Column being the right-hand number. [4] is the most
preferred outcome and [1] the least. It has been proven that there is always at least one best
mutual outcome in every engagement (Nash 1951). This will be signaled by [bold values in
square brackets]. When an outcome is chosen using suboptimal decision rules, this will be
indicated by circling the values in the cell.
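As a sketch of how such a best mutual outcome can be identified mechanically, the following Python fragment (illustrative only; the function name is mine) finds the cells where neither agent can do better by switching strategy alone. These pure-strategy stable points exist in most, though not all, two-by-two engagements; Nash’s result in full generality requires mixed strategies.

```python
def best_mutual_outcomes(matrix):
    """Cells where neither agent can improve by unilaterally switching
    strategy -- the pure-strategy stable outcomes of the engagement."""
    rows = sorted({r for r, _ in matrix})
    cols = sorted({c for _, c in matrix})
    stable = []
    for r in rows:
        for c in cols:
            row_rank, col_rank = matrix[(r, c)]
            # Row cannot do better in this column, and Column cannot do
            # better in this row.
            if (all(matrix[(r2, c)][0] <= row_rank for r2 in rows) and
                    all(matrix[(r, c2)][1] <= col_rank for c2 in cols)):
                stable.append((r, c))
    return stable

# The Iran Hiker Hostage matrix from Figure 3.1.
matrix = {
    ("Deal", "Deal"):      (4, 3),
    ("Deal", "Free"):      (3, 2),
    ("No ransom", "Deal"): (2, 1),
    ("No ransom", "Free"): (1, 4),
}
print(best_mutual_outcomes(matrix))  # [('Deal', 'Deal')]
```

In this engagement the only stable cell is the negotiated release, which is exactly the joint action the two countries eventually took.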
Value of the Engagement [Fitness]: The sum of the outcomes for both agents is called the value
of the engagement. In Win-Win situations, where both parties get their most preferred future
world as a result of interacting, the value of the engagement is [4 4], or 8. It is defensible to
combine the outcomes without regard to which agent gets the most preferred rank because, on
principle or as a general moral rule, we do not know whether we will be Row or Column. From
the perspective of the community, it makes sense to sponsor engagements that result in the
maximum common benefit. The value of the engagement is sometimes referred to as fitness,
especially in contexts where we observe the results of repeated engagements in a community
following a trend of increased fitness or where the overall moral quality of the community trends
downward.
Munificence: Sometimes the world deals both agents a winning hand; sometimes the
circumstances conspire so that the best possible for both agents is the least worst available
option. Munificence is an index of the maximum potential value for engagements if both agents
act wisely. This is the “best possible world under conceivable circumstances.” Some
engagements are more munificent than others. The best moral choices will minimize the gap
between the value of the engagement actually achieved and the munificence that is possible. It
will be shown in this and the next three chapters that no moral decision rule guarantees full
munificence, but some are generally better than others. We will be looking for the best, not
giving reasons why the perfect is not available except under contrived conditions.
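Both quantities are easy to compute from the rank matrix. The Python sketch below is my own illustration; in particular, “munificence” is read here as the largest cell total the engagement allows, which is one natural operationalization of the definition above.

```python
def engagement_value(matrix, cell):
    """Value of the engagement at one cell: the sum of both agents' ranks."""
    row_rank, col_rank = matrix[cell]
    return row_rank + col_rank

def munificence(matrix):
    """Best total value the circumstances allow if both agents act wisely,
    taken here as the largest cell total (an interpretive assumption)."""
    return max(row + col for row, col in matrix.values())

# The Iran Hiker Hostage matrix from Figure 3.1.
matrix = {
    ("Deal", "Deal"):      (4, 3),
    ("Deal", "Free"):      (3, 2),
    ("No ransom", "Deal"): (2, 1),
    ("No ransom", "Free"): (1, 4),
}

print(engagement_value(matrix, ("Deal", "Deal")))  # 7
print(munificence(matrix))  # 7 -- a Win-Win [4 4] engagement would score 8
```

Here the “Deal”/“Deal” outcome achieves the full munificence of the engagement; no gap remains between what was possible and what was realized.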
Choice Rule: Given an engagement, it still must be decided how the most promising strategy will
be chosen. One rule might be to select the strategy that gives the best payoff regardless of what
the other agent does. An alternative approach would be to protect oneself from the worst damage
the other agent could inflict if he or she were using the strategy mentioned in the preceding
sentence. Yet another alternative might be to give the other party the best chance of thriving, as
we often do for our spouses, children, and friends. We have options, and so do others. Different
decision rules will often result in different strategies being chosen when faced with the same
moral engagement. The convention will be followed of using small caps for choice rules in order
to make clear the distinction between “strategies” which are the actions taken and CHOICE RULES
which provide the criteria for selecting those actions.
Types of Engagement: Despite the seemingly bewildering array of moral challenges we face, it can
be proven that there are only 78 master types (Rapoport and Guyer 1966). And we will get to
that in the next chapter. Each possible moral engagement is shown in the Appendix. In Chapter
5, it will be convenient to group engagements that share common characteristics into families.
Standardized names will be used to refer to engagements that share common structures and these
will be denoted by bold font, such as Win-Win and Hawk and Dove and Battle of the Sexes.
Some Simple Choice Rules
Twelve common choice rules in moral engagement will be considered in this and the next
chapter. Each rule can be defined in unambiguous operational terms and measured quantitatively
– at least at the level of rank preferences -- so it will be possible to make precise comparisons
rather than vague generalizations. The formality of the engagement matrix makes it possible to
say useful things about various decision rules across all possible circumstances. We are looking
for the choice rule that, all things considered, gives the best value of the engagement.
Disengagement
Thomas Hobbes (1651/1962), who wrote in the middle of the seventeenth century, is best
remembered for his characterization of the state of nature as being “solitary, poor, nasty, brutish,
and short.” His solution was that we should agree to give a monarch absolute power to enforce a
social contract or agreement among members of the community. The absolute monarchy thing
did not actually work out as planned, as Charles I and others lost their royal heads over it. 7
Humans are constitutionally social beings, and if we understand evolutionary science correctly,
Homo sapiens developed because they were social rather than first evolving the physical and
neurophysiological capacities for engaging each other.
Accepting that we must all continuously engage to some meaningful extent, it is still
possible to explore the meaning of localized or situational disengagement. What can we expect
as a result of both parties occasionally walking away from specific engagements? Alexander
Selkirk, as was explained in the last chapter, tried it but found it not to his liking.
I pass a person on the street in need. I am in a hurry and do not notice that there is an
opportunity here for me to engage this individual in a way that will change his or her life for the
better, and perhaps mine as well. Or perhaps I just do not respond to an e-mail request. 8 A lot
of the time, nothing actually happens. Most of the time, this requires no effort at all, but sometimes
we make a strategic decision not to engage or even screen off certain types of engagements altogether.
Certainly, most of us in the Western world have the capacity this very afternoon to prevent
several people from starving to death next month somewhere in the poor parts of the world. Or
less dramatically, we often take a pass on volunteer work, charitable causes, or just being
friendly to those around us. The moral cost of disengagement in community is probably much
greater than many of the intentional abuses we wring our hands over when we hear them
reported on the evening news.
There is a moral choice rule for not being engaged. The DISENGAGEMENT position is to
imagine that the engagement does not exist and to accept whatever comes as a result of not
participating. Sometimes it requires foresight to arrange to be absent when moral issues arise.
Sometimes, we just pretend not to see them or that they are not really our business. The future
emerges without being deflected by any actions on our part. It is reasonable in such cases to say
that, on average, each future outcome is equally likely or will not have its trajectory altered. The
average of the ranks [4], [3], [2], and [1] is 2.5 for each agent. Combining the outcomes for Row
and Column we get a value of the engagement of 5.0 when we manage such opportunities by the
DISENGAGEMENT moral choice rule.
This is our moral baseline: the value of the engagement when we avoid relationships with
others. This book will repeatedly demonstrate that we can do better than this, often much better.
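The arithmetic behind this baseline can be checked in a few lines of Python (an illustration only; the variable names are mine).

```python
# DISENGAGEMENT baseline: with no action taken, treat each of the four
# rank outcomes as equally likely, so each agent expects the average rank.
ranks = [4, 3, 2, 1]
expected_rank = sum(ranks) / len(ranks)   # 2.5 per agent
baseline_value = 2 * expected_rank        # 5.0 for the engagement as a whole
print(expected_rank, baseline_value)      # 2.5 5.0
```
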
A rating system will be used in this chapter and Chapter 4 to evaluate the attractiveness
of a dozen choice rules. The maximum value on this scale will be 8, meaning that such a moral
choice rule always leads to the most preferred outcome for both agents. The minimum score on
this scale will be 2, signifying the worst possible pair of outcomes. Five, being the midpoint on
this scale, seems like the correct value to assign to the DISENGAGEMENT choice rule. A habit of
deciding moral matters that way does not help bring about better worlds, but it does not make
matters worse either. It is the default. There would have been no moral ripple in the world had
the three American hikers stayed at home, for example.
Best Strategy
Is it possible that the joint effect of each individual pursuing his or her own goals will
result in the best of all possible worlds – not just for the egotistically motivated, but all around?
Philosophers from Aristotle to Bentham have sought to anchor morality in some aspect of
human nature. Self-interest is an obvious candidate because it is so easy to make a very small
closed loop that says, “I may not know everything, but I know what I need better than anyone
else does,” or “That world with the fewest restraints on personal liberty is the best world.” But
this is a narrow use of the term “self-interest” – it just fits exactly where it is seen to fit. We need
a more serviceable rule if we want to do moral work. We want some version of the claim that
self-interest is moral because it benefits most people or society in general, not just one individual
at a time.
Certainly Ayn Rand and her philosophy of objectivism (Rand and Branden 1964)
preached this point of view. In the first book of the oldest political text, Plato’s Republic,
Thrasymachus makes the case that each should do as he wishes and the cream will rise to the top.
In fact, it is sometimes argued that being on top is proof that one has acted morally (Freeman,
Harrison, and Wicks 2007). This choice rule is familiar to the English-speaking world as “Mr.
Smith’s theory of moral choice.” In 1776 Scottish thinker Adam Smith argued in The Wealth of
Nations that everyone will benefit automatically when individuals pursue their own ends. His
theory of the “invisible hand” has come down to us as laissez-faire capitalism or old school
libertarianism. After all, “It is not from the benevolence of the butcher, the brewer, or the baker
that we expect our dinner,” Smith said, “but from their regard to their own interest.” 9 Every
country that has more than one political party divides over this issue, with one group holding that
limits on self-interest rob society as a whole and the other saying that self-interest is an
insufficient standard, and that community action protecting others benefits even the self-interested.
The notion that self-interest is the best approach for deciding moral engagements can be
tested using the Iran hiker hostage situation. We begin by making an operational definition of
what it means to follow a self-interested policy or what will be called the BEST STRATEGY choice
rule. The defining characteristic is simple: find the course of action that promises the best overall
pair of outcomes regardless of what the other party does. When each agent pursues their own
self-interest, the intersection of the two chosen strategies will determine the joint outcome, and
what each party is entitled to can be read in the intersection cell in the moral engagement matrix.
In the framing of the hiker hostage situation, Iran favors the “Deal” over the “No
ransom” strategy. The BEST STRATEGY action for Iran would be “Deal,” as indicated by the arrow.
This action picks out the two cells [4 3] and [3 2], with an expected payoff for Iran’s making
this choice of 3.5 = ([4] + [3]) / 2. This is better than 1.5 for “No ransom.” The situation for the
United States is a bit narrower, but on the whole it favors a “Free” approach. The best outcome
and the third best outcome are in the “Free” column; the second-best and worst outcomes are in
the “Deal” column. This gives the United States strategy of “Free” a value of 3.0 = ([4] + [2]) / 2.
This is slightly better than 2.0 for “Deal.” The shortcut of simply adding the ranks, using the
first number in each cell across a row (Row’s outcomes) or the second number down a column
(Column’s outcomes), works just as well, since the ordering is preserved. An example of a moral matrix
solution based on BEST STRATEGY is shown in Figure 3.2.
solution based on BEST STRATEGY is shown in Figure 3.2.
Figure 3.2 comprises four panels, BEST STRATEGY (top left), BEST OUTCOME (top right),
CONTEMPT (bottom left), and ALTRUISM (bottom right), each showing the same moral
engagement matrix with arrows marking the strategies that rule selects. Iran chooses the
row; the United States chooses the column; each cell lists [Iran’s rank  United States’ rank]:

                               United States
                           “Deal”        “Free”
Iran    “Deal”             [4  3]        [3  2]
        “No ransom”        [2  1]        [1  4]
Figure 3.2: Four decision rules for solving the Iran Hiker Hostage engagement.
What this quick little exercise shows is that Iran and the United States acting on their
BEST STRATEGY, independently of each other and with an intent to maximize their self-interest
without reference to the other party, will settle on the joint outcome of Iran seeking a deal while
the United States is not interested. The hostages would remain in Evin Prison in Tehran; the
negotiations would continue with decreasing interest on both sides. Iran would receive no cash or
vindication; the United States would be seen as placing national pride above the welfare of its
citizens. Rather than the [4 3] outcome that was actually achieved, the BEST STRATEGY pursuit of
independently maximizing self-interest would in this case have produced a [3 2] outcome in
which each party is worse off.
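For readers who want the mechanics spelled out, the BEST STRATEGY calculation can be sketched in a few lines of Python. The ranks are the ones from the engagement matrix above; the dictionary encoding and the function names are my own illustrative choices, not part of the formal apparatus.

```python
# Rank matrix for the Iran hiker hostage engagement (4 = best, 1 = worst).
# Each cell holds (Iran's rank, United States' rank); Iran chooses the row,
# the United States chooses the column.
IRAN = ["Deal", "No ransom"]
US = ["Deal", "Free"]
MATRIX = {
    ("Deal", "Deal"): (4, 3),
    ("Deal", "Free"): (3, 2),
    ("No ransom", "Deal"): (2, 1),
    ("No ransom", "Free"): (1, 4),
}

def best_strategy_iran():
    """Average Iran's ranks across each row and keep the highest average."""
    return max(IRAN, key=lambda r: sum(MATRIX[(r, c)][0] for c in US) / len(US))

def best_strategy_us():
    """Average the United States' ranks down each column and keep the highest."""
    return max(US, key=lambda c: sum(MATRIX[(r, c)][1] for r in IRAN) / len(IRAN))

iran, us = best_strategy_iran(), best_strategy_us()
print(iran, us, MATRIX[(iran, us)])   # Deal Free (3, 2)
```

The averages match the figures in the text: 3.5 versus 1.5 for Iran, 3.0 versus 2.0 for the United States, with the strategies intersecting at the unattractive [3 2] cell.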
In this example, insistence on maximizing self-interest works against both parties. That
does not happen all the time. In most cases, BEST STRATEGY functions about as well as any other
approach. In some situations winners become losers and in others one party gains or loses while
the other remains unchanged. There are even some very rare cases where everyone comes out
ahead pursuing their own self-interests. But it is more likely that this approach erodes the
common good, often without enriching more than a few. This can be read as a straightforward
political statement: BEST STRATEGY is acceptable but not a wonderful way of organizing a
community.
On the scale of 2 to 8, where 8 means that the strategy benefits everybody perfectly,
where 5 means that there is nothing to be gained by becoming involved or declining
participation, and 2 is the worst that could possibly happen, the BEST STRATEGY approach to
moral choice gets a 6.0. All things considered, that is an improvement over the “solitary, poor,
nasty, brutish, and short” option, but there is still ample room for doing better.
Best Outcome
Self-interested decision rules for moral engagements are easy to work with. No
consideration need be given to the interests of others: it is assumed that nature takes care of that.
A variation on this approach is easier still. It could be argued that in BEST STRATEGY, outcomes
that are not chosen carry too much weight. What we are really aiming for is the BEST OUTCOME,
not spreading our interest across averages that depend on others’ choices. The focus should be
on the top prize. Few serious philosophers defend this position. Ayn Rand, Machiavelli, Don
Corleone, and some others come across as advocating that the end justifies the means.
After-the-fact enthusiasm often plays up this go-for-broke approach, but only when it is successful.
10
Think of the difference between BEST STRATEGY and BEST OUTCOME in these terms.
Individuals who calculate the likelihood of winning realize that “playing the lottery” is a losing
game. As the size of the prize goes up, the chances of winning it go down and there is never a
cross-over point. The BEST STRATEGY is to “avoid the lottery” unless one has such a small
amount of money and needs such a large amount that keeping the small amount is certain failure.
Losing everything to pay off a portion of the “pay-day” loan may still leave one in impossible
debt. That reasoning is why the lottery is more likely to be played at the low end of the economic
spectrum. BEST OUTCOME works on this logic. Only the reward matters. The larger the prize the
more one should invest in getting it -- period. The BEST OUTCOME moral choice rule shows up in
incentive bonuses for CEOs, large contracts for star athletes in years following unusually good
performance, and politicians who value getting elected above service. The same argument is
usually made for BEST STRATEGY and BEST OUTCOME: it is morally justified because everyone
will benefit in the long run. And, of course, that is right – kind of.
Consider the moral engagement matrix for the Iran Hiker Hostage case again as shown on
the top right in Figure 3.2. This time we focus on only the cells with [4] in them – the strategy
where Iran gets its way and the strategy where the United States gets its way. The result is read
from the cell where these two strategies intersect. We look at the intersection because each party
will follow its strategy independently. Whether either party intends that joint outcome or not,
they intended the strategies that produce it. In the engagement matrix, Iran favors “Deal” because
that is where its best outcome [4] is located. The best payoff [4] for the United States is in the
“Free” column. If each party followed the strategy associated with its BEST OUTCOME choice rule,
the result would be the Iran “Deal” and United States “Free” combination. In terms of ranks for
the parties, this alternative is the same unattractive [3 2] outcome. It is a slight disappointment
for Iran and a big come-down for the United States. Although the two self-interested choice rules
often lead to the same results, that is not always the case. The important difference between the
two self-interest strategies is that BEST OUTCOME produces wider swings and more unpredictable
results, in some cases even forcing one party to take its worst option [1].
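Using the same illustrative encoding as before (my own, offered only as a sketch), the BEST OUTCOME rule reduces to ignoring averages and simply locating each party’s [4]:

```python
# Hypothetical encoding of the Figure 3.2 matrix: (Iran's rank, US's rank), 4 = best.
IRAN = ["Deal", "No ransom"]
US = ["Deal", "Free"]
MATRIX = {
    ("Deal", "Deal"): (4, 3),
    ("Deal", "Free"): (3, 2),
    ("No ransom", "Deal"): (2, 1),
    ("No ransom", "Free"): (1, 4),
}

def best_outcome_iran():
    """BEST OUTCOME for Iran: the row that contains Iran's top rank [4]."""
    return next(r for r in IRAN for c in US if MATRIX[(r, c)][0] == 4)

def best_outcome_us():
    """BEST OUTCOME for the United States: the column containing its top rank [4]."""
    return next(c for c in US for r in IRAN if MATRIX[(r, c)][1] == 4)

joint = (best_outcome_iran(), best_outcome_us())
print(joint, MATRIX[joint])   # ('Deal', 'Free') (3, 2)
```

In this particular matrix the result coincides with BEST STRATEGY; the wider swings the text describes appear in matrices where the two rules diverge.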
As with the BEST STRATEGY approach, there are some cases where BEST OUTCOME is
satisfactory and rare cases where it performs well. Those who favor these decision rules argue
their point by selecting those peculiar cases where their favored policy works and ignoring the
rest. The justifications depend heavily on case selection, often after the fact. On the scale of 2 to
8, with 5 representing the base point, BEST OUTCOME also gets a 6.0.
Playing It Safe
Some folks might be tempted to boost the estimates for these “shoot for the win”
approaches just a little because of their adventuresome personalities. On the other hand there are
what might be termed “moral conservatives.” Rather than picking the strategy that promises the
most if others go along with it, some naturally prefer a choice that minimizes one’s losses should
others play mean.
The operational rule is simple. Identify the strategy where the worst possible outcome is
[1] and then select the other strategy so that the worst will be avoided no matter what the other
agent does. Both agents can use this decision rule. If we cross out the bottom row in the matrix at
the beginning of this chapter, Iran’s “No ransom” strategy, because the United States could really
damage Iran by choosing the “Free” option, and we cross out the “Deal” column for the United
States because that would lay it open to Iran’s opting for “No ransom,” we are only left with one
possible joint outcome: Iran is willing to make a move but the United States is not. This leads
again to the less than attractive [3 2].
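The PLAY IT SAFE rule is the classic maximin rule: compare the worst cell in each of your strategies and keep the strategy whose worst case is least bad. Sketched against the same hypothetical encoding used earlier (the names and dictionary are mine):

```python
# Same engagement matrix: (Iran's rank, US's rank), 4 = best, 1 = worst.
IRAN = ["Deal", "No ransom"]
US = ["Deal", "Free"]
MATRIX = {
    ("Deal", "Deal"): (4, 3),
    ("Deal", "Free"): (3, 2),
    ("No ransom", "Deal"): (2, 1),
    ("No ransom", "Free"): (1, 4),
}

def play_it_safe_iran():
    """Maximin for Iran: keep the row whose worst rank for Iran is highest."""
    return max(IRAN, key=lambda r: min(MATRIX[(r, c)][0] for c in US))

def play_it_safe_us():
    """Maximin for the US: keep the column whose worst rank for the US is highest."""
    return max(US, key=lambda c: min(MATRIX[(r, c)][1] for r in IRAN))

iran, us = play_it_safe_iran(), play_it_safe_us()
print(iran, us, MATRIX[(iran, us)])   # Deal Free (3, 2)
```

Iran’s “No ransom” row (worst case [1]) and the United States’ “Deal” column (worst case [1]) are exactly the options the rule crosses out.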
In general, however, the PLAY IT SAFE decision rule works slightly better than the more
aggressive options of BEST STRATEGY or BEST OUTCOME. Let’s give it a 6.1.
Contempt
Now let’s look at two choice rules that take others’ interests into account. One such
other-oriented approach would be to fasten on ways to hurt others. How silly, one would think,
to build a theory of morality on the idea of damaging or destroying those we disagree with or
punishing them so much that they give up their dreams. Or is it? Hitler had such a plan for
improving the world. Recently, American political campaigns have been structured around getting the
opponent out of office rather than enhancing the prosperity of the nation. And when listening to
ethical debates, it is sometimes possible to get the idea that the goal is just to prove the other guy
wrong. The recent United States Supreme Court decision, “Citizens United,” established the
precedent that large-money interests can anonymously contribute unlimited sums to influence
political campaigns -- provided that they do not endorse a named candidate. The only way to
make this work is to run attack ads against named opponents.
To include CONTEMPT among the approaches for managing moral choice, there has to be
a claim somewhere that someone actually considers forcing others to “take a [1]” and live with
the worst of possible outcomes. The Inquisition offered its victims a choice: salvation and a
quick death by strangulation or damnation and burning. All of this was done for the sake of both
the community at large and for the ultimate good of the religiously misguided. Suicide bombers
seem to be working with a CONTEMPT moral decision rule. Penal codes based on retributive
justice assume that punishment for crimes benefits society as a whole.
It would not be an unrealistic perversion of the Iran Hiker Hostage moral engagement to
apply a CONTEMPT approach to its resolution. More than a few in both countries felt and likely
still feel that no opportunity should be missed for damaging the enemy. And the lives of three
unknowns matter little for the sake of the greater cause.
The operational procedure for CONTEMPT is similar to the BEST OUTCOME approach, but
turned on its head. Instead of looking for the intersection of my best [4] while the other agent is
playing his or her best [4], I look for the intersection of the other’s worst [1] while he or she is
searching for my worst [1]. In the hiker hostage case, it happens that this approach intersects on
the “No ransom” / “Free” future – a terrible outcome for Iran, and less than ideal for both parties.
Some might still defend this approach as the United States is damaged less than Iran, but the
value of the engagement is diminished and Iran will certainly look for opportunities to reverse
this settlement.
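As a sketch only, CONTEMPT inverts BEST OUTCOME: each party hunts for the cell holding the other’s [1]. The encoding below reuses my hypothetical dictionary representation of the matrix; the function names are illustrative.

```python
# Same engagement matrix: (Iran's rank, US's rank), 4 = best, 1 = worst.
IRAN = ["Deal", "No ransom"]
US = ["Deal", "Free"]
MATRIX = {
    ("Deal", "Deal"): (4, 3),
    ("Deal", "Free"): (3, 2),
    ("No ransom", "Deal"): (2, 1),
    ("No ransom", "Free"): (1, 4),
}

def contempt_iran():
    """CONTEMPT for Iran: the row containing the United States' worst rank [1]."""
    return next(r for r in IRAN for c in US if MATRIX[(r, c)][1] == 1)

def contempt_us():
    """CONTEMPT for the US: the column containing Iran's worst rank [1]."""
    return next(c for c in US for r in IRAN if MATRIX[(r, c)][0] == 1)

iran, us = contempt_iran(), contempt_us()
print(iran, us, MATRIX[(iran, us)])   # No ransom Free (1, 4)
```

The strategies intersect at the “No ransom” / “Free” cell discussed above, handing Iran its worst rank [1].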
It is exceedingly rare that this approach can contribute anything of value to moral
communities. All things considered, CONTEMPT merits about 4.0 on our 2-to-8 scale. A score of 5
is breakeven so this makes CONTEMPT the first, and as it turns out the only, choice rule that works
against morality by engaging others in general. It is always better in the long run to be morally
engaged in community – unless one is tempted to act out of contempt for others.
Altruism
We are disposed to applaud acts of self-sacrifice, and we honor those who help others. It
is almost as though advancing others’ interests means the same thing as being moral. Especially
in the case of children, the traumatized, and the compellingly disadvantaged, there is something
approaching a duty to place others’ interests above our own – at least in small doses. The
community encourages acts on behalf of the helpless as the surrogate moral agent. And it should.
Much has been made of ALTRUISM recently among writers in the evolutionary biology
field. Robert Trivers describes this behavior as “reciprocal altruism,” a term that implies that
agents hope to bank good-deed credit for future personal use. Garrett Hardin describes exactly the
same behavior as “reciprocal self-interest.” 11 ALTRUISM becomes a bit abstract when we talk
about the rights of future generations, as in the global warming debates. 12 Certainly, wholesale
ALTRUISM is just silly. The question is whether advancing the interests of other moral agents at
one’s own expense is a good long-term choice rule.
There is no doubting the good that comes to both the giver and beneficiary of charity.
What we want to know is whether ALTRUISM is a good moral choice rule when both agents are
given a free voice. Do the poor really want food stamps, for example, or would they prefer jobs
with dignity or even just to be left alone with their right to privacy intact? Perhaps those we help
often have a better idea for improving their lot than do we who would help them. 13 Let’s see
what happens in an engagement where both parties are capable of moral agency.
The operational definition of ALTRUISM in a moral engagement is to select the action that
maximizes the other party’s best outcome. Pick the strategy that leads to the other agent’s [4]
regardless of what happens to you. In the Iranian hiker hostage case, this would mean that Iran
moves to promote American interests and at the same time America seeks to make Iran better
off. It is not unheard of in international relations to help another country. France certainly did
so during the colonies’ War of Independence, and we reciprocated twice during European wars
in the twentieth century. In our example, Iran would select the strategy under which the United
States gets the hostages back unconditionally. The United States could pursue the strategy that
allowed Iran to negotiate for the best deal. So Iran would pick the “No ransom” strategy, and the
United States would pick the “Deal” strategy, each party accommodating to what it believes the
other prefers. The intersection of the “No ransom” and “Deal” strategies is the lower right cell in
the matrix shown in Figure 3.2. The values there are [2 1]. As a result of trying to accommodate
what they believe is best for the other, the ALTRUISM choice rule produced the worst possible
outcome.
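The ALTRUISM rule is the mirror image of BEST OUTCOME: each party hunts for the cell holding the other’s [4]. Sketched against my hypothetical encoding of the matrix (names and representation are illustrative only):

```python
# Same engagement matrix: (Iran's rank, US's rank), 4 = best, 1 = worst.
IRAN = ["Deal", "No ransom"]
US = ["Deal", "Free"]
MATRIX = {
    ("Deal", "Deal"): (4, 3),
    ("Deal", "Free"): (3, 2),
    ("No ransom", "Deal"): (2, 1),
    ("No ransom", "Free"): (1, 4),
}

def altruism_iran():
    """ALTRUISM for Iran: the row containing the United States' best rank [4]."""
    return next(r for r in IRAN for c in US if MATRIX[(r, c)][1] == 4)

def altruism_us():
    """ALTRUISM for the US: the column containing Iran's best rank [4]."""
    return next(c for c in US for r in IRAN if MATRIX[(r, c)][0] == 4)

iran, us = altruism_iran(), altruism_us()
print(iran, us, MATRIX[(iran, us)])   # No ransom Deal (2, 1)
```

The two accommodating strategies pass each other by and intersect at [2 1], the worst joint cell in the matrix, which is the Abilene-style failure the next paragraph describes.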
One is reminded of Jerry Harvey’s (1996) Abilene Paradox, in which a family tries to
decide where to have dinner by guessing what the others would prefer. After much posturing, it
is agreed to drive 50 miles to Abilene to eat at a restaurant that no one likes. Harvey uses this
example to explore the problems associated with not being honest about our true interests. O.
Henry (1905) tells, in the short story “The Gift of the Magi,” of a woman who sells her beautiful
long hair to buy a chain for her husband’s watch for Christmas while her husband sells his watch
to buy his wife a comb for her hair. The vulnerable spot in ALTRUISM occurs when we act on our
interpretation of what others value instead of engaging them as moral agents who can and should
speak for themselves.
The capacity for altruism is necessary as part of carrying out other moral choice rules. It
requires a bit of discipline and sacrifice to accept second best or worse even when that is the
right thing to do, as in some versions of BEST STRATEGY, for example. But that does not make
ALTRUISM a good policy in general. Sometimes it is necessary to curb one’s own advantage to
protect the moral community; most of the time it is not. Morality includes the tool of altruism,
when appropriate, but it is unwise to equate morality and altruism. 14
ALTRUISM gives a good feeling just in the very act, without considering its effects. In that
sense it is like eating ethical ice cream. As an approach to making moral choices it does not turn
in a stellar performance any more than gross indulgence in ice cream does. On a scale from 2 to
8, I give ALTRUISM a 6.0 – the same as the selfish rules.
Progress Check
It is reassuring to learn that a general policy of CONTEMPT is a lousy practice. Communities
usually, but not always, try to limit that sort of behavior; individuals, except where sociopathic
instincts cannot be suppressed, also avoid it because they realize that anyone might become a
victim of that sort of behavior, including themselves.
Perhaps there will be some secret smiles of satisfaction that self-interested choice rules,
although not sure winners, are not so bad after all. Both BEST STRATEGY and BEST OUTCOME beat
DISENGAGEMENT. Hard bargaining, where our interests are on the table, generally works better
than no bargaining, when we decline to explore the possibility of mutual self-improvement.
The largest disappointment so far likely comes from the finding that ALTRUISM is
not an all-purpose choice rule. Perhaps we suspected this all along, given that it is far
from being a universal or even a common practice. It is most often observed on special
occasions. It is certainly true that individuals who lack the capacity to make personal sacrifices
for the sake of others will turn out to be morally handicapped. But it is a mistake to take part
of morality for the entire activity.
But we need to go on to the next chapter. So far only half of the twelve important choice
rules for morality have been introduced. As it happens, we have already stumbled across the
worst: CONTEMPT. But we have not yet found the best. Nor have we looked at three versions of
“cheating” choice rules – DECEPTION, COERCION, and RENEGING. Most important of all, it remains
to be demonstrated that the ratings on the 2-to-8 scale are more than the mere opinions of the
author. A more rigorous, technical argument must still be presented.
Point of View: Can We Quantify Morality?
There is something a bit discomforting about statements such as “BEST OUTCOME is more often a
good choice rule than is CONTEMPT.” Saying that there are exactly 78 types of moral
engagements has the appearance of unjustifiable false precision. We may think we have the
general feel for which future worlds we would prefer, but we instinctively draw back from being
more definite than necessary about these sorts of things. Is there not a greater danger in defining
moral choice rules in operational terms than leaving such matters open to personal and
situational needs? Morality somehow loses some of its dignity when numbers begin to appear in
the same sentences with terms such as ALTRUISM. Morality counts, but perhaps not in the way
being developed in this book.
There is practical wisdom in this concern. Great care is necessary to minimize the misuse
of numbers. There are certainly things that cannot be said about morality in quantitative
terms. But number blindness is not a guarantor of morality.
Is it not possible to use miles per hour to determine whether a driver is breaking the law,
or febrile temperature as an indicator of health? Numbers need not be error-free to be useful, at
least as long as they are not biased. It would be inappropriate to say that temperature equals
health by virtue of measuring every aspect of it. All that is required to justify numbers here is
that we are capable of managing health better by accurately quantifying temperature or morality
by enumerating the situations where one approach more often leads to the futures we prefer than
the alternatives do.
There is a special difficulty with numbers in naturally occurring cases like these. The
problem runs all the way down to knowing when an event of moral significance has actually
occurred. It would be very difficult to say in any meaningful way how many of them you or I
have been involved with so far today. This is a problem in the field of logic known as “the
vagueness issue,” and is more popularly discussed under the heading of the sorites puzzle. Soros
is Greek for “heap,” and the original puzzle was this: It is agreed that a single head of grain does
not constitute a heap and that many thousands of heads of grain do amount to a heap. So at what
precise point does the addition of a single head of grain first establish the heap? The fact that
only philosophers worry about these things should be a tip-off that we get on with solving these
problems in a practical way every day. We use a scoop of known size or a scale. We
operationalize based on our needs. I want to make the case that ethics is not countable, but that
moral acts are, in general and practical ways.
I am a pragmatist on these issues. If you can show me a sense in which it makes a
difference whether certain behavior matters morally, I will be able to count it. If your or my
sense of what counts morally is perfectly private, vague, or in regular flux, the whole enterprise
will falter, no matter what other theoretical abstractions are trucked in to prop it up.
If two metal pointer tips are touched simultaneously to a person’s skin several
millimeters apart this will be recognized as two distinct points. As the distance is decreased, a
point will eventually be reached where only one stimulus can be recognized. The threshold for
this transformation is called the JND or just noticeable difference. What we take to be one is one.
A similar operational criterion can be used in morality. Measurement is purpose-driven, not the
other way around. Those who know what they are looking for can measure it. 15
Today it is seldom argued by philosophers that there is a distinct line between entities or
that belonging to a category of interest can only be defined one way. The currently received view
is known as supervaluationism, with the particular version that emphasizes contextualism being
prominent. 16 The common perspective in this line of thought is that the evaluator participates in
the categorization of any classification scheme or counting rule. If an individual’s behavior
affects another’s future, we are dealing with a moral event; otherwise it does not count. What
actions will constitute a change cannot be known for certain in advance, so we pay attention to
consequences. Recently, citizens in California changed the state’s three-strikes sentencing
guidelines by referendum vote, altering the way offenses are counted based in part on the effect
of the offense.
Counting is only a tool, and the value of the tool will be judged by its aptness for getting
the work done. I propose that operational definitions, measures of value of the engagement,
ranks across preferences, permutations of ranks over situations, and even generalizations across
categories of moral behavior are workably stable and thus quantifiable and that such measures
are useful in deciding how to bring about futures we prefer.
There is a deeper concern some may have that I can only acknowledge and on which I
may very well not be able to provide general satisfaction. Ethics has a special and dignified
status in social discourse. We have not been able to come to general agreement on which
principles are the primary ones in the two and a half millennia since Socrates and his
contemporaries in other cultures and religions began working on it. The disputes that continue
today have polarized individuals and communities, often radically. Some take the matter of how
others behave very seriously. It may seem to them that talking in terms of operations and
numbers, even formulas as weak as “X is preferred to Y” disrespects ethics. The last thing some
who do battle in the field want is for someone to offer them a ruler to measure what they say is
so.
For this reason I promise not to make any claims about measuring ethics or ethical theory
whatsoever. I think I agree with those who say it cannot be done – “on principle” in both senses
of the phrase. However, I will defend my position that behavior can be measured in
quantitatively useful ways. Actions can be counted; it can be recorded when an individual
chooses one behavior over another; preference rankings can be enumerated, with due allowance
for reasonable changes of opinion.
patterns in the brain. This is an advantage that a thoroughgoing naturalism has over normative
theory. We can count moral behavior even when it is better not to make quantitative claims
regarding ethical theory.
1
For starters on decision theory see Michael Resnik’s Choices (1987) or Itzhak Gilboa’s (2009)
Theory of Decision Under Uncertainty.
2
There are, of course, situations where we intend to do X and accidentally do Y instead.
Industrial psychologists discuss this as the difference between a mistake and a slip. I might type
“This difference is to subtle to matter.” If a slip – a typo – I will say “oops” and fix it. Lecturing
me on the rule will not help. If it is a mistake, I might respond to instruction. Intentional
immorality is a mistake caused by not understanding how to bring about a better world. Slips are
just random lapses that will wash out in the long run. The purpose of this book is to diminish
moral mistakes. See James Reason Human Error (1990).
3
The use of small caps to designate moral decision rules and bold to designate types of
engagements will be described below.
4
A 65-year-old study has become a classic demonstration of prejudiced social perception. Albert
Hastorf and Hadley Cantril (1954) showed movies of a football game between Dartmouth and
Princeton that featured especially rough play. To no one’s surprise, the overwhelming percentage
of Dartmouth students were certain Princeton started it, while Princeton students were equally
lopsided in claiming that the instigators were the Dartmouth squad.
5
Robert Rosenthal (1981), among others, has worked out the logic of “centipede games” that
involve backward deduction across chained decisions. Robert Aumann (1995) pointed out that
the clock may be ticking for one or both parties toward a game changer. Sarah Shourd recently
reported on PBS radio that the Iranians became increasingly willing to reach a settlement as the
three Americans had become more trouble than they were worth. Generally time pressure works
more to the disadvantage of one agent than the other, and the harried agent is the one most likely
to make accommodations. The nature of engagements that “shift” over time will be taken up in
Chapter 9, where it will be shown that in some cases there is something like a moral imperative
to work to get the engagement in an optimal form before trying to reach a decision.
6
It is important to keep this example in mind because it demonstrates that moral value can be
measured in both “absolute” and “relative” terms. Without being involved at all, one can be
damaged or benefitted by the actions of others. For example, a student may miss out on getting
into a competitive graduate program because his classmates cheated on a qualifying examination
and took the last remaining slot. Or patients might be denied health care or have it delayed
because other clients are given first access because they have more money. Relativists might
argue that they are coming out ahead when both parties suffer in a fight, but the other guy suffers
more. Robert Frank (2011) sets out the general case for viewing moral values in relative terms.
The logic of moral engagements presented in this book assumes that value decisions are relative
to the individual making the moral choice, hence both the “absolute” and the “relative”
approaches to valuation work fine.
7
Hobbes’s countryman, John Locke (1689/1924), writing about 30 years later, avoided this
mistake by identifying the State of Nature with rational reciprocal moral agency, a condition that
anyone could fall back on at any time the rules of community were failing. Although Locke was
certainly in the back of the minds of the framers of the American Declaration of Independence, it
is wrong to identify him with laissez-faire economics. He said, in the Second Tract on Government,
“But we are denying that each individual is entitled to do what, in the light of the circumstances,
he thinks is in his best interests.” Locke did have to sit out the end of the Stuart monarchy in
exile in Holland and publish his Two Treatises of Government anonymously, being so far ahead
of his time.
8
According to recent research 27% of e-mail and 26% of phone requests for help from
colleagues working on the same team in an organization go unanswered (Grodal, Nelson, and
Sino 2015).
9
The butcher, brewer, and baker passage appears in Book I, Chapter 2 of Adam Smith’s The
Wealth of Nations (1776/2000); the phrase “invisible hand” itself appears in Book IV, Chapter 2.
A more outrageous position was advanced by Bernard de Mandeville in his
satire with a straight face in the form of a lengthy doggerel poem called The Fable of the Bees:
Or Private Vices, Publick Benefits (1732/1987). Mandeville argued that the vices of the wealthy
should be encouraged because they stimulate the economy.
10
Nassim Taleb’s best seller Fooled by Randomness (2004) is filled with stories of those who
were on top Monday due to chance (somebody had to be), who crowed on Tuesday, and were
thrown out as relatively mediocre on Wednesday (somebody had to be).
11
Trivers (1971) staked out a strong position for altruism decades ago. Confidence in that
position has been steadily eroded since in work by Hammerstein (2003): “Many investigators are
now convinced that the sort of reciprocal altruism first proposed by Trivers may be both rare and
fragile in nature. . . . Doubts persist about whether animals possess the cognitive abilities to
sustain contingent cooperation,” Hardin (1977), Field (2004), and Clutton-Brock (2009).
12
The view that we are responsible for others’ futures in our own image requires that future
generations have a present advocate speaking on their behalf. See the edited collections by
Sikora and Barry (1978) and by Fotion and Heller (1997) and papers by Kavka (1982) and
Broome (1994).
13
Perhaps this line of questioning altruism owes something to Friedrich Nietzsche.
See his The Gay Science (1882/1974) and The Genealogy of Morals (1887/1956).
14
Altruism is currently enjoying popularity as a “theory of morality” on two fronts. Cristina
Bicchieri (1993; and Bicchieri, Jeffrey, and Skyrms 1997) argues for it on the basis of “just so”
reconstructions of evolutionary biology. Patricia Churchland (2011) marshals evidence in
neurobiology for something like an ethics center in the brain. See also Spink (2011) for an
interesting reinterpretation of Mother Teresa’s work.
15
William James (1890/1950 and 1899/2007) argued strongly that it is human nature to give
certain aspects of the flowing stream of consciousness distinct objective reality by calling them
out for some particular purpose. A more contemporary form of this argument is found in Donald
Davidson’s essay “The Individuation of Events” (1967/2006). This view is accepted as the
standard in modern physics. As Werner Heisenberg explained in a public lecture in 1932, “In
every act of perception we select one of the infinite number of possibilities and thus we also limit
the number of possibilities for the future” (1952, 28).
16
Representative sources on the vagueness issue include: Fine (1975), Burns (1991), Raffman
(1994), Soames (1999), Keefe (2000), and Stanley (2003).