A Critique of Line-Drawing as a Method of Case Analysis

Introduction
Charles Harris, Michael Pritchard, and Michael Rabins in their excellent text
Engineering Ethics have recently described a “method for moral problem solving” in
engineering ethics.1 They call the method line-drawing (LD) and they claim that it has
been successfully used to solve difficult moral problems and resolve moral
disagreements. One example they cite is that of the National Commission for the
Protection of Human Subjects of Biomedical and Behavioral Research which, the authors
claim, used something very much like LD to develop guidelines for institutional review
boards at institutions that receive federal funding for research involving human subjects.
By taking this approach, the Commission was able to avert the impasse that otherwise
would have resulted from fundamental ethical and philosophical disagreements among its
diverse membership. Instead of suffering impasse, the Commission reached agreement on
a small set of guidelines for research involving human subjects.2 The LD method, which I
shall describe and illustrate later in this paper, involves first locating a particular “case,”
consisting of the action under consideration and the circumstances under which it would
occur, along a spectrum of cases ranging between “clearly acceptable” and “clearly
wrong” extremes. Then one judges the action’s moral acceptability or wrongness on the basis
of its relative proximity to the two extremes. That is, if the case is closer to the “morally
acceptable” extreme, then the action is judged to be morally acceptable. If the opposite is
true, the action is judged to be morally wrong.
I believe that LD is an admirable attempt to provide practical action guidance for
real life agents, like engineers, who face problematic moral decisions. Many ethicists
oppose any attempt to find such a “decision procedure” in ethics. Some argue that the
morally pertinent details of problematic decisions are too multifarious and complicated to
be accounted for by a general formula. Others hold that moral decision-making requires a
1. Charles E. Harris, Jr., Michael S. Pritchard, and Michael J. Rabins, Engineering Ethics: Concepts and Cases, 2nd ed. (Belmont, Calif.: Wadsworth, 1999).
2. Engineering Ethics, p. 36.
faculty of judgment that cannot be precisely specified. However, moral agents who find
themselves “in the trenches” often need specific advice about what to do. What good is
applied ethics if it cannot offer guidance for individuals who face ethically problematic
choices? An applied ethics that cannot guide action is useless. However, I shall argue that
LD suffers some shortcomings that seriously compromise its legitimacy as a method of
moral problem-solving. I shall then propose an alternate “method of moral problemsolving” that focuses on the fact that various moral and nonmoral uncertainties are almost
always present when agents face difficult moral decisions. How to deal with those
uncertainties in a practical way is the problem that my proposal will attempt to solve. I
shall argue that my method, which I call the “method of rational decision-making under
moral uncertainty,” is superior to LD as a method of moral problem-solving.
Line-Drawing
The authors of Engineering Ethics discuss LD in Chapter 3, entitled “Methods for
Moral Problem Solving,” in which they characterize LD as one of the “techniques for
resolving” “[d]ifficult ethical issues.”3 LD is a moral problem-solving method in the
sense that we are to use it to judge whether a certain action in a particular situation or
case would be “morally acceptable” or “morally wrong.” The “problem” is solved when
we determine whether the action in question is “morally acceptable” or “morally wrong.”
It is important to note that the authors use the term “case” to mean a situation or
set of circumstances in which some individual faces a moral decision together with a
particular choice of action by that individual in that situation. Thus, a different choice of
action by that individual in that situation results in a different “case.” This use of “case”
departs from ordinary usage in which “case” refers to the situation, the individual, and the
moral decision that she faces but not to a particular action in that situation. The authors
refer to a case that involves a “clearly acceptable” action as a “positive paradigm” and to
one that involves a “clearly wrong” action as a “negative paradigm.” If the case at hand—
3. Engineering Ethics, p. 59. In the preceding chapter, the authors introduce and discuss line-drawing as a method for clarifying and applying difficult concepts, like bribery. In this paper, I shall concern myself exclusively with LD as a method for solving moral problems.
the “test case”—involves an action that is closer to the positive paradigm than to the
negative paradigm, then we are to judge it as “morally acceptable”; if the opposite is true,
then we judge it as “morally wrong.”4
According to the LD method, in solving a particular moral problem we identify
the relevant spectrum of cases by attending to certain “features” of the test case. These
features include morally significant aspects of the case. Each such feature is itself located
on a spectrum of related features ranging from “negative” to “positive.” The constellation
of negative features constitutes the negative paradigm while the constellation of positive
features constitutes the positive paradigm. For a particular moral problem, the spectrum
of cases between the negative and positive paradigms is a composite of, or is at least
based on, the various spectra of related features. By first locating each feature of the case
being analyzed on the appropriate spectrum of features, we are in a position to make a
holistic judgment about where the case lies on the relevant spectrum of cases and whether
it is closer to the positive or to the negative paradigm.
The best way to understand LD is to look at an example. I shall discuss the one
given by the authors of Engineering Ethics:
“Suppose Amanda signs an agreement with Company A (with no time limit) that
obligates her not to reveal its trade secrets. Amanda later moves to Company B,
where she finds a use for some ideas that she conceived while at Company A. She
never developed the ideas into an industrial process at Company A, and Company
B is not in competition with Company A; but she still wonders whether using
those ideas at Company B is a violation of the agreement she had with Company
A. She has an uneasy feeling that she is in a gray area and wonders where to draw
the line between the legitimate and illegitimate use of knowledge. How should she
proceed?”5
The “moral problem” is to determine whether it would be “morally acceptable” for
Amanda to use the ideas she conceived at Company A after she becomes an employee of
Company B.
4. Although this aspect of LD is never explicitly stated, it is strongly suggested by the authors’ discussion of a “line of demarcation” between cases in the spectrum. (Engineering Ethics, p. 63)
5. Engineering Ethics, p. 60.
One major step in the LD method is to identify the pertinent spectrum of cases for
Amanda’s test case—the one in which she chooses to use her ideas at Company B.
However, to do so, Amanda must first identify important features of her situation and
compare them against the “key features” of the negative and positive paradigms. For
example, one spectrum of features concerns the degree to which Amanda has received
permission from Company A to use her ideas at Company B. Here the negative feature is
her having previously signed an agreement with Company A not to use her ideas outside
the company. The positive feature is her having received explicit permission from
Company A to do so. Other types of features identified by the authors for this case
include the degree to which Company A and Company B are competitors, the degree to
which Amanda conceived her ideas alone rather than jointly with co-workers, the degree
to which she conceived her ideas during her working hours at Company A rather than
during her off-hours, and the degree to which Company A’s facilities (labs, equipment,
etc.) enabled her to generate those ideas.
The authors represent these spectra of features in the following diagram:6

Negative Paradigm                                          Positive Paradigm
(Clearly wrong)                                            (Clearly acceptable)

Signed agreement             _________________________  Permission granted
A and B are competitors      _________________________  A and B not competitors
Ideas jointly developed      _________________________  Amanda’s ideas only
Ideas developed on job       _________________________  Ideas developed off job
Used A’s lab/equipment       _________________________  A’s equipment not used
Additional negative features _________________________  Additional positive features
        .
        .
        .
The negative paradigm for this case—the related case in which Amanda’s using her ideas
at Company B would be “clearly wrong”—includes the following key features: (1) she
has signed an agreement with Company A not to use her ideas outside the company even
after she leaves its employ, (2) Companies A and B are direct competitors in the market
for some product to which Amanda’s ideas would contribute, (3) Amanda’s ideas were
conceived jointly with her former co-workers at Company A rather than by herself, (4)
she formed her ideas entirely during her working hours at Company A, and (5) Company
A’s facilities (labs, computers, equipment) were instrumental in their generation.
Correspondingly, the positive paradigm—the case in which Amanda’s using her ideas at
Company B would be “clearly acceptable”—is one in which (1) she has received explicit
permission from Company A to use her ideas at Company B, (2) Companies A and B are
not competitors in any market, currently or foreseeably, (3) Amanda’s ideas were
conceived entirely by herself; her co-workers at Company A had no part of their
formation, (4) she came up with her ideas entirely during her off-hours, and (5) she did
not use Company A’s facilities in any way in forming her ideas. The spectrum of cases
consists of the negative and positive paradigm cases and all cases that fall somewhere
between them by virtue of their having features that are intermediate between the
(negative and positive) features of the two paradigms.
The second major step of the LD procedure is to locate the case being examined
among the spectrum of cases between the positive and negative paradigms. This requires
locating the features of that case among the respective spectra of relevant features. The
authors depict the result of this step in the diagram below7:
Negative Paradigm                                          Positive Paradigm
(Clearly wrong)                                            (Clearly acceptable)

Signed agreement             ___X_____________________  Permission granted
A and B are competitors      ___________X_____________  A and B not competitors
Ideas jointly developed      ________________X________  Amanda’s ideas only
Ideas developed on job       ___________________X_____  Ideas developed off job
Used A’s lab/equipment       __________X______________  A’s equipment not used
Additional negative features ___?___?___?___?___?___?__ Additional positive features
        .
        .
        .

6. The diagram appears as Figure 3.1 on p. 62 of Engineering Ethics. I have modified the diagram slightly, leaving out certain labels in order to conserve space. However, I have used the authors’ labels for the positive and negative features.
7. As before, this is a modification of the original diagram, which appears as Figure 3.2 on p. 63 of Engineering Ethics.
The X’s represent comparisons of the key features of Amanda’s test case to the
corresponding features of the positive and negative paradigms. For example, the degree to
which Amanda has received permission from Company A to use her ideas at Company B
is much “closer” to the negative extreme of her having signed an agreement not to do so
than to the positive extreme of her having been given explicit permission by her previous
employer to do so. Perhaps, although Amanda signed no agreement with Company A
covering her ideas, there was a general understanding among its employees that any
potentially marketable knowledge developed during one’s employment should be treated
as proprietary. Similarly, although Company A and Company B do not currently compete
in the same markets, there is a reasonable likelihood that they may do so in the
foreseeable future. It is not difficult to think of other details that would determine the
locations of the other X’s.
The final step is to make a composite assessment of the relative proximity of the
test case to the positive and negative paradigms, based on the relative proximities of its
features to those of the two paradigms. If, on the whole, its features are closer to those of
the positive paradigm than to those of the negative paradigm, then the test case is judged
to be morally acceptable. If the opposite is true, then the judgment is that the action is
morally wrong.
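The composite step can be sketched in code. In this hypothetical rendering, each feature is scored on a 0-to-1 scale (0 = negative paradigm, 1 = positive paradigm) and an unweighted average decides the verdict; the scores below are illustrative guesses at the X positions in the diagram, not numbers given by the authors.

```python
# A minimal sketch of LD's composite judgment, assuming each feature
# can be scored between 0 (negative paradigm) and 1 (positive paradigm).

def composite_judgment(feature_scores):
    """Average the feature scores and compare the result to the midpoint."""
    score = sum(feature_scores) / len(feature_scores)
    if score > 0.5:
        return "morally acceptable"
    if score < 0.5:
        return "morally wrong"
    return "indeterminate"

# Illustrative scores for Amanda's "use ideas" test case, roughly
# matching the X positions in the diagram (invented numbers).
amanda_use_ideas = [0.125, 0.5, 0.625, 0.75, 0.5]
print(composite_judgment(amanda_use_ideas))
```

On these made-up numbers the case sits exactly at the midpoint, mirroring the observation below that Amanda’s test case looks roughly equidistant from the two paradigms.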
Interestingly, the authors do not say whether it would be morally acceptable for
Amanda to use her ideas at Company B. One reason may be that, in their view, only
Amanda herself is in a position to make such a judgment, since only she has intimate
knowledge and understanding of the features of her situation. Another possible reason is
that, as far as one can tell, the test case is equidistant from the two paradigms. On the
whole, the X’s do not clearly appear to be closer to one extreme than to the other. If so,
how does Amanda tell whether her action is acceptable or wrong, morally speaking? The
authors observe that “imposing a line of demarcation between some of the cases in a
series involves an element of arbitrariness.”8 Presumably, this “line of demarcation”
separates cases involving morally acceptable actions from those involving morally wrong
actions. This arbitrariness, they say, often occurs in the policies of companies “and in
some cases professional societies” who, in some instances, may choose where to draw the
line. Therefore, the “exercise of judgment” often cannot be avoided.9 Nevertheless, in the
authors’ view, a spectrum of cases normally represents “real moral differences” among its
members.
One complaint against LD is that it does not adequately recognize that some
features of a case may affect its moral acceptability more than others and therefore should
receive greater weight. However, the authors acknowledge that a case’s features should
sometimes be weighted differently, although the preceding diagrams do not reflect this.
One way of doing so would be to allow the lengths of the lines corresponding to the
different features to have different lengths—longer lines for the weightier features and
shorter lines for the less weighty features. For example, if feature F is twice as weighty as
feature G, then its line would be twice as long. Of course, it may be impossible to say
precisely what the relative weights are. Once again, judgment must be exercised.
There is another objection to the authors’ method that they do not discuss,
however. Consider again Amanda’s problem of deciding whether to use her ideas
generated during her employment at Company A now as an employee of Company B. The
authors discuss only the question of whether Amanda’s using her ideas at Company B
would be morally acceptable. They do not even raise the question of the moral
acceptability of Amanda’s other option—namely, not using her ideas at Company B. Let
us do so. What would the analysis look like? Here the “test case” includes the same
circumstances as the previous test case; the only difference is the outcome of Amanda’s
decision—this time Amanda chooses not to use her ideas at Company B. If we use the
same paradigms, positive and negative, as before, then we appear to get something like
the following diagram:

8. Engineering Ethics, p. 63.
Negative Paradigm                                          Positive Paradigm
(Clearly wrong)                                            (Clearly acceptable)

Signed agreement             ________________________X  Permission granted
A and B are competitors      ________________________X  A and B not competitors
Ideas jointly developed      ________________________X  Amanda’s ideas only
Ideas developed on job       ________________________X  Ideas developed off job
Used A’s lab/equipment       ________________________X  A’s equipment not used
Additional negative features ___?___?___?___?___?___?__ Additional positive features
        .
        .
        .
The reason that this new test case (in which Amanda does not use her ideas at Company
B) coincides with the positive paradigm case is that none of the considerations—degree
of permission, degree of competition between the two companies, degree to which
Amanda collaborated with her former co-workers, etc.—detract from the moral
acceptability of the action being evaluated. For example, even if her former employer had
explicitly permitted Amanda to use her ideas at her new company, it would still be
morally acceptable for her not to do so, it would seem. There does not seem to be any
feature counting against Amanda’s choosing not to use her ideas at Company B.
Let us suppose, for the moment, that the preceding diagram correctly depicts the
second test case in which Amanda chooses not to use her ideas at Company B. How
should this fact bear on Amanda’s decision? One answer, which is at least consistent with
the authors’ analysis, is that she may disregard it altogether. According to this view, all
Amanda need concern herself with is the moral acceptability of the first test case—which
includes her using her ideas at Company B. If, for example, she determines that the first
test case is ever so slightly closer to the positive paradigm than to the negative one, then
she is morally justified, and therefore justified all things considered, in choosing to use
her ideas at Company B. It matters not that the second test case is much closer to the
positive paradigm than the first test case. She need not make comparisons across test
cases.

9. Ibid.
But may Amanda ignore the moral merits of the second test case and the “do not
use ideas” option? That alternative seems superior to the “use ideas” alternative, which, it
appears, is at best only marginally acceptable, morally speaking—assuming of course that
the correct features have been identified for the two cases. But if so, why not select the
“morally better” option? If moral acceptability is the “bottom line,” what justification is
there for choosing a morally inferior alternative? Perhaps the authors would say that the
wrong features have been identified for the second test case. Perhaps, in addition to the
features identified in the previous diagrams, Amanda should consider the consequences
of using or not using her ideas at Company B. For example, if using her ideas at
Company B would enable it to develop and sell a valuable product that would serve
important needs of consumers, then “social utility” may properly have a role to play in
Amanda’s deliberations. If so, then perhaps the diagram for Amanda’s “do not use ideas”
option is the following:
Negative Paradigm                                          Positive Paradigm
(Clearly wrong)                                            (Clearly acceptable)

Signed agreement             ________________________X  Permission granted
A and B are competitors      ________________________X  A and B not competitors
Ideas jointly developed      ________________________X  Amanda’s ideas only
Ideas developed on job       ________________________X  Ideas developed off job
Used A’s lab/equipment       ________________________X  A’s equipment not used
Minimizes social utility     X________________________  Maximizes social utility
Additional negative features ___?___?___?___?___?___?__ Additional positive features
        .
        .
        .
This modified diagram complicates Amanda’s decision, for the features no longer
unanimously and categorically favor the “do not use ideas” option. Of course, the X’s are
still predominantly to the right. But this ignores how much weight Amanda should attach
to the respective features. If Amanda regarded social utility as especially important in this
situation—as much more important than any of the other features—then she might judge
that, on the whole, taking into account the relative weights of the relevant features, not
using her ideas at Company B is closer to the negative paradigm than to the positive
paradigm. The right/left locations of the X’s in the preceding diagram would be somewhat
misleading. Of course, the weight attached to social utility probably should depend on
how much utility is at stake. For example, would the use of Amanda’s ideas at Company
B make a dramatic difference in people’s well-being by effectively treating some lethal
disease, or would it only mildly amuse a relatively small number of customers while
earning a modest profit for the company?
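The weight-sensitivity just described can be made concrete. In this sketch the feature labels, the 0-to-1 positions, and the weight of 10 on social utility are all invented for illustration; the point is only that a heavy enough utility weight drags the “do not use ideas” option toward the negative paradigm even though five of its six X’s sit at the positive extreme.

```python
# Hypothetical weighted version of LD for Amanda's "do not use ideas"
# option. Positions run from 0 (negative paradigm) to 1 (positive
# paradigm); the weights are invented for illustration.
features = [
    # (position, weight)
    (1.0, 1),   # permission is not needed in order to refrain
    (1.0, 1),   # A and B not competitors
    (1.0, 1),   # Amanda's ideas only
    (1.0, 1),   # ideas developed off the job
    (1.0, 1),   # A's equipment not used
    (0.0, 10),  # refraining minimizes social utility; utility weighted heavily
]

# Weighted mean: each feature counts in proportion to its weight.
score = sum(p * w for p, w in features) / sum(w for _, w in features)
print(round(score, 2))  # about 0.33: closer to the negative paradigm
```

With equal weights the same positions would average 5/6, well to the positive side; the heavy utility weight alone flips the overall proximity.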
If social utility applies to Amanda’s “do not use ideas” option, why not apply it to
her “use ideas” option as well? If we do so, we get the following modification of the first
diagram:
Negative Paradigm                                          Positive Paradigm
(Clearly wrong)                                            (Clearly acceptable)

Signed agreement             ___X_____________________  Permission granted
A and B are competitors      ___________X_____________  A and B not competitors
Ideas jointly developed      ________________X________  Amanda’s ideas only
Ideas developed on job       ___________________X_____  Ideas developed off job
Used A’s lab/equipment       __________X______________  A’s equipment not used
Minimizes social utility     ________________________X  Maximizes social utility
Additional negative features ___?___?___?___?___?___?__ Additional positive features
        .
        .
        .
We immediately notice that this modification presents a very different picture from the
initial diagram: one that appears to support the “use ideas” option much more strongly.
We are led to wonder why the authors of Engineering Ethics did not include a “utility”
feature in their analysis of Amanda’s decision. Does their analysis betray an anti-consequentialist bias in their ethical views? And it is worth noting that other “biases”
might be alleged: Why not recognize a “loyalty to one’s employer” consideration that, if
applied to Amanda’s situation, would support her “use ideas” option? After all, Company
B is Amanda’s current employer and it legitimately expects her to perform her job so as
best to serve its financial interests. Of course, loyalty considerations may be taken to
apply also to one’s former employers, so what a “loyalty to employer” consideration
indicates for Amanda’s decision is somewhat problematic. And what about a “serve one’s
own interests” consideration? Should Amanda not take her own interests into account by
considering possible benefits for herself? And it is not difficult to think of even more
possible relevant features.
But to raise such questions threatens to unravel the whole LD approach, which is
predicated on the assumption that moral agents, like Amanda, can identify morally
relevant features in individual cases. As we have just seen, this is doubtful. To ask
whether Amanda should take consequences into account and, if so, which consequences
she should consider—consequences for whom?—threatens to embroil us in the
contentious philosophical debates that LD is supposed to avert. How can we even
construct diagrams for decisions like Amanda’s without first taking stands on
fundamental theoretical issues, such as the relative importance of consequences in
assessing the moral acceptability of one’s alternatives?
At this point, it is useful to recall the (Engineering Ethics) authors’ inspiration for
LD. LD was patterned on the approach that the National Commission for the Protection
of Human Subjects of Biomedical and Behavioral Research followed in reaching
agreement on a set of guidelines, despite the divergent philosophies of its members. The
authors report that, by discussing and comparing a wide variety of cases, the Commission
was able to agree on three “basic areas of concern”—respect for persons, beneficence,
and justice. This consensus on principles enabled the Commission to concur on
guidelines. It would be helpful to know more about the details of their deliberations—
which cases they considered, how they evaluated them, how much disagreement occurred
initially among the members’ evaluations, what sort of discussions ensued, what sorts of
group dynamics were at work in those discussions, how consensus was reached, and how
the individual members regarded their personal decisions to join the consensus. One
possibility is that the consensus on principles represented a compromise that enabled the
Commission to draft a set of guidelines that the members could all “live with.” Perhaps
the members considered it highly desirable for the Commission to publicly proclaim
unanimity so that the scientists whose activities the guidelines would govern would
accept their authority and legitimacy. Another possible scenario is that the Commission’s
discussions actually caused some of its members to abandon their initial philosophical
positions and to adopt new ethical foundations sufficiently compatible for them to agree
on the guidelines. Or maybe some combination of these forces—desire to proclaim
consensus and revision of basic philosophical views—effected the final outcome.
These questions about how the Commission managed to achieve consensus are
important here, because it is difficult to see how the “desire for consensus” explanation
could ground a method of moral problem-solving for individual moral agents, like
Amanda. Whom would an individual decision-maker need to compromise with? Of
course, Amanda must reach “consensus” in the sense that she must do something or other.
Either she uses her ideas at Company B or she doesn’t. But this interpretation of
“consensus” would imply that whatever Amanda decides is morally justified, and this is
hardly plausible. Moreover, the “desire for consensus” explanation of the Commission’s
actions raises questions about whether its deliberations were reasonable or valid.
Sometimes the desire to reach consensus leads to “groupthink” instead of sound
reasoning and trustworthy results. On the other hand, what if consensus was achieved by
changes occurring in the basic philosophical views of the members? Perhaps detailed
discussion of the cases produced enough such changes that the entire Commission was
able to reach a consensus on ethical foundations. This is conceivable but not very likely.
What are the chances that, say, a strong proponent of doing the greatest good for the
greatest number and a strong advocate of the Golden Rule would resolve their
philosophical differences by examining and discussing cases? Would they be able to
agree on what is right and wrong for every case they considered? Would not their
respective verdicts on cases likely conform to their initial utilitarian and Kantian moral
principles and therefore occasionally clash?
Of course, people sometimes abandon moral principles whose implications for a
particular case they cannot accept. But sometimes people disagree about cases because of
underlying philosophical disagreements. We should recall that one of the claimed
advantages of LD is that it enables decision-makers, like Amanda, to bypass contentious
theoretical issues. If Amanda and others who face ethical problems can generally be
assumed to know what kinds of features are ethically significant, what the ethically
significant features of their situations are, and what weights should be attached to each
such feature in judging the moral acceptability of their options, then it is difficult to see
why this would not be tantamount to resolving the underlying philosophical and
theoretical issues themselves. Thus LD seems to imply that those fundamental
philosophical and theoretical issues can generally be resolved. But we know from
experience that this is not so. And if it were so, why would we need LD in the first place?
I conclude that LD fails as a method of moral problem-solving. It fails because it
does not make good on its claim to circumvent the problem of ethical foundations. If
Amanda already knows whether consequentialist considerations are ethically relevant for
her decision and how they should be weighed against other considerations, then she
already knows, at least implicitly, how to resolve some very difficult and controversial
philosophical issues. However, if Amanda is at all sophisticated in her ethical thinking,
she is likely to be troubled by the same questions that ethicists are about what factors are
ethically significant and how those factors should be weighed against each other. She is
likely to have uncertainties that would prevent her from applying LD to her situation.
What then should Amanda do? How should she deal with her ethical and
philosophical uncertainties? In the next section, I shall propose answers to these
questions. I shall argue that Amanda should do what is most likely, given her uncertainties,
to be morally acceptable. By doing so, she will act rationally under moral uncertainty.
Rational Decision-Making under Moral Uncertainty
Let us return to the case of Amanda who is deciding whether to use her ideas at
Company B. Let us suppose that, in addition to the original hypotheses about her signed
agreement with her previous employer, Company A, the degree to which her ideas were
generated collaboratively with her former co-workers, the degree to which they were
generated during working hours, etc., we accept the hypothesis that Amanda is uncertain
about the role that “social utility” should play in her decision. Of course, Amanda is not
likely to know for sure how much social utility would be created by her using her ideas at
Company B—for example, whether her ideas would encounter insuperable technical
difficulties during the product development phase, whether the product would succeed in
the market place, whether its use by consumers would reveal unforeseen drawbacks, etc.
Let us make the simplifying and unrealistic assumption that these sorts of uncertainties do
not occur and that Amanda has a firm social utility assessment for each of her two
options. Her uncertainty is whether the moral acceptability of her options hinges on her
relationship to her former employer or on social utility.
Let us suppose that, in Amanda’s mind at least, there are two possible bases for
her decision: (1) her relationship to her former employer, Company A, and moral
obligations to Company A deriving from that relationship and (2) social utility. Let us
refer to (1) as “fairness to Company A.” Amanda’s uncertainty is whether fairness to
Company A or social utility should determine her choice of action. Let us also assume
that Amanda has determined that using her ideas at Company B would generate much
more social utility than not doing so. How might Amanda assemble all this information
into a rational decision about what to do? But what do we mean by “rational” here? After
all, we are assuming that Amanda’s primary concern is to do what is morally acceptable.
Unfortunately, she cannot determine with certainty whether her options are morally
acceptable or not. The best she can do is to maximize the likelihood that her action will
be morally acceptable in light of the information available to her. To do so she must
consider probabilities. Let us pretend that there is a .70 probability that fairness to
Company A should determine her decision. If that were the only pertinent probability,
then it would be reasonable for her to do what fairness to Company A requires. However,
let us recall the first LD diagram and the scattered locations of the X’s. It was difficult to
say for sure whether, on the whole, the X’s were closer to the positive paradigm case or to
the negative one. This may mean that Amanda is very unsure whether the “fairness to
Company A” consideration prohibits using her ideas at Company B. Furthermore, let us
pretend that there is a .60 probability that fairness to Company A prohibits using her ideas
at Company B and a .40 probability that it does not. Amanda’s decision may be
represented by the following decision table:
                      “fairness to Company A” is the relevant       social utility is the
                      moral consideration (.70)                     relevant consideration (.30)
                      ----------------------------------------------
                      allows using ideas      proscribes using
                      at Company B (.40)      ideas at Company B (.60)

use ideas at          morally acceptable      morally wrong         morally acceptable
Company B

do not use ideas at   morally acceptable      morally acceptable    morally wrong
Company B
It turns out that the probability that Amanda’s using her ideas at Company B would be
morally acceptable is 0.58 while the probability that not using her ideas at Company B
would be morally acceptable is 0.70.10 Therefore, the option that is likelier to be
morally acceptable is not using her ideas at Company B. Of course, the probability that using
her ideas at Company B would be morally acceptable is greater than 0.50—i.e., there is a
better than even chance that using her ideas at Company B would be morally acceptable.
But, as I argued before, if Amanda’s overriding purpose is to do what is morally
acceptable, she should do what has the greater likelihood of fulfilling that purpose.
Obviously, I have made a lot of simplifying assumptions in this analysis.
Amanda’s beliefs and uncertainties, both ethical and non-ethical, are likely to be much
more complicated than represented by the above table. There may be factors other than
“fairness to Company A” and social utility that she regards as potentially morally
relevant. She may not have a firm social utility assessment and may see the need to
consider several sets of possible consequences of her decision, each associated with its
own probability of occurrence and its own social utility. She may regard both “fairness to
Company A” and social utility as morally significant with each carrying a particular
relative weight for her situation. And she may be uncertain what those relative weights
are. Furthermore, she is not likely to be able to assign specific probabilities to all the
relevant factors—for example, to say that the probability that “fairness to Company A” is
the relevant moral consideration is precisely 0.70. Therefore, the decision table for
Amanda’s decision may be quite a bit larger and more complicated than the table above.
However, a more realistic, more complicated table could still be used to produce
calculations of the probabilities of the moral acceptability of Amanda’s options, and our
decision principle that moral agents should choose the action that is most likely to be
morally acceptable could still be applied. There is even a way around the problem of
assigning specific numbers to probabilities. I shall spare you the mathematical details,
but the idea is to use ordinal probability measurements rather than cardinal
measurements. That is, we measure the probabilities in comparison with each other
rather than on the standard 0-to-1 scale. Another refinement is to recognize and
consider degrees of moral acceptability/moral wrongness.11 Of course, Amanda is likely
to have neither the time nor the inclination to perform a complicated mathematical
analysis before acting. However, even without doing so, she may have a definite opinion
about which of her options is more likely to be morally acceptable, given her moral
uncertainties and the information available to her.

10 The total of the two probabilities is greater than 1.0 because under one set of
circumstances, represented by the leftmost column of the table, both of Amanda’s
options would be morally acceptable.
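The scaling point can be sketched computationally: however many rows and columns a more realistic table acquires, each option’s probability of being morally acceptable is a sum over the mutually exclusive circumstances in which it would be acceptable, and the decision principle selects the option that maximizes that sum. In this minimal sketch the three states and their joint probabilities come from the simple table above; the data representation is my own, not the paper’s:

```python
# Sketch: applying the decision principle to an arbitrary decision table.
# Each state is a mutually exclusive circumstance with a probability and a
# verdict ("ok" or "wrong") for every option.
states = [
    # (probability, {option: verdict})
    (0.28, {"use": "ok",    "refrain": "ok"}),     # fairness relevant, permits use (.70 x .40)
    (0.42, {"use": "wrong", "refrain": "ok"}),     # fairness relevant, proscribes use (.70 x .60)
    (0.30, {"use": "ok",    "refrain": "wrong"}),  # social utility relevant
]

def acceptability(option):
    """Probability that `option` turns out to be morally acceptable."""
    return sum(p for p, verdicts in states if verdicts[option] == "ok")

# Decision principle: choose the option most likely to be morally acceptable.
best = max(["use", "refrain"], key=acceptability)
print(best, round(acceptability(best), 2))  # refrain 0.7
```

A larger table changes only the contents of `states`; the principle applied to it is unchanged.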
Conclusion
Some virtues of the method of rational decision-making under moral uncertainty
are (1) that it recognizes and takes into account the moral (and nonmoral) uncertainties
that moral agents, like Amanda, typically encounter and (2) that, despite those
uncertainties, it solves their practical decision problems by delivering action-guidance. I
have argued that LD fails to achieve (1) altogether and achieves (2) only for special cases
in which moral uncertainty is not involved. Therefore, I submit, the method of rational
decision-making under moral uncertainty is superior to LD as a method of moral
problem-solving and is the one that moral agents should adopt.
11 A full discussion of the procedure is given in my recent book, Moral Uncertainty and
Its Consequences (New York: Oxford University Press, 2000).

Ted Lockhart
Michigan Technological University