UNIVERSAL MORAL GRAMMAR AND THE “DOUBLE EFFECT” IN MORAL INTUITION IN ORGANIZATIONAL CRISIS
George W. Watson
Associate Professor
Department of Management and Marketing
School of Business
Southern Illinois University Edwardsville
Edwardsville, Illinois 62026
(V) 618-650-2291
(F) 618-650-2709
E: gwatson@siue.edu
Moral Intuition
MORAL INTUITION IN ORGANIZATIONAL CRISIS
Abstract
This paper addresses the gap between moral reasoning and moral intuition as it
manifests during ethical dilemmas in organizational crisis. We begin with the
dual-process cognitive framework of moral reasoning and investigate the
processes and impact of the Universal Moral Grammar model of moral intuitions
on permissibility judgments. We find that rational moral reasoning can be preempted by moral
intuitions. Negative intuitions emerge as the result of the means by which a given end is
accomplished, even when that end is morally justifiable, the consequences are identical, and the
intentions are equivalent. Our purpose is to
articulate, test and evaluate the theory of moral intuition inspired by Rawls’
grammaticality analogy. Subjects (N=438) from all walks of life responded to the
permissibility of various actions while the consequences and intentions of these
actions were held constant. By empirically evaluating the principles of double
effect, moral dumbfounding and insensitivity to the magnitude of the
consequences, it is concluded that intuitions play a significant role in
permissibility evaluations in morally-relevant behaviors in organizational crises.
Keywords: Universal Moral Grammar, Moral Intuition, Moral Reasoning
MORAL INTUITION IN ORGANIZATIONAL CRISIS
Intuitions in general, and moral intuitions in particular, have a checkered history
(Sonenshein, 2007). Although some organizational scholars caution us not to trust our “gut
feelings,” we are conversely advised not to ignore them (Bonabeau, 2003:42). For example,
organizational ethicists counsel us to “check your gut” prior to deciding upon a morally relevant
course of action (Trevino & Nelson, 2007). Moreover, while some argue that moral intuitions
are not trustworthy (Baron, 1995), other researchers assert that the effective use of intuition can
make the difference between a successful leader and an average one (Dane & Pratt, 2007).
Despite these disparate opinions concerning the reliability and importance of intuition,
the foregoing admonishments implicitly acknowledge two important and under-researched
aspects of ethical decision making and moral reasoning: First, we recognize something
psychological is transpiring during moral reasoning that is less accessible to our consciousness
than the rational deliberations offered by Kohlberg (1984) and Rest (Rest, Narvaez,
Bebeau, & Thoma, 1999). Second, despite our best efforts to render clear and rational moral
justifications for our actions, a vague moral intuition may be the deciding factor tipping the scale
in favor of one course of action versus another. Nevertheless, this “dual process” conception of
reasoning and decision making – one that is conscious and rational and the other that is
subconscious and intuitive – is rarely addressed in organizational moral behavior (see
Sonenshein, 2007; Watson & Love, 2007 for exceptions).
One impediment to theory development and testing of non-conscious phenomena in
organizational moral behavior is the field’s primary emphasis on Kantian, or rational,
models of moral deliberation (Kant, 1784/1990). In addition, there is ample confusion (as
outlined above) about the value of, appropriate application of, and justification for acting on
moral intuition. As Sonenshein (2007) has observed, “rational approaches have flourished” due
in part to a lack of alternative perspectives. Consequently, one primary contribution of this paper
is to further specify and test a theory of moral intuition in the context of organizational crisis.
Organizational crisis is an appropriate context for evaluating intuition because decisions must
frequently be made under conditions of bounded ethicality (Bazerman, 1998). To this end, a
definition of moral intuition is offered, theory is applied and empirical testing is conducted in
order to advance our understanding of “gut feelings” in the moral decision making process.
Models of Moral Intuition Defined
In general, a gut feeling that arises during interpersonal or intrapersonal deliberations
about a moral issue can be considered a particular type of intuition and categorized as a moral
intuition. Dane and Pratt (2007:33) define general intuition as an affectively charged judgment
that arises through rapid, non-conscious, and holistic associations. Following this model, we
define moral intuitions, as they apply to organizational crisis, as affective moral responses
arising from the rapid, non-conscious linking of moral factors (principles, intentions, behaviors
and outcomes). This definition is consistent with that found in recent organizational literature arguing
that moral intuition is an automatic and immediate reaction to what the person has constructed as
the good and bad aspects of a situation (Dane & Pratt, 2007; Sonenshein, 2007).
There is, however, no single, broadly accepted model of moral intuition. In fact, Dane and
Pratt (2007) list 17 different offerings in the literature, and these exclude definitions specific to
ethical decision making. The contending theories of moral intuition vary primarily in the
temporal sequence of the non-conscious processing of morally relevant information. For
example, models that rely upon the Humean version of human nature emphasize the triggering
role of emotions in spurring a single process moral response (Hauser, 2006b; Hume, 1751;
Smith, 1987). In the Humean model, a person witnesses an emotion-provoking event that
rapidly summons a cognitive response (Haidt, 2001). Any conscious deliberation
regarding the decision is conducted after the intuitive response and takes the form of selecting
justifications for the decision.
Alternatively, Kamm (2000), building upon the philosophies of Ross and Kant, suggests
a dual-process model of moral intuition that includes both intuitive and rational processing.
Kamm describes a person who develops and tests intuitively emerging principles (e.g., do not
kill) by comparing the implications of these principles. If the principle and the behavior or
judgment seem compatible, then one rationally progresses toward justifying a course of action.
The one model presented in the organizational ethics literature (Sonenshein, 2007) is also
a dual-process model involving a person constructing meaning of a situation, the subsequent
production of a moral intuition, and ultimately, a justification and explanation for a morally
relevant outcome. A common theme among Sonenshein’s model and others is that rational
justification for a course of action occurs only after an intuitive judgment is made.
A third model relies on the Rawlsian view of human nature (Hauser, Cushman, Young,
Jin, & Mikhail, 2007). This version depicts a person as perceiving an event, performing a
subconscious analysis of that event, then simultaneously producing an emotional response and/or
a rational justification. This model is called the Universal Moral Grammar model (UMG). In the
UMG, affect is not the triggering component; it is an outcome of intuitive cognitions that are
innate and have had functional importance to human survival throughout the millennia. In part,
this is because human survival has required cooperation and altruism (Cosmides & Tooby, 2005,
2008; Harman, 2000; Wright, 1994).
An additional common theme among these models is that intuitive cognitions are
deontological in nature. That is, a person perceives a stimulus and automatically applies a
behavioral rule to it, rapidly and spontaneously, without the deliberative calculations necessary
for moral rationality. This deontological characteristic is an important aspect of moral intuition
recognized by both social psychologists working in the areas of moral psychology (Hauser,
Cushman, Young, Jin, & Mikhail, 2007; Mikhail, 2007, 2008) as well as those researching
“protected values” (Baron & Spranca, 1997; Ritov & Baron, 1999). The implication of this
deontological, innate nature is that these rules are hard-wired and naturally occurring, universal
components of the moral mind.
Moreover, we are interested in the ways subjects perceive behaviors undertaken during
times of organizational crisis. The pressures of organizational crisis often challenge moral
intention through constraints on response time, the motivation to maintain personal and corporate
public image, potential legal liabilities, and other organizationally and socially important
considerations. The difficult contexts of crisis, however, appear not to alleviate the manager’s
accountability for moral behavior. This assertion is borne out by the experiences of Morton
Thiokol and the Challenger disaster, Union Carbide and the Bhopal disaster, Exxon’s Alaskan oil
spill, and more recently, lead paint triggering massive toy recalls for Mattel and other toy
manufacturers.
To better understand the role of moral intuition in these types of circumstances, we begin
with a review of the dominant model of moral psychology and position moral intuition relative to
that model in ways consistent with universal moral grammar (UMG) and the dual-process theory
depicted in Figure 1. The theoretical background for UMG is then described. We then test the
model, report the results, and examine why it should provide fertile ground for research and
insight into perceptions of ethicality in organizational settings.
Conscious Moral Deliberation
Since the mid-1980s the dominant model of moral psychology in organizational ethics
has emphasized a person’s rational cognitive abilities. Cognitive developmental psychology is
the science supporting stage theories of moral development (Kohlberg, 1984; Piaget, 1965; Rest,
1980). Cognitive moral development, or CMD, rests on three fundamental assumptions. These
assumptions tend to link CMD with Kantian (Kant, 1784/1990) or deontological precepts. The
first is rationality; under this assumption human moral behavior is governed by considerations of
reason and life-enhancing principle. This reasoning takes place as conscious, internal
deliberation, as one autonomously seeks out the optimal moral solution to a social or personal
dilemma. The second assumption of CMD is internalism, or the conviction that the driving
motivation for moral behavior is internally and authentically derived. Internal derivation of right
and wrong is unencumbered by society’s moral shortcomings and obligates the person to see
beyond provincial ideals and ethos to broader universal principles (Arendt, 1963).
Third, analysts in this paradigm assert that the justifications expressed in support of a
particular course of action are proxy indicators of a person’s capacity for moral reasoning. That
is, if I justify my course of action on the grounds that this action is better for society as a whole
rather than on the grounds that it is simply better for me, I have theoretically demonstrated a
higher level of moral reasoning. These differences in justifications represent a quantifiable
indication of moral reasoning. Consistent with Kant (1784/1990) and Rawls (1971), this
emphasis on justifications reflects a deontological morality which holds that certain ideas of right
and wrong reflect greater moral reasoning capacities. Moreover, the degree to which I am
consistent in rendering these higher order justifications indicates my overall level of moral
development is at a more advanced stage. In turn, my level (or stage) of moral development
should reasonably predict how I would think and perhaps act when confronting future moral
dilemmas.
In fact, Rest et al.’s (1999) four-component model appears to be the most broadly
accepted and frequently applied model in ethics research; literally hundreds of studies have
applied this framework (Loviscky, Trevino, & Jacobs, 2007). The model describes a theoretical
linear progression from conscious apprehension of a moral problem through a reasoned action
responsive to that problem. This process includes moral sensitivity, or interpreting the situation
in ways that allow one to imagine how various actions might affect others; moral judgment, or
deciding which action among alternatives would be most justifiable from a principled moral
perspective; moral intention, or a commitment to taking a moral action; and moral action, or
implementing routines that support the moral intention. Although this model is intended to describe the process of how
we reach a moral conclusion, we are just now probing the way the mind goes about coming to a
moral conclusion (Young, Cushman, Hauser, & Saxe, 2007).
Rawls’ Grammaticality and Moral Intuition
Theoretical Framework
Broadly, those who study emergent moral intuitions characterize them as
deontological principles about how people are treated in a given circumstance (Hauser, 2006). Of
particular interest in our study and previous studies is whether people’s injuries are the result of
being treated as means to an end, or whether their injuries are the unintentional side-effects that
occur in the process of achieving a desired end. Simply put, a common deontological principle
states that it is impermissible to injure people while they are being used as means to an end
(Kant, 1784/1990). Conversely, moral psychologists have found that it is generally more
permissible to cause the identical harm for the identical ends if the injuries to people are
unintentional and a side-effect of being in the wrong place at the wrong time while embroiled in
a serious social dilemma. A further comparison of the rational versus intuitive models along
several dimensions is provided in Table 1 (Hauser, Cushman, Young, Jin, & Mikhail, 2007).
==========================
INSERT TABLE 1 ABOUT HERE
==========================
Consequently, a further theoretical specification of moral intuition is needed to explain how
such intuitions develop non-consciously. Rawls’ (1971) work is well established in social
contract theory for its emphasis on justice and fairness. Moral psychologists concerned with a
moral intuition that is both evaluative and subconscious have emphasized a different aspect of
the Theory of Justice: Rawlsian grammaticality (Rawls, 1971). This dimension has been
described as the “machinery to deliver moral verdicts based on unconscious and inaccessible
principles. That is [we are] a creature with moral instincts” (Hauser, 2006a:106). This research
engages the Rawlsian ideas of dialogic and reflective equilibrium but, more so, Rawls’ analogy
of moral grammaticality when coming to semi-autonomous moral conclusions (Rawls, 1971:42):
… the aim [of grammar] is to characterize the ability to recognize well-formed sentences by
formulating clearly expressed principles …. A similar situation presumably holds
in moral philosophy. There is no reason to assume that our sense of justice can be
adequately characterized by familiar common sense precepts, or derived from the
more obvious learning principles. A correct account of moral capacities will
certainly involve principles and theoretical constructions which go beyond the
norms and standards cited in everyday life.
If everyday norms and standards, common sense or social learning are not the sources of moral
reasoning, theories that emphasize innate capacity become a more appealing feature of the moral
psychological landscape. Behind Rawls’ veil of ignorance, for example, where one’s
relationships and abilities to establish relationships are unknown but the consequences of those
life associations and personal abilities can be imagined, the ability to innately sense moral
implications becomes an important human capacity.
Universality in Moral Intuition
Regardless of the particular model of moral intuition, a common theoretical claim is that moral intuitions are universal.
The communicative and cooperative theories of Vygotsky (1978) have more recently
aligned with the theoretical foundation of a linguistic analogy (Moll & Tomasello, 2007). As
Moll and Tomasello (2007) point out, social competition has created the evolved requirement for
cooperative behavior. Moreover, both Moll and Tomasello (2007) and Warneken and
Tomasello (2006) have found that pre-linguistic humans (i.e., fourteen-month-olds), as opposed to
other non-linguistic primates, quite readily help each other in altruistic and cooperative ways – a
behavior which requires an understanding of another’s goals and an altruistic motivation to assist
the other in reaching them.
When arguing in support of a Universal Moral Grammar, scholars draw several
fundamental parallels between moral intuition and Chomsky’s grammatical theory of language
(Chomsky, 1998). The first principle is “poverty of the stimulus.” Humans tend to hear and
parse sentences automatically, looking for patterns in the form of rule-based
sequences so that the communication can be understood. This is most easily exemplified by
people with dyslexia, a malady that causes letters within words to be jumbled. The human
mind can still make out the meaning of those words if they are in a common grammatical
structure; however, if both the letters and the words are out of sequence, then meaning becomes
virtually impossible. This same process is applied in universal moral grammar: People rapidly
perceive the disjunction between rule-based behaviors and factual violations of those behaviors.
Moral psychologists apply this same argument to moral reasoning among people. For
example, they argue that children exhibit behaviors that comply with moral precepts that could
not have been previously modeled or made explicitly clear in the context they are in (Mikhail,
2007). That is to say, most children will spontaneously cooperate, exhibit altruistic behavior,
and will not purposely physically harm others. Consequently, moral intuitionists conclude that
there is likely to be some innate moral capacity that is hard-wired into humans. Their argument is
punctuated by the severity of the evolutionary selection process and the need for appropriate
interpersonal cooperation to overcome threats to survival. Simply put, collectives
characterized by cooperation and altruism were more likely to hunt successfully, to rely on
others for assistance in times of danger and injury, and to act in the welfare of others
(Cosmides & Tooby, 2005, 2008; Harman, 2000; Hauser, 2006a; Hauser, Cushman, Young, Jin,
& Mikhail, 2007; Petrinovich, O'Neil, & Jorgensen, 1993; Wright, 1994).
The universality of the grammatical model suggests that there should be no differences in
responses to moral dilemmas that correlate with gender, age or racial background. Unlike moral
reasoning, where a number of substantive arguments have been made against a universal approach
(Forsyth, Nye, & Kelley, 1988; Gilligan, 1982), universal moral intuitions should not differ.
However, the universality of intuitive responses has rarely been examined (see Hauser,
Cushman, Young, Jin, & Mikhail, 2007 for an exception), and according to our review,
universality of intuitive moral responses to organizational crisis has not been tested. As a result
we want to examine whether important demographic variables may correlate with intuition.
Beyond the important traditional variables of race, socio-economic background and age,
variables that reflect a person’s experience with work and management may, over time,
inexorably alter one’s thinking about appropriate behaviors. These opinions can be sharply
distinct from other sub-populations. A person with extensive managerial or work experience, for
example, may hold differing beliefs or responses about right and wrong than one with no work or
managerial experience. UMG, however, is expected to be a fundamentally innate human
cognitive process and consequently life experiences, even long term experiences, should not
have a substantive impact on that process. In order to build upon the notion that UMG is, in fact,
universal, we hypothesize that there will be no statistical differences in subjects’ responses to
scenarios based upon the demographic variables of age, gender, socio-economic status, length of
work experience, or length of managerial experience.
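This invariance hypothesis lends itself to standard group-comparison tests. The sketch below is our illustration only; the ratings and the experience-based grouping are hypothetical, not the study’s data. It computes a one-way ANOVA F statistic across groups, where a small F (well below the relevant critical value) is consistent with universality:

```python
from statistics import mean

def one_way_anova_F(groups):
    """F = between-group mean square / within-group mean square."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k = len(groups)                 # number of groups
    n = len(all_vals)               # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical permissibility ratings grouped by, e.g., years of work experience.
low_exp = [4, 5, 4, 3, 5]
mid_exp = [5, 4, 4, 4, 5]
high_exp = [4, 4, 5, 3, 4]
F = one_way_anova_F([low_exp, mid_exp, high_exp])
# A failure to reject (small F) would be consistent with, though not proof of,
# the universality hypothesis.
```

Note that a non-significant F supports invariance only in the weak sense of failing to detect differences; equivalence testing would make the stronger claim.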
The Existence of Moral Intuition
Although he recognizes the topic, Kohlberg (1984) neither explicitly defines what a
moral intuition is nor describes how we might detect it and differentiate it from a rational
thought. His characterization of moral intuition as a mechanism for conscious prioritization
clearly contradicts the major theoretical propositions of non-consciousness in more recent
literature of moral psychology as it is outlined above (Dane & Pratt, 2007; Hauser, 2006a;
Sonenshein, 2007). Kohlberg sees moral intuition as a resource that can be applied and privately
consulted in some conscious way. This characterization is not entirely foreign to organizational
science, in fact, some organizational ethicists accept this position (e.g. Trevino & Nelson, 2007).
We are able, on this argument, to check “our gut” for confirmation or re-direction. This places
greater weight on the conscious usefulness of moral intuitions than either Dane and Pratt (2007),
Bonabeau (2003), or Baron (1995) seems likely to endorse.
Alternatively, perhaps an intuition can be a simple feeling that is either “good” or “bad,”
that we should not ignore. For example, Hauser’s (2006) model solidly places moral intuition at
a non-conscious level. Universal Moral Grammar is consistent with this assumption and offers
several characteristics of the behavior itself that influence the nature of our intuitive
responses. These behavioral characteristics include the degree to which the harmful consequences
were reasonably foreseeable, whether these outcomes are intentional or merely a side-effect, and
the nature of the foreseeable harm.
The model of Universal Moral Grammar is depicted by the un-shaded boxes in Figure 1.
As the figure indicates, we posit a dual-process model of moral reasoning, one process that is
subconscious and intuitive, and one process that is conscious and rational. What triggers intuitive
thinking in this model are the incongruities between expected patterns of moral
behavior and the perceived level of morality. What UMG has in common with other models of
intuition is that conscious moral reasoning is depicted as a post-hoc process of justifying what
has been intuitively pre-established as right or wrong.
Among the empirical challenges facing researchers in non-conscious intuition is the
detection of something we can reasonably label intuition, so that it can be distinguished from
other, perhaps more consciously reasoned, cognitive responses. To this end, scholars have thus
far applied two criteria to identify moral intuitions, and to these we contribute a third. The two
previously applied techniques are the principle of double effect and moral dumbfounding. To
these we add the third component of insensitivity to the magnitude of the consequences.
In the following sub-sections we elaborate upon these indicators. Yet, perhaps as central
as these indicators are, the fundamental assumption of a dual-process model, such as those
described by Sonenshein (2007), Watson and Love (2007), and others, requires that rational
moral reasoning exhibit the qualities of moral justification by principle, accountability for the
consequences of one’s behaviors, and consistency in how moral principles are applied.
Because there are only two cognitive processes (unconscious and intuitive versus conscious and
reasoning), when a subject’s moral response does not evidence rationality it is, modus tollendo
ponens, intuitive [this is the disjunctive syllogism that “affirms by denying” and is of the form
A ∨ B, ~A, .·. B].
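One way to write out the inference underlying this claim (a formalization we supply, with R standing for “the response is the product of conscious reasoning” and I for “the response is intuitive”):

```latex
% Premise 1 (dual-process assumption): every moral response is one of the two
R \lor I
% Premise 2 (observation): the response does not evidence rationality
\lnot R
% Conclusion, by disjunctive syllogism: the response is intuitive
\therefore I
```

The inference is only as strong as the exhaustiveness of the dual-process premise: if a third cognitive route exists, absence of rationality would not entail intuition.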
The Principle of Double Effect
When a moral agent acts, there are generally two central ethical considerations: what is
the agent’s intention, and what is the outcome of the act? If an agent has a choice between two
otherwise ethical acts, and her intentions are good and the outcomes are identical, then we
generally hold that either choice is morally acceptable. The principle of double effect (PDE),
however, is a moral psychological and legal construct which holds that an otherwise prohibited
act may be justified if the harm is not the means to a greater good (intentional), but rather a
foreseen side-effect (Hauser, 2006). The principle differentiates between intended harms as a
means to an end, and harms that are foreseen but not intended. According to this doctrine, the
former is not morally acceptable but the latter is. McIntyre (2001) presents a timely (if rather
gruesome) example: A doctor believes that abortion is wrong, even in order to save the mother’s
life. He may also believe that it would be acceptable to perform a hysterectomy on a pregnant
woman with ovarian cancer, causing the death of the fetus. In this case, it is cancer threatening
the woman’s life and not the pregnancy. In an alternative scenario a woman is pregnant with a
hydrocephalic child and neither the mother nor the child would survive a cesarean procedure. It
is not permissible to crush the child’s skull, thereby killing it, to pull it through the birth canal
and save the mother’s life. In this latter case, killing the child is an intentional means to saving
the woman, whereas in the former scenario, killing the child was a side-effect. In other words, if
the PDE is in play, it may be morally permissible to cause a foreseen but unintended harmful
side-effect to actuate the greater good of saving one life, but morally impermissible to cause the same
harm as an intended means to an identical end (McIntyre, 2001). When the consequences and the
intended ends are equivalent, the primary variable becomes the means of achieving the end. The
fetus dies in either case, the woman lives in either case, but crushing the child’s skull causing its
death is somehow, and we think intuitively, objectionable.
In research involving the PDE, a vignette (normally the trolley problem) is systematically
varied in such a way that the same greater good is a result of a harmful act. In one case the
harmful act is the means to the greater good; in the other case the harmful act is a foreseen
side-effect. A relationship between one’s responses of moral permissibility and the variation in the
intention/side-effect factor suggests the principle is in effect and that moral assessments are
generated as an intuitive response to an actor’s behaviors, not his or her intentions or the
identical outcomes. Consequently, in the present research we systematically vary the harm, in
two different scenarios, as (1) an intentional means to a better end or (2) a foreseen side-effect.
Patterns of a relationship between behaviors in our scenarios and subjects’ permissibility
responses are theoretically based upon intuitive, grammar-like models of acceptable behaviors.
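The logic of this test can be sketched computationally. The ratings and names below are hypothetical illustrations we supply, not the study’s materials: intentions and outcomes are held constant, only the means/side-effect factor varies, and permissibility ratings are compared across the two conditions:

```python
from statistics import mean, stdev

# Hypothetical 7-point permissibility ratings for the same crisis vignette.
# Only the behavioral factor differs; intentions and outcomes are constant.
harm_as_means = [2, 3, 2, 1, 3, 2, 4, 2]        # harm is the means to the end
harm_as_side_effect = [5, 4, 6, 5, 4, 5, 6, 4]  # harm is a foreseen side-effect

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

t_stat = welch_t(harm_as_means, harm_as_side_effect)
# A reliably negative t (the means condition judged less permissible) is the
# signature pattern predicted by the principle of double effect.
```

In practice one would also report degrees of freedom and a p-value; the point here is only that the PDE prediction reduces to a directional contrast between two otherwise identical conditions.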
One assumption implied by this principle is that the mind more rapidly or efficiently
pattern-matches perceptions of external stimuli with internal models of moral permissibility
based upon the seen behaviors of the actor(s) rather than the outcomes of the acts or the unseen
intentions. In other words, consistent with universal moral grammar, models of behavior are
rapidly compared with circumstances of permissibility, allowing subjects to recognize a
difference in the morality of behaviors where the double effect is possible.
We seek to extend the possibility of intuitive responses to scenarios of organizational crisis.
Organizations, however, are complicated settings, often with unclear lines of accountability and
ambiguous responsibilities to stakeholders (Lawrence, Weber, & Post, 2005). It is also apparent
that harms perpetrated by large organizations may be vastly greater than those caused by
single individuals. At the present state of the science, research is unclear as to whether
complications of ambiguity, responsibility, and accountability will cause subjects to rely more or
less on intuitive responses.
It is important to our study to demonstrate that the PDE is in effect so that we can assert
that there are differences in perceived morality based neither on outcomes nor on intentions.
The types of scenarios that we present to the subjects are described in Table 2. As the table
indicates, we vary whether the outcomes were foreseen or unforeseen (the act itself) and the level
of these positive consequences across the focal cases of a fire at a manufacturing plant and
finding a cure for a contagious disease. In addition, we vary the harms to others as either the
direct means to the greater good or the foreseen side-effects of acts necessary to achieve the
greater good. We expect to see that it is permissible to act to cause the negative outcomes as
side-effects, but not as means, even if the positive and negative outcomes are identical and the
intention of the behavior is equivalent. The other cases listed in Table 2 are manipulation checks
and control checks, which we explain below in the procedures section. As a result, we
hypothesize that the principle of double effect extends to scenarios depicting organizational
crisis.
=========================
TABLE 2 GOES ABOUT HERE
=========================
Lack of Sensitivity to the Magnitude of the Outcomes
Building upon the rational moral reasoning model, Jones (1991) articulated several
dimensions of a moral issue that increase its intensity, that is, the immediacy with which an
adequate solution is required. One of these dimensions is the magnitude of the consequences.
Specifically, Jones (1991:374) argues persuasively that “an act which causes 1,000 to suffer a particular
injury is a greater magnitude of consequence than an act that causes 10 people to suffer the same
injury.” Jones justifies this conclusion based “on common-sense understanding and observation
of human behavior and empirically derived evidence.”
Consequently, we posit that if a person is not considering the magnitude of the
consequences, he or she is acting intuitively. As Table 2 indicates, we have varied the negative
consequences across scenarios. Traditionally, organizational ethicists have been quite certain that
the magnitude of the consequences will influence judgments of moral permissibility (e.g.,
Butterfield, Treviño, & Weaver, 2000, first hypothesis regarding the magnitude of consequences).
In scenarios where the consequences are manipulated and the means remains stable, the
outcomes are in fact significant factors in moral judgments. In the present case, however, the
consequences remain stable and the means are manipulated, causing the subject to render
judgments based upon intuitive responses to behavior. As a result, in order to capture the
influence of outcomes on judgments of permissibility, we constructed two separate scenarios,
one with higher consequences and another with lower consequences.
Quantity sensitivity requires the rational calculation of impact or utility, and such a
calculation in itself defies a deontological or rule-based logic, as well as our characterization of
an unconscious intuitive reaction. In fact, ignoring outcomes demonstrates a lack of moral
reasoning in several of the moral philosophies: utilitarianism (Mill, 1910/1972), pragmatism
(Dewey, 1922), or situation ethics (Ross, 1930). Our empirical argument proceeds accordingly:
If the moral merits of behavioral outcomes are dismissed by natural, non-conscious
deontological rules, a person is precluded from consciously weighing these moral merits based
on the amount of overall good. We propose that the fact that a subject demonstrates insensitivity
to outcomes signals an intuitive, non-conscious reaction to the moral dilemma.
In cases of rational moral reasoning, however, one may be given a choice between
damage done to, say, the foreseen 60,000 citizens of Hiroshima versus 1,000,000 U.S. allied
combatants. In this choice one can easily understand (if not agree with) that a reduction of harm
by 94% is an attractive, and some would say moral, alternative. But rationality becomes more
difficult if there is no meaningful difference in outcomes between solutions or the shared
intentions of the behaviors to end a war. What we propose one’s mind does in these cases is
respond intuitively to the behavior itself such that some people would find the act permissible
and others would not. Consequently, consistent with the argument supporting moral intuition, we
hypothesize that subjects will show no sensitivity to the magnitude of the consequences.
Moral Dumbfounding
Moral dumbfounding occurs when a person senses that an action or idea is morally
reprehensible but is unable to articulate a reasonable justification for her position (Haidt,
2001). As implied earlier, the heritage of cognitive moral reasoning demands that the
subject adequately, according to principles of morality, indicate a plausible justification as to
why he or she came to a conclusion about moral permissibility. The ability to articulate a
justification clearly imposes the requirement of moral consciousness upon the subject. In some
cases, such as the DIT-2 (Rest et al., 1999), the MJT (Lind, 1977), and the Management Moral
Judgment Test (Loviscky, Trevino, & Jacobs, 2007) justifications are pre-determined for the
subject, and she selects those that seem to best fit her reasoning. From the selection of these
justifications, the analyst develops a score designed to reflect one’s level of moral reasoning.
In contrast, Hauser et al. (Hauser, 2006b; Hauser, Cushman, Young, Jin, & Mikhail,
2007) illustrate that an inability to articulate a reasonable justification for one’s selected course
of action bolsters the argument that the conclusion was derived from an intuitive response. This
absence of moral justification is indicative of a spontaneous and rapid response to the more
provocative or offensive aspects of the scenario, further inhibiting reasoned answers. As a result,
in further support of moral intuition we hypothesize that subjects who submit differing
permissibility responses for action-pairs with identical outcomes will be incapable of providing
adequate moral justification for those permissibility responses.
Together, moral dumbfounding, insensitivity to the magnitude of the consequences and
the principle of double effect indicate the degree of non-conscious, intuitive response. In the
next section we devise an approach for testing and analyzing the role of moral intuition in
judgments of moral permissibility applied to settings of organizational crisis.
METHOD
Subjects
The total number of subjects was drawn from samples in three data collection phases.
Two samples were volunteers attending business and psychology classes at a mid-size public
university in the midwestern United States during the fall of 2007 (n = 166 and n = 80
respectively). A third sample was a non-student group solicited by members of the first author’s
students in return for a small amount of extra credit (n = 180). Overall there were 438 subjects
who participated in the research. Students in the sample were provided the incentive of a small
amount of extra credit for their participation. Twelve subjects were removed from the pool either
because they did not meet the control conditions or because their explanations contradicted the
choices described below (N = 426). The average age of the subjects was 27.6 years, with a range
between 18 and 74 years. Subjects reported an average of 10 years of work experience
and about 1.6 years of managerial experience. The range for work experience was between
0 and 50 years. About 78% of the subjects reported that they grew up in average economic
circumstances or better. Eighty-nine percent of the sample reported their race as white, and the
remaining 11% reported various minority heritages. The largest non-white minority group was
African American (5.9%). The sample was 49.8% female. The research complied with the
University’s Institutional Review Board guidelines for human subjects research.
Procedure
Subjects began the exercise by acknowledging that they understood that the research was
voluntary and they could cease participation at any time and/or refuse to answer any question.
Demographic information was then collected. Unique lottery numbers were used to identify
participating students for awarding extra credit.
The specific test vignettes were constructed from 10 different scenarios divided into three
groups. Each questionnaire included two action-pair dilemmas and at least one control vignette.
An action-pair has identical consequences and intentions but the means of achieving that end are
varied. These questionnaires were distributed randomly across the subjects. The control vignette
was written such that a person should clearly find the behavior permissible. In the first phase
subjects were administered the set of low consequence scenarios, subjects in phase 2 were
administered the set of high consequence scenarios, and subjects in phase 3 were randomly allocated
action pairs of both high and low consequences.
The test also presented each subject with space for justifying why a particular response
was selected. The questionnaire was short and took an average of about 10 minutes to complete.
In order to avoid the possibility of order effects reported in previous studies (Hauser, Cushman,
Young, Jin, & Mikhail, 2007) each subject received just one action-pair to evaluate for
permissibility; either the disease action pair (D1-D2 or D1Hi-D2Hi) or the plant explosion
scenario (P1-P2 or P1Hi-P2Hi). The question following the scenarios and the vignettes simply
asked whether it was permissible to perform the particular act as it was described in the two
scenarios of the action pair, and the subject responded categorically either yes (coded 1) or no
(coded 0) for each scenario, the control vignette, and the manipulation check. After answering
these questions subjects were asked to justify their decisions. There was no extra credit provided
for the subjects in the non-student sample. Table 2 describes the characteristics of each scenario.
Analysis
Subjects were eliminated if they answered negatively to either of the control
vignettes (i.e., responded that helping someone was not permissible even when it was a costless alternative). Only
eight subjects failed this control test. Subjects were also excluded if their explanation
contradicted the choices of permissibility that they made. For example, in one case the subject
responded in the manipulation check vignette that it was impermissible to sidetrack the trolley to
kill 1 person to save other people, but the justification that was offered was: “Where loss of life
was reduced the course of action seemed permissible.” There were four of these cases.
Scenarios were analyzed by comparing responses to vignettes for the trolley
(manipulation check), control, disease, and plant-explosion scenarios with the disease and plant
explosion scenarios having both “high” and “low” consequences. Subjects were unaware
whether the consequences in their scenario were considered high or low by the investigative
team. In one crisis action pair a chemical plant fire needed to be contained or an escaping
chemical cloud would fatally poison 3000 (30,000 in the high consequence condition) city
residents. The action taken in the P1 scenario was to remotely seal the plant, trapping and
causing the death of 30 workers. In the comparison scenario, the action was to force the thirty
workers to manually seal themselves within the plant (P2). In the second crisis action-pair, a
highly treatment-resistant strain of tuberculosis was expected to cause the death of 100,000
people (1,000,000 in the high consequence condition) unless an antidote was administered (D1
and D2). The antidote had to be tested, however; the action in one scenario was to pay
people who volunteered to be test subjects, while in the other scenario people were forced to be test
subjects. Table 2 contains the dimensions of all scenarios.
Similar to Hauser et al. (Hauser, Cushman, Young, Jin, & Mikhail, 2007), subject responses
were analyzed by comparing their judgments of permissibility for each of the action pairs.
Because the consequences of an action are held constant for each action pair, as are the
intentions (sealing the plant to save lives, and testing the antidote to save lives) we examine
whether a significant number of subjects rated one course of action to be more permissible than
another. The difference in permissibility ratings is inferred to be the result of the intuitions about
the means used to achieve saving the lives of people in the scenario.
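The within-pair comparison just described can be illustrated with a short sketch. The counts below are hypothetical (chosen only to echo the proportions reported later), and the pure-Python chi-square is shown for transparency rather than as the analysis the study actually ran:

```python
# Illustrative sketch only (not the authors' analysis code). A 2x2 chi-square
# test of whether judged permissibility depends on which vignette of an
# action pair a subject saw. Counts here are hypothetical.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: vignette (harm as side effect vs. harm as means)
# cols: permissible, impermissible
chi2 = chi_square_2x2(87, 26, 21, 92)
print(f"chi2 = {chi2:.1f}")  # far beyond the 3.84 critical value (df = 1, p = .05)
```

A large statistic here would indicate that permissibility judgments track the means of achieving the outcome, since consequences and intentions are identical across the pair.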
RESULTS
As our hypotheses outline, we were concerned with evaluating the role of moral intuition
in organizational crisis contexts. To this end we examine the results for the principle of
double effects, moral dumbfounding, insensitivity to the magnitude of the consequences, and
whether there were differences in permissibility decisions due to demographic variables. Each is
discussed below.
Universality Across Demographic Factors
In many studies involving moral evaluation, researchers have found significant
differences emerging relative to gender (e.g., Gilligan, 1982). We used one-way analysis of
variance when comparing the permissibility of the actions at both the lower and higher
consequence level, for all action-pairs. This involved 12 different tests. Gender emerged as
significant in only 1 of these responses: Males were likely to find paying people to be test
subjects for the antidote tests in the high consequence disease scenario significantly more
acceptable than females (N = 127, F = 5.98, p ≤ .02).
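A minimal sketch of the one-way analysis of variance described above, using hypothetical 0/1 permissibility codes rather than the study's data:

```python
# Hedged sketch with hypothetical data (not the study's): a one-way ANOVA on
# permissibility coded 1 (permissible) / 0 (impermissible) across two groups.

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over lists of numeric responses."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

males = [1] * 45 + [0] * 20      # hypothetical: ~69% rated the action permissible
females = [1] * 32 + [0] * 30    # hypothetical: ~52% rated it permissible

F = one_way_anova_F([males, females])
print(f"F = {F:.2f}")
```

With two groups, this F test is equivalent to an independent-samples t test (F = t²), which is why either framing appears in the literature.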
The same tests were conducted for age and racial background. Because of a very small
number of subjects in several of the racial categories (n ≤ 2) we evaluated the responses of
whites of European descent against a group of all other racial backgrounds. No significant
effects emerged. Similarly, self-reports of socioeconomic status while growing up were tested
for effects. Once again, because of a low number of subjects in some socioeconomic groups, we
compared those that reported a relatively average background (78%) against all those below
average and those reporting above average. Once again, no significant differences in any of the
response variables emerged.
We then evaluated whether responses differed significantly according to years of work or
managerial experience. Once again we found no significant differences in
the responses based on work or managerial experience. Overall, these findings strongly support
the hypothesis that demographic effects of race, gender, and socioeconomic background are not
significantly influencing responses of permissibility. Of the 34 planned comparisons only one
demonstrated significance at the 95% confidence level. Overall, hypothesis 1 is supported, and
we find the demographic factors are overwhelmingly unrelated to responses on the 8 action
vignettes or the control vignettes. Universality of the decisions held across 97% of the tests. This
supports previous findings (Hauser, Cushman, Young, Jin, & Mikhail, 2007).
The Principle of Double Effect
As indicated above in the procedure section, three scenarios elicited permissibility
judgments on several dilemmas: the trolley problem as a replication-manipulation check
scenario, the control scenario, and the action pair of the antidote testing problem (high or low
consequence x means or side-effect) or the plant-explosion problem (high or low consequence x
means or side-effect). In order to reduce order effects, subjects completed responses on only one
of the scenario pairs (e.g., the plant scenario at low or high consequence, means vignette versus
side-effect vignette). In addition, the trolley scenario replication served two purposes: (1) as a manipulation
check for the sensitivity of the other scenarios for detecting differences between subjects, and (2)
because the trolley scenario is a replication, it serves as an additional reference point for
evaluating the relative strength of the manipulation in the plant-explosion and disease-antidote
scenarios. Chi-square tests were run on the permissibility of each possible action to ensure that
all responses showed patterns significantly different from random selection. Responses to the
actions in the trolley, plant-explosion, and disease scenarios were all significantly
different from random and in the expected direction.
The moral permissibility of the courses of action taken in the control scenarios was
broadly agreed upon by the subjects. The trolley scenario with the action of flipping a switch,
killing 1 person but saving 5 (T1), was judged permissible by 86% of the sample. This
compares well to previous studies involving the trolley scenarios indicating a permissibility rate
for this action of 89% (Hauser, Cushman, Young, Jin, & Mikhail, 2007). The trolley scenario in
which an innocent bystander is thrown in front of the trolley to stop it from running over 5
people on the track received a permissibility rating of 16.4%. This also compares reasonably
well with the Hauser et al. (2007) study that reported an 11% permissibility rating. In addition,
these proportions are significantly different from random selection (t[N = 55] = 11.473, p ≤
.001). This result indicates that as a group, subjects made use of the information about the
behavior itself, beyond the consequences and intentions, as did subjects in previously reported
experiments.
In addition, the moral permissibility of the courses of action in the disease-antidote scenarios was
also broadly agreed upon. The scenario in which paid volunteers were infected with the disease to
test the antidote was deemed permissible by 61% of the subjects, versus the 5.6% permissibility
rating for testing the antidote on non-volunteers (df = 123, t = 12.22, p ≤ .001) in the low
consequence decision. In the higher consequence action pair the outcome for volunteer subjects
was a 66% permissibility rating versus a 5.4% permissibility rating for non-volunteers (df = 67, t
= 10.735, p ≤ .001).
In addition, in the low consequence condition, remotely sealing the
chemical plant after a dangerous explosion, killing the 30 workers inside, was
deemed permissible by 77% of the subjects, versus an 18.4% permissibility rating when the 30
workers were forced to seal the plant from within, resulting in their death (t = 12.328, df = 112, p ≤ .001). In
the high condition, 80% found remotely sealing the plant permissible while 17% found forcing
the workers permissible (t = 10.247, df = 63, p ≤ .001). These findings tell us two things: First, the
scenarios are sensitive enough to detect significant differences in moral reactions toward the
means by which identical consequences are achieved and, second, they support hypothesis 2. We
conclude the means by which the same consequences are accomplished significantly alters
whether the action is morally permissible regardless of the good accomplished or the intentions
of the actor. Consequently, the principle of double effect played a role in subject permissibility
responses in both high and low consequence conditions of organizational crisis.
Magnitude of the Consequences
The hypothesis that unconscious moral intuitions are quantity-insensitive holds
that subjects will not rate options significantly more permissible when substantially greater
consequences are in place. For example, in the plant explosion scenario, the low consequence
condition established a life to death ratio of 1 death for 100 lives saved. In the high consequence
scenario the ratio was 1 death for 1000 lives saved. In the low consequences disease scenario the
ratio was 1 death occurring for every 10,000 lives saved and in the high consequences decision it
was 1 death for every 100,000 lives saved.
Quantity insensitivity was tested in two steps. The first step asks whether similar
proportions of permissibility ratings were made by the subjects assessing the high consequence
scenario versus those assessing the low consequence scenarios. This test revealed that the
proportions of permissibility for the higher consequence action pairs were significantly different
from random between actions for both the plant explosion (t = 10.247, df = 63, p ≤ .001) and the
disease antidote scenarios (t = 10.735, df = 67, p ≤ .001). In addition, the proportions are
consistent with the direction of the low consequence decision, indicating subjects’ permissibility
ratings were not sensitive to the magnitude of the consequences at this step.
In the second step we applied a more stringent test. We evaluated whether the proportion
of subjects that found the lower consequence action pairs permissible was different from the
proportion of finding it permissible in the higher consequence action pairs. If the subjects in the
lower consequence condition determined that the behaviors were significantly less permissible,
then that difference can be attributed to the magnitude of the consequences.
To test for differences in these proportions independent sample t tests were used. Due to
differences in the sample sizes, pooled variance was calculated. The proportions of subjects who
found the lower consequence plant explosion action pairs permissible (and impermissible) was
compared to the proportion of subjects who found the higher consequence action pairs
permissible (and impermissible). Similarly, the proportion of subjects who found the lower
consequence disease antidote action permissible (and impermissible) was compared with the
high consequence sample.
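The pooled-variance comparison of proportions just described can be sketched as follows; the counts are hypothetical, chosen only to mirror the reported permissibility rates:

```python
# Illustrative sketch (hypothetical counts): the pooled-variance t test for the
# difference between two independent proportions, as described above.
import math

def pooled_t_for_proportions(x1, n1, x2, n2):
    """t statistic for p1 - p2 using the pooled proportion for the variance."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g., 38 of 49 low-consequence subjects vs. 35 of 44 high-consequence
# subjects rating the remote plant closing permissible (hypothetical numbers)
t = pooled_t_for_proportions(38, 49, 35, 44)
print(f"t = {t:.2f}")  # a value near zero is consistent with quantity insensitivity
```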
The proportion of subjects who found the lower
consequence remote-plant-closing permissible was not different from the
proportion who found the higher consequence remote-plant-closing permissible (t = 1.32,
N = 93, df = 91, n.s.). The proportions of subjects who found forcing the
workers impermissible were also not significantly different (t = 1.23, N = 93, df = 91, n.s.)
despite the number of lives saved. For the disease antidote problem, the proportion of
subjects who found the intentional action (using non-volunteers) to be impermissible in the
lower consequences action was not significantly different from the proportion of subjects who
found this action to be impermissible in the higher consequence problem (t = .731, N = 97, df =
95, n.s.). The difference between the proportions of subjects who found the volunteer action
permissible in the lower versus the higher consequence disease scenario was
also not significant (t = .690, N = 99, df = 195, n.s.). Taken as a whole, these
comparisons between proportions of permissibility and impermissibility between the high and
low consequence action pairs indicate that consequences are not related with the responses for
permissibility or impermissibility. These two tests, (1) that there are differences in the
proportions of permissibility and impermissibility between action pairs regardless of the
consequences and in the expected direction, and (2) that the proportions of subjects rating one
action permissible and the other impermissible does not significantly differ between the low
consequence and high consequence conditions, argue in favor of quantity insensitivity,
supporting our hypothesis.
Moral Dumbfounding
A third dimension in evaluating whether a moral decision is intuitive or reasoned is to
examine whether the subject can adequately articulate the principles that justify his or her
judgments of permissibility (Mikhail, 2007; Hauser et al., 2006). Because permissibility choices
are expected to be somewhat automatic and unconscious, and therefore
not dependent upon the outcomes of the action (addressed below), it is hypothesized that a
majority of subjects who rated the action pairs differently will be unable to adequately articulate
why they rated one scenario permissible and the other not-permissible.
The justifications proffered by subjects were coded into one of two categories: (1)
sufficient justification and (2) insufficient justification. We used a response-justification
coding scheme similar to that used in Hauser et al. (2007). Our criteria for a reasoned
justification included: (a) the explicit recognition of the factual differences between the
scenarios; (b) a moral evaluation that was based upon those factual differences; (c) the absence
of broad assumptions about the scenario or information that was not provided in the scenario
and; (d) a reasonably consistent logic in the response.
We accepted very liberal criteria for what counted as a moral justification, looking for key
words such as wrong, right, bad, good, moral, immoral, ethical, unethical, and should or
shouldn’t. An example of a sufficient explanation in the disease antidote condition was:
“Subjects were selected without volunteering. It is not morally permissible to jeopardize lives
without consent.” In this example the factual difference of lack of consent was the deontological
principle forming the basis from which to argue that it is morally impermissible to jeopardize
lives. An example of an insufficient response in the plant explosion scenario was: “In the first
action people were given a choice and the choice was for the safety of the earth. In the last
action the people were forced to do the action.” However, people in the first scenario were not
given a choice; their deaths were side-effects of sealing the plant. Consequently, this justification
was coded insufficient.
Initially, about 31% (n = 106) of the contrasting responses were evaluated by three judges.
The raters were the authors and a graduate research assistant. Inter-rater reliability among the
three judges as to what counted as a sufficient or insufficient justification was calculated by the
intraclass correlation at 93.7. Because the inter-rater reliability was high, the remaining 200
subjects were rated by the first author. The one previous study with similar empirical
benchmarks reports 70% insufficient justification and 30% sufficient justifications (Hauser,
Cushman, Young, Jin, & Mikhail, 2007).
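The reliability check above used an intraclass correlation; a cruder illustration of the same idea, using toy ratings rather than the study's codes, is the share of justifications on which all three judges gave the same code:

```python
# Toy illustration (ratings invented, not the study's codes). A simple
# inter-rater agreement check: the share of justifications on which all
# three judges assigned the same sufficient/insufficient code.
ratings = [  # one tuple per justification: codes from judges 1, 2, 3 (1 = sufficient)
    (0, 0, 0), (1, 1, 1), (0, 0, 0), (0, 0, 1),
    (1, 1, 1), (0, 0, 0), (0, 1, 0), (0, 0, 0),
]

unanimous = sum(1 for r in ratings if len(set(r)) == 1)
agreement = unanimous / len(ratings)
print(f"unanimous on {agreement:.0%} of responses")  # 75% for these toy ratings
```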
Approximately 19.4% (n = 65) of the 332 subjects who evaluated the permissibility in the
pairs differently were rated as having provided a sufficient justification for the differences in
their permissibility responses (80.6% insufficient). We used a two-tailed chi-square test to
determine whether the proportion of those that provided sufficient justifications (approximately
21%) was the same as the proportion that did not provide sufficient justification (approximately
79%). Results indicate that the proportion of those that provided a sufficient justification for
their permissibility judgments was significantly smaller than those who did not ( X2 [1, N=333]
= 36.735, p ≤ .001, two-tailed). This result further bolstered our proposition that moral intuitions
played a substantive role in evaluations and allowed us to support our fourth hypothesis.
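The form of the chi-square test above can be sketched as follows. It uses the rounded counts reported in the text (65 sufficient of 332), so the resulting statistic illustrates the test rather than reproducing the reported value:

```python
# Sketch of a one-degree-of-freedom chi-square goodness-of-fit test against the
# null hypothesis that sufficient and insufficient justifications are equally
# likely. Counts follow the rounded figures reported above (65 of 332).

def chi_square_gof(observed, expected):
    """Pearson goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

n = 332
observed = [65, n - 65]        # sufficient, insufficient
expected = [n / 2, n / 2]      # equal-proportions null

chi2 = chi_square_gof(observed, expected)
print(f"chi2 = {chi2:.1f}")    # well past the 3.84 cutoff for df = 1 at p = .05
```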
In sum, demographic factors were found not to influence subject responses. In
addition, the subjects were responsive to the means by which an end was accomplished, subjects
were not influenced by the magnitude of the consequences of the action, and subjects were not
generally able to provide adequate moral justification for their permissibility decisions. These
factors, taken together, argue on behalf of an influential and broad based role for innate moral
intuition in the evaluation of organizational crisis-based and morally relevant behavior. In the
following section we discuss these findings.
DISCUSSION
Sonenshein (2007) illustrated several limitations to the rationalist approach in moral
reasoning and we build upon that effort by specifying a model of moral intuition and empirically
testing its merits. Our results show that subjects’ intuitive responses (1) are difficult to
justify with traditional moral principles, (2) are insensitive to the magnitude of the consequences
of behavior, and (3) turn on the means by which important moral ends are achieved. While
previous research has examined some of these issues using the trolley problem we wanted to
contribute further by applying these and additional indicators to contexts of organizational crisis.
Implications for Theory and Method
Measuring the presence and influence of an unconscious and immediate cognitive event
is problematic. Because we applied the dual-process model to cognitive moral rationality, we
deduce by modus tollens that if a response does not meet the criteria of rationality
(quantity sensitivity, justifiability by moral principle, and absence of double effects) it must be
intuitive. Consequently, if more or fewer than two cognitive processes simultaneously exist, our
approach can fail. This is to say that tapping into something we are confident is an intuition is a
difficult theoretical and empirical endeavor. Neuroscience appears to provide a window to the
moral mind. For example, Young et al. (Young, Cushman, Hauser, & Saxe, 2007:8238)
concluded that moral judgments rely upon the cognitive processes mediated by the right
temporoparietal junction. Studies comparing the neural responses in intuitions as defined above
and rational moral reasoning as defined by Rest’s DIT-2 vignettes, or Loviscky et al.’s (2007)
more recent measure, may help us to further determine the role of intuition vis-à-vis moral
cognition.
One of the important aspects of the Universal Moral Grammar model is that reasoning
and emotion occur after cognitive recognition of permissibility (Figure 1). Consistent with other
models (e.g., J. Haidt & Baron, 1996; Sonenshein, 2007) rationality appears to occur after a
moral opinion is drawn and for the purpose of justifying that opinion in one’s own and others’
minds. Emotions in the UMG model, however, emerge at about the same instant as rationality,
whereas in Haidt’s model emotions trigger rationality. These differences are significant for the
role of emotions, grammaticality and their distinction from rationality. In the present paper we
have assumed emotions occur after a decision is made and consequently segregate emotional
responses from rational ones. This seemed to us a more practical approach than sequencing
emotions and rationality, particularly in light of the arguments that claim emotions defeat
rational responses (Donaldson, 1982, 2004; Johnston, 1988). As a result, the relationship
between emotions and rationality in moral decision making and intuition requires further
theoretical elaboration and empirical study.
In expanding the study of intuition to organizationally-based dilemmas, we have shown
most people retain the requirement forbidding purposely causing harm to another (treating
people as means to others’ ends) while permitting the harms of otherwise prohibited behavior if the
harm was not intentional (harm is a side effect and not a means to the end). It would be of interest
to understand more fully how far these concepts extend on both the dimensions of harm and
intention. For example, is it more permissible to cut the labor force when a company is about to
fail versus when a company merely wants to increase its profits?
In cross cultural research the role of individualism and collectivism may play an
important role. In nations where the collective good tends to outweigh the welfare of the few,
responses to the organizationally-relevant scenarios may be quite different. Hauser et al. (2007)
did test for differences in responses to the trolley problem in 33 subpopulations, but found no
differences among those that rated the action-pairs differently. Their finding strongly bolsters the
universality component of the research. We take this as a starting point from which to embark on
the null hypothesis for cultural differences, but we are currently uncertain how cultural factors
may influence intuitive responses. In fact, it may be the sequence of emotion and intuitive
judgments that is affected, much as the sequence of key grammatical components is
sometimes reversed between languages.
In addition, one fundamental assumption underlying approaches to moral intuition is that
key aspects of our moral behaviors and cognitive capacities evolved in our ancestral environment
and survive today by the grace of natural selection. Another assumption is that the best analogy
to moral reasoning is captured in the human capacity for language. Both of these assumptions are
arguable from the perspectives of social learning theory or social cognition theory (Bandura,
1991). Social learning and cognition have rich research histories in organizational ethics, and
the imposition of evolutionary theory will have substantive impact on differentiating between
what we “learn” and what we are “wired” to do.
Nevertheless, even if Rawlsian grammaticality does not adequately capture intuitions, the
evidence in favor of intuitions in general is almost undeniable. Further research in dual process
models that include intuitive responses is warranted both from the standpoint of the empirical
evidence here and in other research among moral psychologists that we have reviewed, and from
the theoretical standpoint advanced in moral psychology and philosophy arguing for a more
universal sense of human morality in general. For example, Haidt (2007) recently proposed that
intuitive morality involves more than just harm and justice between people in general, and
suggests that, if evolutionary selection has shaped moral responses, then it is likely that cultural
and tribal influences have been embodied in that selection process. These factors will influence the
soul of organizational life and can include how we work together when competing against others,
in-group and out-group dynamics, and notions of loyalty and obedience to authority.
Implications for Practice
Management practice that adequately copes with what we have described as an innate
moral nature presents substantive challenges. First, our results indicate that managers cannot
fully expect to mitigate the weaknesses of intuitive moral responding through work experience,
age, or managerial experience. Moreover, we have presented evidence that decreasing ratios of
harm to good do not persuade subjects to set aside their intuitions about behavior. It
appears that what is required at the individual level of analysis is a moral self-awareness that
assumes pre-judgments are occurring without benefit of interpersonal dialog and that these
prejudgments are simply starting places for evaluation and discussion.
Second, standard training programs for ethical compliance and shared values are not
likely to impinge upon the relationship between grammar-like moral structures and intuitive
reactions. In fact, principles and values may not become consciously available until sometime
after moral conclusions are settled upon and are in the process of being justified. Alternative
means of preparing employees might include live simulated events that trigger intuitive
responses and an evaluation of intuitive versus rational responses to those events. In addition,
dialogic engagement seems to be a critical component of achieving moral equilibrium, and
training in how to discuss issues of ethical substance should enhance the success of those
encounters.
Third, intuitive responses to organizational crisis may be particularly troublesome
because: (1) experience with these crisis situations is likely to be minimal; (2) the set of
possible alternatives is likely to be truncated due to bounded rationality and bounded
ethicality; and (3) outcomes of chosen alternatives are likely to be unpredictable and dependent
upon other contingencies unforeseen by the decision maker. Scholars have studied the way
humans respond in times of crisis and that work may well prove valuable in preparing people to
cope with the moral dimension of critical decisions (Weick, 1995).
Interest in when and why intuitions emerge in the general decision-making environment
can and should be paralleled in moral decision making. The extent to which an issue is complex
or ambiguous may either encourage or discourage intuitive responses. Moreover, framing effects
may influence whether and how intuitions are cued in the field (Butterfield, Treviño, & Weaver,
2000). We believe we have shown that the means of acting not only influences conceptions of
organizational procedural justice, but can also close down further critical reflection within the
organization. The same may be true of outside observers as they assess organizational
policies. We suggest that responses to organizational and social-policy formulations will be
influenced by moral intuitions. Although there are no definitive instances in which intuitions
have been the basis for social action, the principles in the present study strongly suggest they could be. The
demise of the military draft in the United States may be such a policy. In the late 1960s and
early 1970s potential inductees violently protested in many areas of the country and political
leaders were held accountable for war deaths. Conversely, it is possible that political leaders are
exempted from accountability for the deaths of military members in Iraq because, intuitively,
those deaths are a side-effect of volunteering.
We have illustrated the impact of the Universal Moral Grammar model of moral
intuitions on permissibility judgments within the dual cognitive process framework of moral
reasoning. We have found that conscious moral reasoning may be pre-empted by intuitive
responses to the means by which a given end is accomplished, even when that end is morally
justifiable. This strengthens the position that rational responses are not the only effects the
decision maker experiences. It further emphasizes the importance of understanding the role of
instantaneous, subconscious-level phenomena and how consciousness and behavior are
influenced by them. In evaluating their responses, we found that subjects' judgments were not
responsive to changes in the magnitude of the benefits the collective received. We also
discovered that adequate justifications of the judgments of permissibility were rendered by a
distinct minority of subjects (20%). If the harm caused was an unavoidable, but foreseeable,
side-effect of a solution to a threatening problem, subjects found the harm more acceptable. If
the harm resulted from treating people as a means of solving the problem, the harm was found
less acceptable. Finally, we found no demographic trends useful in predicting a subject's
permissibility response in our sample. Future research goals should aim toward evaluating the
interplay between conscious moral reasoning and intuitive responses, especially as they relate to
each other and to behavioral outcomes, and further specifying the nature of intuition. Moral
intuition in general, and Universal Moral Grammar in particular, can hopefully provide a
foundation from which to pursue these endeavors.
References
Arendt, H. (1963). Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Penguin
Books.
Bandura, A. (1991). Social cognitive theory of moral thought and action. In W. M. Kurtines & J.
L. Gewirtz (Eds.), Handbook of moral behavior and development, (Vol. 1, pp. 45-103).
New York: Guilford.
Baron, J. (1995). A Psychological View of Moral Intuition. The Harvard Review of Philosophy,
70, 36-43.
Baron, J., & Spranca, M. (1997). Protected Values. Organizational Behavior and Human
Decision Processes, 70(1), 1-16.
Bazerman, M. (1998). Judgment in Managerial Decision Making. New York: John Wiley &
Sons.
Bonebeau, E. (2003). Don't Trust Your Gut. Harvard Business Review, 76(May), 116-123.
Butterfield, K. D., Treviño, L. K., & Weaver, G. R. (2000). Moral awareness in business
organizations: Influences of issue-related and social context factors. Human Relations,
53(7), 981-1018.
Chomsky, N. (1998). On Language. New York: The New Press.
Cosmides, L., & Tooby, J. (2005). Neurocognitive Adaptations Designed for Social Exchange. In
D. Buss (Ed.), The Handbook of Evolutionary Psychology (pp. 584-627).
Cosmides, L., & Tooby, J. (2008). Can General Deontic Logic Capture the Facts of Human
Moral Reasoning? In W. Sinnott-Armstrong (Ed.), Moral Psychology (Vol. 1, pp. 53-130). Cambridge, MA: MIT Press.
Dane, E., & Pratt, M. (2007). Exploring Intuition and Its Role in Managerial Decision Making.
Academy of Management Review, 32(1), 35-54.
Dewey, J. (1922). Human Nature and Conduct. Mineola, NY: Dover Publications, Inc.
Donaldson, D. (1982). Paradoxes of Irrationality. In R. Wollheim & J. Hopkins (Eds.),
Philosophical Essays on Freud (pp. 289-305). Cambridge, UK: Cambridge University
Press.
Donaldson, D. (2004). Problems of Rationality. Oxford, UK.
Dworkin, R. (1977). Taking Rights Seriously. London: Duckworth.
Forsyth, D. R., Nye, J. L., & Kelley, K. (1988). Idealism, relativism, and the ethic of caring.
Journal of Psychology: Interdisciplinary and Applied, 122(3), 243-248.
Gilligan, C. (1982). In a Different Voice. Cambridge, MA: Harvard University Press.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral
judgment. Psychological Review, 108(4), 814-834.
Haidt, J. (2007). The New Synthesis in Moral Psychology. Science, 316(18 May), 998-1002.
Haidt, J., & Baron, J. (1996). Social Roles and the Moral Judgment of Acts and Omissions.
European Journal of Social Psychology, 26(2), 201-218.
Harman, G. (2000). Moral Philosophy Meets Social Psychology: Virtue Ethics and the
Fundamental Attribution Error. In G. Harman (Ed.), Explaining Value and Other Essays
in Moral Philosophy. New York: Oxford University Press.
Hauser, M. (2006a). Moral Minds. New York: Ecco.
Hauser, M. (2006b). Moral Minds: How Nature Designed Our Universal Sense of Right and
Wrong. New York: HarperCollins Publishers.
Hauser, M., Cushman, F., Young, L., Jin, R., & Mikhail, J. (2007). A Dissociation Between
Moral Judgments and Justifications. Mind & Language, 22(1), 1-21.
Hume, D. (1751). A Treatise on Human Nature. Oxford: Oxford University Press.
Johnston, M. (1988). Self-deception and the Nature of Mind. In B. McLaughlin & A. Rorty
(Eds.), Perspectives on Self-Deception (pp. 63-91). Berkeley, CA: University of
California Press.
Jones, T. M. (1991). Ethical decision making by individuals in organizations: An issue-contingent model. Academy of Management Review, 16(2), 366-395.
Kamm, F. (2000). Nonconsequentialism. In H. LaFollette (Ed.), Ethical Theory (pp. 463-488).
Malden, MA: Blackwell Publishers.
Kant, I. (1784/1990). Foundations of the Metaphysics of Morals (2nd ed.). New York:
MacMillan.
Kohlberg, L. (1984). The Psychology of Moral Development (Vols. 1 & 2). San Francisco, CA:
Harper & Row.
Lawrence, A., Weber, J., & Post, J. (2005). Business and Society: Stakeholders, Ethics, Public
Policy. Boston, MA: McGraw-Hill Irwin.
Lind, G. (1977). Moral Judgment Test.
Loviscky, G., Trevino, L., & Jacobs, R. (2007). Assessing Managers' Ethical Decision Making:
An Objective Measure of Moral Judgment. Journal of Business Ethics, 73(2), 263-285.
McIntyre, A. (2001). Doing Away With Double Effect. Ethics, 111(2), 219-255.
Mikhail, J. (2007). Universal Moral Grammar: Theory, Evidence and the Future. Trends in
Cognitive Sciences, 15(4), 1-10.
Mikhail, J. (2008). The Poverty of the Moral Stimulus. In W. Sinnott-Armstrong (Ed.), Moral
Psychology (Vol. 1, pp. 353-360). Cambridge, MA: MIT Press.
Mill, J. (1910/1972). Utilitarianism. Rutland, VT: J.M. Dent & Sons.
Moll, H., & Tomasello, M. (2007). Cooperation and Human Cognition: The Vygotskian
Intelligence Hypothesis. Philosophical Transactions of the Royal Society B, 362(1480).
Petrinovich, L., O'Neil, P., & Jorgensen, M. (1993). An Empirical Study of Moral Intuitions:
Toward an Evolutionary Ethics. Journal of Personality and Social Psychology, 64(3),
467-478.
Piaget, J. (1965). The Moral Judgment of the Child. New York: Free Press.
Rawls, J. (1971). A Theory of Justice. Cambridge, MA: Harvard University Press.
Rest, J. (1980). Moral Judgment Research and the Cognitive-Development Approach to Moral
Education. Personnel and Guidance, 58(9), 602-606.
Rest, J., Narvaez, D., Bebeau, M., & Thoma, S. (1999). Post-conventional Moral Thinking: A
Neo-Kohlbergian Approach. Mahwah, NJ: Lawrence Erlbaum.
Ritov, I., & Baron, J. (1999). Protected Values and Omission Bias. Organizational Behavior and
Human Decision Processes, 79(2), 79-94.
Ross, W. D. (1930). The Right and the Good. Oxford, UK: Oxford University Press.
Smith, M. (1987). The Humean Theory of Motivation. Mind, 96(381), 36-61.
Sonenshein, S. (2007). The Role of Construction, Intuition, and Justification in Responding to
Ethical Issues at Work: The Sense-Making Intuition Model. Academy of Management
Review, 32(4), 1022-1044.
Trevino, L., & Nelson, S. (2007). Managing Business Ethics (4th ed.). Hoboken, NJ: Wiley.
Warneken, F., & Tomasello, M. (2006). Altruistic Helping in Human Infants and Young
Chimpanzees. Science, 311(3), 1248-1249.
Watson, G., & Love, M. (2007). Shades of Moral Agency in Organizational Ethics. International
Journal of Management Concepts and Philosophy, 2(4), 337-364.
Weick, K. (1995). Sensemaking in Organizations. Thousand Oaks, CA: Sage.
Wright, R. (1994). The Moral Animal: Why We Are the Way We Are. New York: Vintage.
Young, L., Cushman, F., Hauser, M., & Saxe, R. (2007). The Neural Basis of the Interaction
Between Theory of Mind and Moral Judgment. Proceedings of the National Academy of
Sciences, 104(20), 8235-8240.
Figure 1. Comparing Moral Intuition Processes and Cognitive Moral Reasoning
[Flow diagram not reproducible in text. Elements: Perceived Stimuli; Moral Sensitivity; Action Analysis; Conscious Moral Judgment; Intuitive Judgment; Emotional Response; Justification; Action or Judgment; Moral Intention; Moral Action; with separate paths labeled Cognitive Moral Development and Universal Moral Grammar Model of Intuition.]
Table 1. Comparing Cognitive Moral Development and Universal Moral Grammar.

Attention to Moral Judgment
  Universal Moral Grammar: Rapid, unconscious and automatic; relies upon the cognitive-miser theory.
  Moral Development: Deliberately invoked and conscious, taking as long as necessary to fulfill the prescripts of moral reasoning.

Sequence of Moral Judgment
  Universal Moral Grammar: Prior to the emotional or reasoning response, but after initial analysis of the consequences and causes.
  Moral Development: After moral apprehension and before forming moral intention.

Moral Justification
  Universal Moral Grammar: Not explicitly linked to outcome.
  Moral Development: Descriptively and normatively linked to moral outcome.

Principles of Morality
  Universal Moral Grammar: Not consciously accessible to our awareness.
  Moral Development: Accessible, conscientiously and consciously applied to the situation.

Types of Principles
  Universal Moral Grammar: Permissible, obligatory or forbidden.
  Moral Development: Deontological, justice, utilitarian, caring, etc.

Moral Reasoning
  Universal Moral Grammar: Post-hoc justification for an innate response.
  Moral Development: Judging which action is the most justifiable given the moral principles involved.

Role of Context
  Universal Moral Grammar: Part of understanding the causes and consequences of the moral stimuli.
  Moral Development: Strives for universal principles which are context-free.

Functional Dependencies
  Universal Moral Grammar: Must work in concert with our faculties for language, vision, memory, attention, emotion, and beliefs.
  Moral Development: The ability to reason and to understand the moral implications of one's intentions is independent of other processes.
Table 2. Comparative Description of the Scenarios

(P1) Plant Fire
  Act: Remotely seal the plant.
  Emotional quality: Neutral/impersonal.
  Intended or unforeseen negative consequences: Foreseen side-effect.
  Negative consequences: Kill 30.
  Positive consequences: Save 3,000/30,000.

(P2) Plant Fire
  Act: Force others to manually seal the plant.
  Emotional quality: Negative/personal.
  Intended or unforeseen negative consequences: Intended means.
  Negative consequences: Kill 30.
  Positive consequences: Save 3,000/30,000.

(D1) Contagious Disease
  Act: Use volunteers to test antidote.
  Emotional quality: Neutral/impersonal.
  Intended or unforeseen negative consequences: Foreseen side-effect.
  Negative consequences: Kill 10% of 100.
  Positive consequences: Save 100,000/1,000,000.

(D2) Contagious Disease
  Act: Force non-volunteers to test antidote.
  Emotional quality: Negative/personal.
  Intended or unforeseen negative consequences: Intended means.
  Negative consequences: Kill 10% of 100.
  Positive consequences: Save 100,000/1,000,000.

(T1) Trolley Problem
  Act: Flip switch to divert trolley.
  Emotional quality: Neutral/impersonal.
  Intended or unforeseen negative consequences: Foreseen side-effect.
  Negative consequences: Kill 1.
  Positive consequences: Save 5.

(T2) Trolley Problem
  Act: Shove person onto track to stop train.
  Emotional quality: Negative/personal.
  Intended or unforeseen negative consequences: Intended means.
  Negative consequences: Kill 1.
  Positive consequences: Save 5.

(C1) Control Case 1
  Act: Administer needed medication.
  Emotional quality: Positive/personal.
  Intended or unforeseen negative consequences: No negative effects.
  Negative consequences: None.
  Positive consequences: Save 1.

(C2) Control Case 2
  Act: Steer truck away from road workers.
  Emotional quality: Positive/personal.
  Intended or unforeseen negative consequences: No negative effects.
  Negative consequences: None.
  Positive consequences: Save 1.