SOME CLASSIC CHALLENGES TO CONVENTIONAL MORALITY

Morality and sacrifice
Can you think of situations where an agent's doing what is morally obligatory (by ordinary standards) involves personal sacrifice on the part of the agent?
Note: by "personal sacrifice" we mean a net loss. We can allow that doing the morally required act might benefit the agent in some respects. We still have an example of personal sacrifice if the net result of performing the act (as compared to not performing it) is negative for the agent.

Two worries going back to Plato
On the plausible assumption that morality often requires genuine self-sacrifice, two questions arise:
1. Are we psychologically capable of setting aside self-interest, or are we by nature selfish creatures?
2. Even if we can set aside self-interest in the ways that morality requires, do we have reason to do so? What reason, if any, do we have to be moral?

But does morality require sacrifice?
Consider some plausible candidates for moral rules:
Pay your taxes (in a reasonably just society)!
Conserve water (in a shortage crisis)!
Don't cause unnecessary suffering!
One might think that these are really just rules that promote self-interest. After all, we would all prefer living in a world where they are respected over a world where they are rejected.

Sacrifice illustrated
Ranking options in the matrix: 4 = best, 1 = worst. Player B = you. Player A = an arbitrary player. [Payoff matrix not reproduced here.]

Two lessons of the matrix
1. There is little doubt that we as a group have reason to act morally. We would all prefer to live in a world where everyone acts morally over a brutal state of nature/war.
2. It is a separate question whether I as an individual have reason to be moral. The best world from the point of view of my own interests is the world where, e.g., I don't have to pay taxes and my failure to pay goes unnoticed. (Keep in mind that my contribution, were I to pay, is just a drop in the bucket, so my failure to pay would not make society worse in a way that would negatively impact me.)

Two kinds of sacrifice
Sacrifice for the sake of greater personal gain: You might sacrifice something you value in the interest of avoiding potentially worse outcomes overall (e.g., having to go to prison) or of promoting potentially better outcomes overall (e.g., gaining favors from others). Call this instrumental sacrifice.
Sacrifice for the sake of the greater good: Morality seems to require personal sacrifice (e.g., abstaining from stealing or enslaving) even in cases where there is no long-term gain for you. Morality often seems to require genuine sacrifice of us.

Two forms of moral skepticism
It is highly plausible that conventional morality demands genuine self-sacrifice. This aspect of conventional morality has been a target of moral skeptics.
1. Some have doubted that we are psychologically capable of setting aside self-interest. The psychological egoist insists that we are by nature selfish.
2. Some have doubted that we have reason to set aside self-interest in the ways that morality requires. The ethical egoist doubts that we have reason to be moral.

Psychological egoism defined
Psychological egoists maintain that the ultimate goal of all voluntary behavior in humans is in each case the agent's own perceived good or happiness.
A consequence: we are incapable of the sort of genuine self-sacrifice often required by morality. Human nature is such that we are incapable of doing the right thing because it's right.
*What are your initial thoughts about this theory of human nature?
What about it seems plausible or implausible?

Historical note: psychological egoism in the British tradition
Major proponents: Thomas Hobbes (b. 1588), John Locke (b. 1632), Jeremy Bentham (b. 1748)
Major critics: Bishop Joseph Butler (b. 1692) and David Hume (b. 1711)
It would be rather difficult to find a philosopher today endorsing psychological egoism. The criticisms of Butler and Hume are often regarded as decisive.

What about irrational behavior?
Defined in the broad terms above, psychological egoism is incompatible with cases of weakness of will: cases where subjects intentionally do what they judge to be the less good option as far as their own happiness is concerned.
*Can you think of any (not inappropriate) examples?

What about irrational behavior?
The egoist has two options:
1. Deny that weakness of will exists.
2. Redefine egoism. Understand it as a claim about all rational voluntary behavior.

What about non-rational behavior?
Suppose I have consciously developed a habit of always holding my right foot at a 45° angle, pointing away from the other foot, when I do dishes. There is no reason to prefer this stance over others. I just happen to be rather aware of what is going on with my feet and chose this way of standing.
*Why is this behavior non-rational rather than irrational? Why does non-rational behavior pose a prima facie problem for psychological egoism?

What about non-rational behavior?
Once again the egoist has two options:
1. Deny that voluntary behavior is ever non-rational.
2. Redefine egoism. Understand it as a claim about all rational voluntary behavior.

What about animals and children?
Many non-human animals and young children do not possess the concepts HAPPINESS and GOOD, and so cannot have happiness/goodness as goals in the relevant sense. At the same time it is plausible that these creatures regularly do things voluntarily.
*Isn't the egoist committed to an implausible view of (adult) human action as radically discontinuous with other animal behavior?

Psychological egoism expanded
The egoist might respond by broadening egoism in a way that would encompass voluntary behavior in animals and children (by avoiding mention of the good/happiness).
A great deal of voluntary behavior is shaped by operant conditioning (which involves associative learning): the creature's behavioral response to a stimulus is modified by rewarding or punishing the response.
Perhaps voluntary behavior is ultimately motivated by the creature's own pleasures/pains: the motives of gaining pleasure/avoiding pain shape its habits.

This broader form of egoism looks irrelevant
The view that human behavior is (largely) a product of conditioning does not show that we are fundamentally selfish. Presumably conditioning can produce unselfish habits. Even if moral training operates on self-interested motives (concerning rewards/punishments), the habits which result may be altruistic.

A brief digression
This broader construal of egoism may well be relevant to the further question: Do we have reason to act morally? Perhaps our inclination towards the moral life is a product of conditioning. On this line of thought, we don't have any reason to be moral; we have simply been manipulated to act unselfishly.

Psychological egoism reformulated
The egoist can suggest that adult humans exhibit a distinctive type of behavior:
Sometimes human/animal action is due to reasoning: the agent infers a course of action from (i) her current set of weighted goals and (ii) her beliefs about the best available means to those goals.
Only normal adult humans have goals shaped by their conception of their own good/happiness. The egoist claims that we have no other goals.

For discussion
Summing up: Psychological egoism is a view about rational, voluntary, adult human actions produced by reasoning about means to an end. The egoist insists that the only end or goal that ever motivates a human agent is, ultimately, her own personal happiness.
For discussion in small groups (2-4 people): Think of actions which present a prima facie difficulty for this view. Does the egoist have something plausible to say in response to the cases you have in mind?

Other goals
Think of human goals as things that we value, things that matter, things we care about. Most of us acknowledge a plurality of things that matter (e.g. knowledge, creativity, moral achievement…), and we explain people's behavior as motivated by these goods. Yes, these things also make us happy. But don't they make us happy precisely because we value them for their own sake?

Does egoism collapse into nihilism?
To deny the value/significance of anything outside one's own personal happiness is virtually tantamount to nihilism about value. If you think nothing outside you matters/has value, you will have a hard time coherently maintaining that your case is different!
*Do you agree that egoism is a precarious position?

The case of Finn
Suppose personal happiness is simply no longer an option for Finn. Perhaps he is rather old and miserable and is going to stay that way until he dies. Nonetheless, Finn values scientific achievement deeply, and he anonymously donates a great portion of his money to the AAAS (American Association for the Advancement of Science), knowing he will never himself benefit from the donation.

The case of Finn: a quick reply
Perhaps the proponent of egoism should concede that goals other than personal happiness can play a modest role in decision making, e.g., serving as a tie-breaker when all else is equal, or motivating us when there is no cost to ourselves. The egoist might insist that these other motives are relatively weak: not strong enough to lead us to perform acts of genuine self-sacrifice.

The case of Jake
Jake believes current farming practices involve excessive cruelty. Although he loves meat products, he has decided to stop eating meat in protest. Jake is not motivated by considerations of personal health, and he is not hoping for or expecting praise from others. Jake accepts that his protest results in a net loss as far as personal happiness is concerned.

Love lost?
Doesn't egoism have the unsettling consequence that there is no such thing as genuine love? Doesn't genuine friendship/love involve a concern for another's well-being for their own sake? (On the flip side, the egoist seems to be committed to denying the existence of genuine hatred as well.)

Have we shifted the burden of proof?
Plausible conclusion: the egoist is making an extraordinary claim. Can't we simply dismiss the egoist view as implausible? Do we have any reason to take the view seriously?

From evolutionary theory to egoism
From a Darwinian perspective, altruism can seem quite puzzling: we expect organisms to behave in ways that are likely to increase their own fitness, not the fitness of others. Altruism (by definition) reduces one's own fitness (e.g., signaling the presence of a predator), so how can altruism spread in a population? Free-riders will have a selective advantage, so how can altruism persist stably over time?
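
An aside before the game-theory slides that follow (a toy sketch of my own, not anything from the assigned readings): the free-rider worry is easy to make vivid in code. In a one-off Prisoner's-Dilemma-style interaction, defecting against a cooperator pays better than cooperating; but when the same individuals meet repeatedly, a conditional cooperator such as tit-for-tat can end up outscoring an unconditional defector, largely because cooperators do well against one another. The payoff numbers below (3 for mutual cooperation, 5 for exploiting a cooperator, 1 for mutual defection, 0 for being exploited) are standard textbook values, not the ranking matrix from the slides.

```python
# Illustrative iterated Prisoner's Dilemma (toy payoffs; not the matrix from the slides).
# 'C' = cooperate (the "altruistic" move), 'D' = defect (free-ride).

PAYOFF = {  # (my move, opponent's move) -> my payoff
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate on the first round, then copy the opponent's previous move.
    return their_history[-1] if their_history else 'C'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    # One-off logic: defecting against a cooperator pays 5 vs. 3, so free-riding looks best.
    # Iterated logic: total up each strategy's score against every strategy (itself included).
    strategies = {'always_defect': always_defect, 'tit_for_tat': tit_for_tat}
    totals = {name: 0 for name in strategies}
    for name_a, strat_a in strategies.items():
        for strat_b in strategies.values():
            totals[name_a] += play(strat_a, strat_b)[0]
    print(totals)  # tit_for_tat comes out ahead, mainly from games against its fellow cooperator
```

Running this prints a higher total for tit_for_tat than for always_defect, which anticipates the point below that altruism can persist when it is directed at kin or fellow altruists.
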
From game theory to egoism
Historically, scenarios like the Prisoner's Dilemma have been used to explain how cooperation can emerge and exist stably in nature. (next slide)
But in an influential article from 2012, William Press and Freeman Dyson argued that a class of noncooperative strategies (zero-determinant (ZD) strategies) will beat cooperative strategies over time.

Signaling: one-off vs. iterated
Ranking options in the matrix: 4 = best, 1 = worst. Player B = you. Player A = an arbitrary player. [Payoff matrix not reproduced here.]

In defense of altruism
Adami & Hintze (2013) have argued that ZD strategies are unstable over time, so cooperation may well be the best solution to problems facing individuals with potentially conflicting ends/aims.
The problem of free-riders can be addressed if the altruistic behaviors are typically directed towards kin or fellow altruists.
There is powerful experimental evidence suggesting the existence of altruism in humans & other animals.

Altruism in humans & other animals
Molly Crockett et al. (2014) showed that human subjects cared more about the pains of others than their own pains.
Nobuya Sato et al. (2015) showed that rats routinely prefer to assist another rat in distress over receiving a reward (a piece of chocolate).
Mylene Quervel-Chaumette et al. (2015) showed that dogs will share treats with other dogs, especially when the other dog is a friend.

New question: Do we have reason to be moral?
Our next issue does not depend on psychological egoism. We can allow that you are capable of setting aside self-interest. Our next issue is: When all your self-interested reasons are shelved, what reason (if any) do you have to behave morally?

Why be moral?
This question is a pressing one because morality seems to require genuine sacrifice. Sometimes being moral is prudent. It can be in your best interest to behave morally, e.g., to avoid penalties and to gain favors. Set such reasons aside! Other times doing the morally right thing can be an unrewarded burden. But, then, why do it?

Reflect on your own case
Take a moment to reflect on your own motives:
1. Do you sometimes do the right thing without any consideration of your personal welfare? If so, what moves you to do the right thing?
2. If you have reasons to be moral, are these reasons that all others can embrace as well?

Reasons for action
First things first: What do we usually mean when we say that someone has reason to do so-and-so?

Talk of reasons: the central cases
Sometimes in citing a reason for an action/event, we are explaining or stating the cause. Illustration: The reason the alarm went off at 4 a.m. is just that Tucker is completely incompetent.
In other cases we are describing a justification for an action or activity. Illustration: Tucker's reason for growing a beard is that he's trying to impress Molly.

Justifying reasons
The plant has reason to grow in the direction of sunlight, i.e., to facilitate photosynthesis.
Tucker's reason for shaving is that he kept getting food in his beard.
*Do you agree that the plant has a reason to grow in the direction of sunlight?

Justifying reasons depend on needs/wants
Tucker has reason to shave given his desire not to have food on his face.
The plant has reason to grow in the direction of sunlight given its needs relative to survival and reproduction.
Let's set aside needs and focus on motivational states (wants, desires, preferences) as a source of reasons.

From motivational states to reasons
Having motivational states involves having goals/ends, things you want. E.g., suppose you are motivated to increase your physical strength. You want to get stronger.
Some actions are better means to a given goal/end than others. E.g., spending time working out & eating protein is a better means to increased physical strength than lying on a couch imagining doing those things.
A person has reason to choose the action that is the best available means to achieving her goal/end.

Goals & hypothetical imperatives
What our discussion so far shows is that what we have reason to do depends on our goals.
Grow a beard! (if you want to impress Molly)
Shave your beard! (if you want a clean face)
*What we have here are hypothetical imperatives. How might we express this sort of imperative without employing the imperative grammatical mood?

Hypothetical imperatives without the imperative mood
You ought to shave your beard.
You should shave your beard.
You have reason to shave your beard.
It would be practically rational or prudent to shave your beard.
It is conditionally good for you to shave your beard.
Shaving your beard has instrumental value.

Are moral oughts hypothetical?
Are all oughts/imperatives/reasons for action contingent on our motivational states? If so, then we ought to understand a moral command like
Don't cause unnecessary suffering!
as short for something like
Don't cause unnecessary suffering if you are motivated by the welfare of others!

Moral oughts as categorical imperatives
The great German philosopher Immanuel Kant (1724–1804) famously denied that moral obligations are hypothetical imperatives. For example, the requirement that you not enslave another person is not contingent on your motivational states. You ought not to enslave another. Period. The 'ought' here is categorical rather than hypothetical.

An obvious question
As we have seen, hypothetical imperatives arise thanks to our motivational states. But what could the source of categorical imperatives be? Should we believe that there are any?!
Note that for Kant these questions concern the source of moral obligation and whether we have reason to be moral.

Subjective values and reasons
Suppose x and y are options available to you. You judge x to be a better option than y, all things considered. In other words, you value x more than y. For you x has a higher subjective value than y. You prefer x to y. Your preferring x to y gives you a reason to choose x over y.

Objective values
Consider the possibility that some things have a degree of value/worth independently of our valuing them. They have some degree of objective value: value independent of our attitude towards them.

The interest of objective values
Something's having objective value need not actually motivate us to act in any particular way, but its having that value would determine how we ought to behave. We might be obliged to protect, promote, produce… These oughts would be categorical, not hypothetical!
Objective values (if they exist) generate reasons for action, reasons independent of our motivational states. That is, objective values confer objective reasons for action.

The interest of objective values
Philosophers are interested in objective values in part because they seem relevant to the question: Do we have reason to be moral?
*How might objective values be relevant to this longstanding question? Illustration?

Are there objective values?
What might be some plausible candidates for things possessing objective worth/value? Do you have any worries about the idea that some things possess this kind of worth/value?
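
One rough aid for keeping the two kinds of 'ought' apart (a toy sketch of my own; the function names and examples are hypothetical and are not drawn from Kant, Mackie, or the readings): a hypothetical reason exists only relative to what the agent happens to want, whereas a categorical reason, if objective values supplied one, would bind the agent whatever her desires are.

```python
# Toy contrast between hypothetical and categorical reasons.
# My own illustrative gloss; the names and examples are hypothetical, not from the readings.

def has_hypothetical_reason(desires, action, means_to):
    # A hypothetical 'ought': the reason exists only if the action serves something the agent wants.
    return means_to.get(action) in desires

def has_categorical_reason(action, objectively_required):
    # A categorical 'ought': if some acts are objectively required, the reason binds
    # regardless of what the agent happens to desire.
    return action in objectively_required

# Tucker wants to impress Molly and does not care about a clean face,
# so the 'shave your beard' reason lapses while 'grow a beard' stands.
desires = {"impressing Molly"}
means_to = {"shave your beard": "a clean face", "grow a beard": "impressing Molly"}
print(has_hypothetical_reason(desires, "shave your beard", means_to))  # False
print(has_hypothetical_reason(desires, "grow a beard", means_to))      # True

# On the assumption that there are objective requirements at all:
required = {"refrain from enslaving another"}
print(has_categorical_reason("refrain from enslaving another", required))  # True, whatever the desires
```

The open question from the slides is whether anything could fill the role of objectively_required at all, and that is exactly what the queerness worries below challenge.
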

Doing without objective values
Most of the authors we read do not commit themselves to the existence of objective values. Two examples:
Rachels wants to say that we have reason to act morally, but that reason depends on our human nature and not on objective values.
Sartre denies that there are any objective values or any human nature, but he still finds a place for values, reasons, and even meanings.

What's wrong with objective values?
Philosophers have asked what in the world objective values could be and how we could come to know truths about them. Some insist that there are no satisfying answers to these questions. Consequently, we have reason to be skeptical of objective values.
One highly influential discussion along these lines is due to the Australian philosopher J. L. Mackie. I will offer an interpretation of his worries about objective values and reasons.

Mackie's argument from queerness
Mackie presupposes a naturalistic worldview and an empiricist view of knowledge.
Mackie is a naturalist with respect to metaphysics: to determine what kinds of things exist, we look to our best theories in the natural sciences.
Mackie is an empiricist about knowledge: apart from some conceptual truths, our knowledge is derived from sensory experience (observation & experimentation).

Metaphysical queerness
Look to our best theories in the natural sciences and you will find plenty of claims about what kinds/regularities in fact exist in our world. You will not find any claims about goodness, badness, or how things ought to be (independent of our desires/aims/goals). Our best scientific theories do not invoke anything like values. They reveal truths about how the world is, not about how it ought to be.

Metaphysical queerness
1. Don't smoke (if you want to be healthy)!
2. Don't cause unnecessary suffering!
The first is the sort of thing discovered by science. The second is very different from the first and very different in kind from anything scientists have discovered.
Do you share this worry about metaphysical queerness? Do you agree that there is something suspicious about objective values, values "out there" as part of the fabric of the universe independent of our valuations/attitudes? If so, is it because you endorse an austere, scientific image of the world?

Epistemological queerness
According to the empiricist tradition, the following pretty well exhaust our ways of gaining knowledge:
1. the a priori method of conceptual analysis
2. the a posteriori methods of observation and experimentation
But neither method is suited to reveal the presence of objective values in the universe. Knowledge of objective values would seem to require a third way of knowing, such as:
3. intuition
Note that intuition is not regarded as a valid source of premises in the natural sciences.

Other ways of knowing?
Do you think the scientific image of the world is overly narrow? Might there be other ways of discovering truths about the world beyond our proven scientific methods?

Our initial problem revisited
Acknowledging objective values has its costs: we are committing ourselves to claims without having the kind of supporting evidence that we require in the sciences. At the same time there is a benefit: objective values hold the promise of yielding objective reasons for acting morally.
But maybe we gave up too quickly on the idea that moral oughts can be grounded in subjective values.

Rachels' strategy
It is part of our constitution, our human nature, that we care about the welfare of others.
Rachels writes: "…it is easy to forget just how fundamental to human psychological makeup the feeling of sympathy is. Indeed, a man without any sympathy at all would scarcely be recognizable as a man…"
But given that we care about the welfare of others, we have reason to make the sacrifices demanded by morality (insofar as the sacrifices benefit others).

How Rachels avoids mystery
For Rachels, moral reasons for action are no real mystery. They are simply reasons generated by the sort of means-ends reasoning we employ in attempting to satisfy our desires. Compare:
If you want to improve your health, then you've got reason to stop smoking.
If you care about the welfare of others, then you've got reason not to cause unnecessary suffering.
These reasons for action depend on our valuations and attitudes, not on objective values.

Morality as rationalized benevolence
In our reading Rachels doesn't offer a theory about what morality is, but what he does say is in line with the idea that morality is fundamentally about rational pursuit of an end we care about: well-being.
If we reflect carefully, we will recognize that it would be arbitrary to care only about the suffering of some, not others. The thing we dislike (suffering) is in general a bad thing. We have reason to promote well-being generally because we are rational, benevolent beings.

Virtues of Rachels' approach?
- naturalistic
- depends on a plausible claim about human nature
- depends on a plausible view of morality as fundamentally about well-being/welfare

Worries about Rachels' approach?
- sociopaths and others who don't care enough
- people with peculiar priorities
- gives up on the plausible idea that moral rules are categorical imperatives (imperatives that are binding on us whatever our desires happen to be) in favor of the idea that moral rules are hypothetical imperatives, imperatives that are binding on us only insofar as we have certain desires/attitudes.