Robotic Companions: Some Ethical Considerations about Designing a Good Life with Robots

Lawrence M. Hinman, Ph.D.
Professor of Philosophy
Co-Director, Center for Ethics in Science & Technology
University of San Diego
Larry@EthicsMatters.net
May 17, 2009

Overview

Definition: Robotic Companions
The General Question
• Designing a good life that encompasses both humans and robots
• Ethics as experimental science
Seven Specific Questions
• Transforming filial responsibility
• Transforming expectations of humans
• Designed for honesty
• Sexual companions
• Robotic fungibility
• Robots as slaves
Summary
Conclusion

Definition: Robotic Companions

Principal focus is on sociable robots (following Breazeal et al.):
• Roughly humanoid in appearance
• Fairly autonomous
• Capable of emotion recognition and voice recognition
• Basic drive to care for others
• Capable of expressing information
• Capable of expressing (the appearance of) emotions

The General Question

Distinguish two conceptions of ethics:
• Negative, other-directed. Focuses on what others do wrong.
• Positive, future-directed. Focuses on how we can create a good life together.

The general question here is about what counts as a good life together that encompasses both humans and robotic companions.
• Part of a larger domain that includes cyborgs, animals, and more autonomous robots.

This means that ethics must do empirical research to determine the ways in which humanity is being transformed. The following specifies areas for research, not a priori answers.

Ethics as an Experimental Science

This suggests that the job of moral philosophers is not to dictate right and wrong, but to highlight areas of concern for research.

Nadeau suggests: artificial intelligence works by heuristics, and there is one heuristic theory of moral reasoning, rule utilitarianism. The idea is that from experience one learns which patterns of behavior have caused benefit and which have caused harm, and that experience is generalized into moral rules of thumb that guide ethical action. The rules of thumb can be overridden in circumstances in which it becomes evident that following them will cause harm or fail to do good. They are defaults.

Filial Responsibility

The first interesting question is about the possible ways that companion robots can transform our understanding of filial responsibility. The moral contours of human life are shaped by certain basic events, including:
• Being born
• Creating new life (conceiving)
• Working
• Dying
• Being nurtured
• Nurturing

Q1: How will the widespread use of companion robots transform our experience of nurturing and being nurtured?

Changing Expectations about Humans

Companion robots can be extraordinarily patient, tolerant, and supportive, often far more so than their human counterparts. The second interesting question concerns the impact that human-robot interactions will have on human-human interactions. Bluntly put, will we come to prefer robots?

Q2: How will the widespread use of companion robots change our expectations about other humans? Will we expect more of them?

Designed for Honesty

We face a number of interesting questions about the honesty of companion robots. Here are two.

Q3: Should companion robots always tell the exact truth to their charges?

We could imagine someone asking his companion robot whether he looks healthy today. The robot might always tell the truth, might always say only positive things, or might exaggerate the positive by 10%.
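To make the contrast concrete, here is a minimal sketch (in Python) of the three reporting policies just described. Everything in it is a hypothetical illustration: the function names, and the reduction of "looking healthy" to a single number between 0 and 1, are assumptions made for the example, not a claim about how a companion robot would actually assess its charge.

    # Hypothetical sketch of the three reporting policies mentioned above:
    # exact truth, positives only, and a 10% positive bias. The "assessment"
    # is just an assumed score in [0, 1]; a real robot's perception of its
    # charge would obviously be far richer than a single number.

    def exact_truth(assessment: float) -> str:
        """Report the assessment exactly as it is."""
        return f"You look about {assessment:.0%} of your healthy baseline today."

    def positives_only(assessment: float) -> str:
        """Mention only what is going well; never deliver bad news."""
        if assessment >= 0.5:
            return "You look well today."
        return "Let's focus on what is going well today."

    def positive_bias(assessment: float, bias: float = 0.10) -> str:
        """Exaggerate the positive by a fixed fraction (10% by default)."""
        inflated = min(1.0, assessment * (1 + bias))
        return f"You look about {inflated:.0%} of your healthy baseline today."

    if __name__ == "__main__":
        for policy in (exact_truth, positives_only, positive_bias):
            print(policy.__name__, "->", policy(0.62))

Even in this toy form, the design choice that Q3 asks about is visible: each policy is trivial to implement, and nothing in the code itself tells a designer which one ought to be shipped.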
Q4: Should companion robots always report accurately on their charges to their supervisors?

It would be surprising if companion robots didn't eventually include a reporting function to send information back to supervisors.

Sexual Companions

The next interesting question is whether we should allow such robots to provide sexual stimulation or satisfaction to their charges.

Q5: Should companion robots function as sexual companions?

Fungibility

Many objects are fungible: one instance can be substituted for another without loss or change. A dollar bill is a paradigm case; one is just as good as any other. Objects of emotional attachment are generally not fungible. If I am married to someone who has a twin, I couldn't substitute the twin for my spouse in the way in which I could substitute one dollar bill for another.

Q6: Should companion robots be treated as fungible? In other words, are robotic companions to be seen as interchangeable, or as irreplaceable objects of emotional attachment?

Robots as Slaves

I wonder whether we don't implicitly think about robotic companions as slaves, available to do our bidding but not centers of interest in themselves.

Q7: Should companion robots be designed and treated as slaves?

I don't know the answer to this question, but it is implicit in several of the preceding questions. It seems that we might be able to understand some of the possible dangers here by looking at the literature on slavery: Aristotle on the natural slave, Hegel on the master-slave dialectic, Marx on Hegel, narrative accounts of slaves, and so on.

Summary

Q1: How will the widespread use of companion robots transform our experience of nurturing and being nurtured?
Q2: How will the widespread use of companion robots change our expectations about other humans? Will we expect more of them?
Q3: Should companion robots always tell the exact truth to their charges?
Q4: Should companion robots always report accurately on their charges to their supervisors?
Q5: Should companion robots function as sexual companions?
Q6: Should companion robots be treated as fungible?
Q7: Should companion robots be designed and treated as slaves?

Conclusion

Companion robots will be a fact of life in the near future, barring some major disaster. The interesting question is how we can construct a good life together with companion robots and human beings. The intent of the preceding seven questions is to highlight areas of concern: factors that might make it more difficult to construct a good life together.