Chest - Volume 139, Issue 6 (June 2011) - Copyright © 2011 The American College of Chest Physicians
Medical Ethics
A Brief Historical and Theoretical Perspective on Patient Autonomy and
Medical Decision Making
Part II: The Autonomy Model
Jonathan F. Will, JD *
From the Bioethics and Health Law Center, Mississippi College School of Law, Jackson, MS
* Correspondence to: Jonathan F. Will, JD, Bioethics and Health
Law Center, Mississippi College School of Law, 151 East Griffith St,
Jackson, MS 39201
E-mail address: will@mc.edu
Manuscript received February 27, 2011; accepted March 19, 2011
Editor's note: This essay is a continuation of the first topic in the Law and Medicine curriculum of the ongoing “Medical Ethics” series. To view all articles from the core curriculum, visit http://chestjournal.chestpubs.org/cgi/collection/medethics. —Constantine A. Manthous, MD, FCCP, Section Editor, Medical Ethics
Reproduction of this article is prohibited without written permission from the American College of Chest Physicians (http://www.chestpubs.org/site/misc/reprints.xhtml).
PII S0012-3692(11)60310-3
DOI 10.1378/chest.11-0516
As part of a larger series addressing the intersection of law and medicine, this essay is the second of two
introductory pieces. Beginning with the Hippocratic tradition and lasting for the next 2,400 years, the
physician-patient relationship remained relatively unchanged under the beneficence model, a paternalistic
framework characterized by the authoritative physician being afforded maximum discretion by the trusting,
obedient patient. Over the last 100 years or so, in response to certain changes taking place in both research
and clinical practice, the bioethics movement ushered in the autonomy model, and with it, a profoundly
different way of approaching decision making in medicine. The shift from the beneficence model to the
autonomy model is governed legally by the informed consent doctrine, which emphasizes disclosure to
patients of information sufficient to permit them to make intelligent choices regarding treatment alternatives.
As this legal doctrine became established, philosophers identified an inherent value in respecting patients as
autonomous agents, even where patient choice seems to conflict with the physician's duty to act in the
patient's best interests. Whereas the beneficence model presumed that the physician knew what was in the
patient's best interests, the autonomy model starts from the premise that the patient knows what treatment
decision is in line with his or her true sense of well-being, even where that decision is the refusal of treatment
and the result is the patient's death.
Abbreviations
AMA: American Medical Association
This is the second of two introductory essays that are the initial contributions to a larger series addressing the
intersection of law and medicine, particularly with regard to the dynamics of the physician-patient relationship
and decision making at the end of life. The first essay explored the early history of the medical profession and
the dominance of the beneficence model, a paternalistic approach to the practice of medicine whereby
physicians exercised decision-making authority within the relationship at the expense of patient self-determination. [1] This essay picks up with the changes taking place beginning around the turn of the 20th
century and continuing through the bioethics movement that would lead patients (and the public at large) to
question the trust in and obedience to physicians upon which the beneficence model depended.
The shift from the beneficence model to the autonomy model is reflected in the United States by the legal
doctrine of informed consent. Though obtaining patient consent was not completely foreign to the early
practice of medicine, it had little if anything to do with honoring patient decision making. It was not until
lawyers, philosophers, and others external to the medical profession suggested an inherent value in
respecting the decision-making capacity of patients as autonomous agents that the duty to obtain consent
became the duty to obtain informed consent recognized under current legal and medical standards.
With limited exceptions, a physician must obtain a patient's informed consent prior to initiating treatment,
which of course means that the patient can withhold such consent. As the later pieces in this series will
address, patient refusal of medical treatment resulting in the patient's death can present complications in the
face of the government's interest in preserving life and the physician's duty (dating back to the Hippocratic
tradition) to use his or her medical skill and judgment to act in the patient's best interests.
Part 2—Patient Knows Best (the Autonomy Model)
The Golden Age of Medicine
By the second half of the 19th century, American patients, even with their emphasis on liberty and
individualism, had acknowledged that the practice of medicine was beyond the reach of the lay populace. At
that point, the American physician, with a reputable medical degree and state license in hand, had little
difficulty commanding the trust and obedience of patients. [2] Patient trust and obedience was an essential
component of the beneficence model and was an often unspoken characteristic of the so-called “golden age
of medicine,” the period in American medicine beginning when this trust was attained and lasting until medical
malpractice litigation became widespread. [3] The idealized image frequently associated with the early days of
this period finds the stoic physician riding triumphantly into town to make house calls, driving a horse-drawn
carriage and armed only with his black bag of diagnostic tools. [4] During a time when physicians did not have
high-tech diagnostic equipment, reliance on a knowledge of patient history (which often ran three generations
deep) was paramount, and treatment in the home was important for the development of the early physician-patient relationship. That said, under this framework, the practice of benevolent deception was commonplace.
[1]
The intimate nature of the physician-patient relationship, coupled with a medical ethic based in the
beneficence model, led to certain unilateral decisions being made by physicians that, by today's standards,
would be viewed as unsettling. For instance, physicians might have identified a severely disabled newborn
(such as one with spina bifida) as a stillbirth in order to spare the parents from making a difficult choice, or
decided to withhold antibiotics from an elderly person with pneumonia to let the disease “serve as the old
man's best friend.” [4] Any discussion of the morality of such physician behavior would have been conducted
solely within the medical profession. [1] , [4] As the 19th century came to a close, however, certain changes
were taking place that would cause patients to question the wisdom of this unfettered trust in and obedience
to their physicians.
The Changing Face of Research/Clinical Practice and the Early Consent
Cases
Prior to the onset of World War II, research with human subjects looked very different than it does today. Most
recorded research was performed by very few physicians, and they carried out experiments on themselves,
their family members, or possibly their neighbors. [4] Notwithstanding limited reports of royal pardons offered
in England to condemned prisoners willing to participate or of experiments performed on slaves in the United
States during the 19th century, [5] an obvious reason for the limited use of human experimentation was the
lack of willing subjects. As early as 1830, the common law of England acknowledged the need to obtain
subject consent prior to conducting research [4] ; however, given that nearly all early research either was performed on family and friends or was therapeutic in nature (attempting to treat an existing illness in isolated subjects), there was thought to be little need for extensive ethical discourse.
Perhaps a less apparent, though no less important, challenge faced by early would-be researchers was the
unification of medical standards driven by the American Medical Association (AMA) in the latter half of the
19th century. With the AMA's stated goal of eliminating “quackery in all its forms,” [5] the early organization
frowned on physicians deviating from standard practice. As will be more fully discussed later in this article, in
addition to drawing the ire of the AMA, experimentation that was viewed as deviating from established medical
standards could subject the physician to liability under a new breed of lawsuit known as medical malpractice.
Research on human subjects began to expand dramatically with the onset of World War II, and with it, there
developed a real need to evaluate the ethics of human experimentation. An important factor permitting this
expansion was the movement from treating patients in the home to treating them in the hospital, which
created a larger subject base. The earliest hospitals in the United States were almshouses operated by
religious orders, and patients typically only went there when care in the home was not possible. [2] In fact, the
traditional almshouse was not uniquely medical; it served the poor, orphans, the disabled, and even strangers
passing through the town. Toward the end of the 19th century, an effort was made to distinguish between
hospitals and institutions or asylums for orphans, the poor, and/or the insane. [2]
Between 1870 and 1920, the number of hospitals in the United States grew from < 200 to > 6,000. [2] While an
American physician in 1870 may have never set foot in a hospital, by the early 1930s, five out of six American
physicians had admitting privileges in at least one hospital, [2] and by 1960, < 1% of physician-patient
interactions were in the patient's home. [4] There are many and complex reasons for this rapid
transformation. [2] , [4] As populations boomed in metropolitan areas, it was clearly more efficient for a
physician to remain stationary and have patients come to him or her. Further, it was not possible for
physicians to bring new diagnostic equipment like radiography to the patient. In addition, increased knowledge
of the complexity of the human body inevitably led to specialization within the medical profession, such that by
the 1960s, only 20% of American physicians described themselves as being general practitioners. [4]
While it would be hard to argue that an increased understanding of the human body is a bad thing or that
more accurate diagnostic equipment should be avoided, the impacts of these advances on the physician-patient relationship cannot be ignored. The intimate relationship that developed and was indeed necessary
when the physician came into the patient's home and used family history as the primary diagnostic tool came
to be less and less relevant as physicians from different specialties used multiple tests and various types of
equipment to diagnose instead. [4]
It can hardly be disputed that the practice of medicine became less personal as the emphasis on scientific
knowledge increased. One need only look to the medical reports prepared by physicians, which took on a
markedly less personal nature and a more scientific tone beginning in the latter half of the 19th century. [5]
Though identifying a singular cause would be difficult, the rise of medical malpractice litigation, developing at the close of the 19th century and continuing to this day, may be no coincidence. [6] Indeed, a commission
established by President Nixon in the early 1970s (in response to what some viewed to be a malpractice
crisis) suggested that “one critical element in the rise of malpractice suits was the breakdown of the doctor-patient relationship.” [4] Early lawsuits against physicians also contributed to the establishment of a consistent
practice within the profession of obtaining patient consent, a previously foreign concept. [1]
Considering the ancient roots of both law and medicine, medical malpractice litigation is a relatively recent
phenomenon, [5] with one of the first commonly cited cases coming out of England in 1767. [7] Of course, once
the AMA was successful in establishing a sense of uniformity for medical practice, there was now a standard
against which to measure physician performance. [2] In the consent context, four lawsuits [8] , [9] , [10] , [11]
against physicians decided in the United States between 1905 and 1914 served as the foundation for the
law's treatment of patient consent over the next forty years. [5] Prior to these cases, courts had a rather broad
interpretation of consent. Once a surgical procedure was underway for which consent had been obtained (for
instance, surgery on the left ear), courts would then determine that implied consent existed for any other
treatment decision made by the physician during the procedure (as when the physician determined that the
right ear actually needed the procedure). In Mohr v Williams [8] and Pratt v Davis, [9] the Supreme Courts of
Minnesota and Illinois marked a departure from this practice by making it clear that implied consent could only
be used in very limited circumstances, such as emergencies. Using similar rationales, the courts determined
that patients have a right to protect their bodily integrity, which entitles them to evaluate the different risks and
dangers associated with each medical decision prior to giving consent. [5] Similarly, in Rolater v Strain, [10] the
court held that the physician could not depart from the specific procedure consented to by the patient.
While physicians did develop a more consistent practice of obtaining patient consent in the early 20th century,
the medical literature indicates that the practice was fueled more by a desire to respond to lawsuits than by a
moral imperative to respect patient autonomy. In a 1911 article, physician George W. Gay suggested that
“careful and explicit explanations of the nature of serious cases, together with the complications liable to arise
and their probable termination,… be given to the patient… for our own protection.” [5] Just three years after
that article, the last of the four early consent cases was decided.
In Schloendorff v Society of New York Hospital, [11] Justice Cardozo planted the seed for what would
become the informed consent doctrine when he wrote, “Every human being of adult years and sound mind
has a right to determine what shall be done with his own body; and a surgeon who performs an operation
without his patient's consent commits an assault, for which he is liable in damages.” [11] This oft-quoted
language from the legal system suggests notions of patient self-determination, but it would be another 50
years before philosophers, also external to the medical profession, would associate this legal right with the
ethical obligation to respect patient autonomy. While the courts had imposed their will on the medical
profession by influencing consent practice, certain events would occur in both the research and clinical
contexts over the next several decades that would give philosophers, theologians, and the public at large visibility into that which had historically gone on behind the closed doors of the medical profession.
Questionable Practices and the Need for Reform
The lack of ethical guidelines did not prevent experimentation with human subjects from becoming
widespread during World War II. The Nuremberg trials, which included prosecution of alleged crimes against
humanity, identified atrocities that gave worldwide exposure to the risks of not having such guidelines. The
types of experiments performed by Nazi physicians on concentration camp prisoners included prolonged
immersion in freezing water to study the process of freezing to death and exposing prisoners' genitals to x-rays in an effort to determine the most efficient way to sterilize large groups of people. [4] , [5] During trial,
attorneys argued that much of the experimentation was justified as an effort to prepare and protect German
soldiers. Hitler himself suggested in 1942 that it was unacceptable for “someone in a concentration camp or
prison to be totally untouched by war, while German soldiers had to suffer the unbearable.” [4]
In the end, these justifications were ineffective. The judges hearing the case took it upon themselves to
establish the “basic principles [that] must be observed [regarding human experimentation] in order to satisfy
moral, ethical and legal concepts.” [5] The resulting Nuremberg Code admonishes that “the voluntary consent
of the human subject is absolutely essential” and that the research subject “should be so situated as to be
able to exercise free power of choice” and “should have sufficient knowledge and comprehension of the
elements of the subject matter involved as to make an understanding and enlightened decision.” [12] The
Nuremberg Code was needed because research involving human subjects had become widespread and was now acknowledged as essential to medical development. Over the following years,
the World Medical Association recognized a need for standards broader in scope than those set forth in the
Nuremberg Code, and it thus adopted the Declaration of Helsinki [13] in 1964.
Research involving human subjects was also common in the United States, both during World War II and in
the decades following. But because no one could imagine atrocities like those exposed in Germany, little
attention was paid to ensuring that subject consent was part of American research protocols. For instance, not
unlike Germany, the United States government had an interest in protecting American troops facing diseases
such as dysentery and malaria. To test treatments for dysentery, which was common in the United States,
researchers needed only to look to orphanages to find conditions as deplorable as those faced by
soldiers at the front. Experimental treatments were thus administered without consent to institutionalized
children aged thirteen to seventeen. [4] Malaria, on the other hand, was a disease foreign to American soil.
Therefore, researchers went to state hospitals for the insane as well as prison hospitals to nonconsensually
infect otherwise healthy individuals with malaria so as to test the efficacy of experimental antimalarial
treatments. [4] In fact, most wartime research was performed in the United States on the institutionalized poor,
orphans, prisoners, the mentally disabled, minorities, and the like, without consent.
During World War II, little concern was raised about such research protocols, since “all citizens were—or were
supposed to be—contributing to the war effort,” including those incapable of fighting, like children and the
mentally infirm, [4] a justification not unheard of for the times. Unfortunately, and notwithstanding the
Nuremberg Code, such questionable and nonconsensual research practices would continue to thrive in the
United States in the years following the war.
Concerned, in the wake of the Nazi trials, about the potential for abuse in human research, Henry Beecher [14] undertook an evaluation of research practices in the United States, which had heretofore been self-regulated
within the medical profession. In 1966, he published an exposé in the New England Journal of Medicine
outlining 22 examples of questionable postwar research experiments, none of which involved subject consent.
Realizing that the journal was primarily read by physicians, Beecher also alerted the press to its pending
publication. [4] The public was outraged to learn of such things as live cancer cells being injected into human
subjects without their knowledge and the infection of healthy (though mentally disabled) children with
hepatitis. In other cases, similar to what was later discovered to have happened in the Tuskegee Syphilis
Study, known treatments were withheld from patients in order to monitor the progression of the given disease.
The ethical failings of medical professionals were not limited to the research context, however. As David
Rothman [4] points out, “A reluctance to trust researchers to protect the well-being of their subjects soon
turned into an unwillingness to trust physicians to protect the well-being of their patients.” Events surrounding
both the beginning and end of life in the clinical setting would soon gain public exposure, suggesting that such
reticence was well founded. In 1969, Baby Doe was born at Johns Hopkins with Down syndrome as well as a
surgically correctable intestinal blockage. The parents refused consent for the surgery to correct the blockage,
the hospital complied, and the baby ultimately died of starvation 15 days later. [4] As previously discussed,
decisions like this were not uncommon behind the closed doors of the medical profession; however, in this
case, certain physicians treating the newborn were sufficiently troubled by the events that they made a short
film that was shown with the support of the Joseph P. Kennedy Foundation. Public outcry ensued, with
philosophers, theologians, and members of the general populace wanting a say as to whether this was
acceptable. [4]
At around the same time, in response to advances in heart transplantation (the first transplant having been
performed in 1967), a group at Harvard Medical School developed a new definition of death (brain death as
opposed to cessation of cardiopulmonary function). The Harvard Brain Death Committee developed medical
criteria for brain death that they assumed would gain broad social consensus. They were wrong. The public became aware of the committee's findings and quickly pointed out that definitions of death are not solely the
province of medicine; they also include questions of social, theologic, and philosophical significance. [4] Given
the inherent conflict of interest in a physician certifying a patient as brain dead to better preserve organs for
transplantation, the public was once again on edge about putting unfettered trust in physicians.
New Voices: Informed Consent and Respecting Patient Autonomy
Perhaps the most lasting impact of Beecher's exposé and the public exposure of events like the death of Baby
Doe, the Harvard Brain Death Committee actions, and the Karen Ann Quinlan case [15] (the famous case from
1976 where the New Jersey Supreme Court authorized removal of ventilator support from a young woman in
a persistent vegetative state) was that these events brought to public light that which had historically been the subject only of internal regulation and discussion. With new voices, such as those of lawyers,
philosophers, and theologians, came a new approach to evaluating decision making in medicine. In addition,
these new discussions, part of what would be called the bioethics movement, were going on within the
broader context of the civil rights movement, where the rights of the oppressed were of particular import.
Protecting patients from the imbalance of knowledge within the physician-patient relationship was a primary
goal of the informed consent doctrine.
Though the term “informed consent” has been attributed to a case from 1957, [16] it would be several years
before it would be identified as being consistent with the concept of respecting patient autonomy. The theory
of informed consent is straightforward. Given the complexity of many medical decisions, it is not sufficient that
the patient merely consent to the procedure. Rather, as the court found in Salgo v Leland Stanford Jr
University Board of Trustees, [16] the physician owes the patient a duty to inform him or her of “any facts which
are necessary to form the basis of an intelligent consent.”
The seminal case of Canterbury v Spence [17] was decided in 1972, during the height of the bioethics
movement. After citing Schloendorff for the fundamental proposition that patients have a right of self-determination, the court expanded the concept by stating that “true consent to what happens to one's self is
the informed exercise of a choice, and that entails an opportunity to evaluate knowledgeably the options
available,” which can only be accomplished when a patient is able to look to the physician “for enlightenment
with which to reach an intelligent decision.” [17] By discussing the concept of knowledgeable or intelligent
consent in the context of a right of self-determination, the Canterbury court used terminology along the lines of
what philosophers would call respecting autonomous authorization.
In the same year that Canterbury was decided, the American Hospital Association published A Patient's Bill of Rights, which focused on improving standards for respecting patients admitted to hospitals. [5] It was also during the 1970s that philosophers gained their most prominent seats at the table. In response to the public
outcry over Beecher's exposé, the National Research Act was signed into law, which established the National
Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Importantly, the
commission's membership was dominated by outsiders to the medical profession, including more members
with law degrees and PhDs than MDs. [4] The commission was directed to identify the basic principles that
should guide research involving human subjects, and the fruit of the commission's effort was the Belmont
Report, published in 1979. [18] The report served as the foundation for future government regulation and
oversight of research with human subjects, and its principles (focusing on respect for persons, beneficence,
and justice) were codified in Title 45 of the Code of Federal Regulations. [19] It also served as the model for
later commissions that would focus on ethical guidelines in clinical practice.
In identifying “respect for persons” as a fundamental principle underlying research with human subjects, the
Belmont Report acknowledged that “individuals should be treated as autonomous agents.” [18] From the
Greek autos (self) and nomos (rule or law), “personal autonomy encompasses, at a minimum, self-rule that is
free from both controlling interference by others and from certain limitations such as inadequate
understanding that prevents meaningful choice.” [20] The emphasis on respecting patient autonomy in the
informed consent context has been tied to the American association with a liberal Western tradition that
advocates the importance of individual freedom and choice. [5]
The more philosophical reason for respecting patient autonomy stems from a “fundamental and universal
moral truth… that humans are owed respect for their ability to make reasoned choices that are their own and
that others may or may not share.” [21] As philosophers introduced the concept of respecting patients as
autonomous agents, there soon followed the “view that informed consent is not merely a legal doctrine, but
also a moral right of patients that generates moral obligations for physicians.” [5] Whereas it was implicit under
the paternalism of the beneficence model that knowledge of what was in the patient's best interests was solely
the province of physicians, now came a presumption under the autonomy model that “competent individuals
are better judges of their own good than are others.” [22] As might be imagined, the practice of benevolent
deception is largely frowned on under the autonomy model; however, it is still permissible under the label
“therapeutic privilege,” where a physician is capable of showing that full disclosure of information would be
contrary to the best interests of the patient. [5] Thus, therapeutic privilege lives on as an uncomfortable
reminder of a long history under the beneficence model, though, as the Canterbury court itself suggested, the
privilege must be “carefully circumscribed… for otherwise it might devour the disclosure rule itself.” [17]
Conclusion: Informed Consent and Refusal
The autonomy model is founded on the assumption that if given adequate information, a patient will be
capable of making an informed decision consistent with his or her sense of well-being, that is, an autonomous choice, free from controlling interference and deserving of respect as such. This is true even when a
patient's decision conflicts with a physician's recommendation. Nowhere is this conflict more apparent than in
the context of refusal of medical treatment in which the result is the patient's death.
As later pieces in this series will show, the right of a patient to make decisions at the end of life (for instance,
via advance directives) is rooted in the concept of the patient's right of self-determination as ethically justified
by the principle of respecting patient autonomy. While it may appear that the exercise of such a right by a
patient is in conflict with the physician's duty to act in the patient's best interests, where the patient is deemed
to know what is in his or her best interests, the principles would seem to be entirely consistent.
Acknowledgments
Financial/nonfinancial disclosures: The author has reported to CHEST that no potential conflicts of interest
exist with any companies/organizations whose products or services may be discussed in this article.
Other contributions: Special thanks to Constantine Manthous, MD, for organizing this series of articles and
to Kathy Cerminara, JD, JSD, for her guidance. Thanks also to the Mississippi College School of Law for
continued support.
REFERENCES:
1 Will JF: A brief historical and theoretical perspective on patient autonomy and medical decision making: part I: the beneficence model. Chest 2011; 139(3): 669-673.
2 Starr P: The Social Transformation of American Medicine, Basic Books, New York, NY, 1982: 33-179.
3 Katz J: The Silent World of Doctor and Patient, Free Press, Macmillan Inc., New York, NY, 1984: 1-2.
4 Rothman DJ: Strangers at the Bedside, Aldine Transactions, New York, NY, 1991: 1-284.
5 Faden RR, Beauchamp TL: A History and Theory of Informed Consent, Oxford University Press, New York, NY, 1986: 53-113.
6 Hall MA, Bobinski MA, Orentlicher D: Medical Liability and Treatment Relationships, 2nd ed. Aspen Publishers, New York, NY, 2008: 273-300.
7 Slater v Baker and Stapleton, 95 Eng Rep 860 (KB 1767)
8 Mohr v Williams, 104 NW 12 (Minn 1905)
9 Pratt v Davis, 79 NE 563 (Ill 1906)
10 Rolater v Strain, 137 P 96 (Okla 1913)
11 Schloendorff v Society of New York Hospital, 105 NE 92 (NY 1914)
12 Nuremberg Code. Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10, Vol 2. US Government Printing Office, Washington, DC, 1949: 181.
13 National Institutes of Health: World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. Office of Human Subjects Research Web site. Accessed April 1, 2011. http://ohsr.od.nih.gov/guidelines/helsinki.html
14 Beecher HK: Ethics and clinical research. N Engl J Med 1966; 274(24): 1354-1360.
15 In re Quinlan, 355 A2d 647 (NJ 1976).
16 Salgo v Leland Stanford Jr Univ Bd of Tr, 317 P2d 170 (Cal Ct App 1957).
17 Canterbury v Spence, 464 F2d 772-790 (DC Cir 1972).
18 National Institutes of Health: The Belmont report: ethical principles and guidelines for the protection of human subjects of research. Office of Human Subjects Research Web site. Accessed February 15, 2011. http://ohsr.od.nih.gov/guidelines/belmont.html
19 Protection of Human Subjects. 45 CFR §46 (March 18, 2011).
20 Beauchamp TL, Childress JF: Principles of Biomedical Ethics, 6th ed. Oxford University Press, New York, NY, 2009: 99.
21 Pellegrino ED, Thomasma DC: The Virtues in Medical Practice, Oxford University Press, 1993: 21.
22 Buchanan AE, Brock DW: Deciding for Others: The Ethics of Surrogate Decision Making, Cambridge University Press, New York, NY, 1990: 29.