Handbook of Research Ethics in Psychological Science

Copyright © 2021 by the American Psychological Association. All rights reserved. Except
as permitted under the United States Copyright Act of 1976, no part of this publication
may be reproduced or distributed in any form or by any means, including, but not limited
to, the process of scanning and digitization, or stored in a database or retrieval system,
without the prior written permission of the publisher.
Chapters 1, 4, 10, and 14 were coauthored by employees of the United States government
as part of official duty and are considered to be in the public domain.
The opinions and statements published are the responsibility of the authors, and such
opinions and statements do not necessarily represent the policies of the American
Psychological Association.
Published by
American Psychological Association
750 First Street, NE
Washington, DC 20002
https://www.apa.org
Order Department
https://www.apa.org/pubs/books
order@apa.org
In the U.K., Europe, Africa, and the Middle East, copies may be ordered from Eurospan
https://www.eurospanbookstore.com/apa
info@eurospangroup.com
Typeset in Meridien and Ortodoxa by Circle Graphics, Inc., Reisterstown, MD
Printer: Gasch Printing, Odenton, MD
Cover Designer: Anthony Paular Design, Newbury Park, CA
Library of Congress Cataloging-in-Publication Data
Names: Panicker, Sangeeta, editor. | Stanley, Barbara, 1949- editor.
Title: Handbook of research ethics in psychological science / edited by
Sangeeta Panicker and Barbara Stanley.
Description: Washington : American Psychological Association, 2021. |
Includes bibliographical references and index.
Identifiers: LCCN 2021006219 (print) | LCCN 2021006220 (ebook) |
ISBN 9781433836367 (paperback) | ISBN 9781433837302 (ebook)
Subjects: LCSH: Psychology—Research. | Research—Moral and ethical aspects.
Classification: LCC BF76.5 .H346 2021 (print) | LCC BF76.5 (ebook) |
DDC 150.72—dc23
LC record available at https://lccn.loc.gov/2021006219
LC ebook record available at https://lccn.loc.gov/2021006220
https://doi.org/10.1037/0000258-000
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
CONTENTS

Contributors vii
Preface ix
1. Framework for the Ethical Conduct of Research: The Ethical Principles of the Belmont Report 3
   Ivor A. Pritchard
2. Planning Research: Regulatory Compliance Considerations 23
   Sangeeta Panicker
3. Risk–Benefit Assessment and Privacy 35
   Camille Nebeker, Rebecca J. Bartlett Ellis, and Danielle Arigo
4. Informed Consent 55
   Barton W. Palmer
5. Addressing Conflicts of Interests in Behavioral Science Research 73
   Gerald P. Koocher and Konjit V. Page
6. Data Sharing 83
   Rick O. Gilmore, Melody Xu, and Karen E. Adolph
7. Understanding Research Misconduct 99
   Alison L. Antes and James M. DuBois
8. Ethics in Coercive Environments: Ensuring Voluntary Participation in Research 113
   Michael D. Mumford, Cory Higgs, and Yash Gujar
9. Ethical Challenges in International and Global Research 125
   B. R. Simon Rosser, Eleanor Maticka-Tyndale, and Michael W. Ross
10. The Ethics of Mental Health Intervention Research 139
    Andrew W. Jones, Matthew V. Rudorfer, and Galia D. Siegel
11. Ethical Issues in Neurobiological Research 163
    Young Cho and Barbara Stanley
12. Research Using the Internet and Mobile Technologies 177
    Timothy J. Trull, Ashley C. Helle, and Sarah A. Griffin
13. Research in and With Communities 191
    Mary Cwik
14. Legal and Ethical Requirements for Research With Minors 205
    Mary Ann McCabe and Maryland Pao
Appendix A: The Belmont Report 221
Appendix B: Title 45 Code of Federal Regulations Part 46, Subparts A–E 235
Appendix C: Title 34 Code of Federal Regulations Part 98 279
Appendix D: Title 34 Code of Federal Regulations Part 99 285
Appendix E: Additional Resources 323
Index 325
About the Editors 337
CONTRIBUTORS
Karen E. Adolph, PhD, New York University, New York, NY, United States
Alison L. Antes, PhD, Washington University School of Medicine, Bioethics
Research Center, St. Louis, MO, United States
Danielle Arigo, PhD, Rowan University, Glassboro, NJ, United States
Rebecca J. Bartlett Ellis, PhD, Indiana University, Indianapolis, IN,
United States
Young Cho, BA, New York State Psychiatric Institute, New York, NY,
United States
Mary Cwik, PhD, Johns Hopkins University School of Public Health,
Baltimore, MD, United States
James M. DuBois, DSc, PhD, Washington University School of Medicine,
Bioethics Research Center, St. Louis, MO, United States
Rick O. Gilmore, PhD, Pennsylvania State University, University Park, PA,
United States
Sarah A. Griffin, PhD, University of Missouri–Columbia, Columbia, MO,
United States
Yash Gujar, MS, The University of Oklahoma, Norman, OK, United States
Ashley C. Helle, PhD, University of Missouri–Columbia, Columbia, MO,
United States
Cory Higgs, MS, The University of Oklahoma, Norman, OK, United States
Andrew W. Jones, MEd, National Institute of Mental Health, Bethesda, MD,
United States
Gerald P. Koocher, PhD, Harvard Medical School, Boston, MA, United States
Eleanor Maticka-Tyndale, PhD, University of Windsor, Windsor, ON, Canada
Mary Ann McCabe, PhD, ABPP, Independent Practice, Falls Church, VA,
United States
Michael D. Mumford, PhD, The University of Oklahoma, Norman, OK,
United States
Camille Nebeker, EdD, University of California, San Diego, San Diego, CA,
United States
Konjit V. Page, PhD, Fielding Graduate University, Santa Barbara, CA,
United States
Barton W. Palmer, PhD, Veterans Affairs San Diego Healthcare System;
University of California, San Diego, San Diego, CA, United States
Sangeeta Panicker, PhD, American Psychological Association,
Washington, DC, United States
Maryland Pao, MD, FAACAP, National Institute of Mental Health,
Bethesda, MD, United States
Ivor A. Pritchard, PhD, Office for Human Research Protections, U.S.
Department of Health and Human Services, Rockville, MD, United States
Michael W. Ross, PhD, MPH, University of Minnesota, Minneapolis, MN,
United States
B. R. Simon Rosser, PhD, MPH, University of Minnesota, Minneapolis, MN,
United States
Matthew V. Rudorfer, MD, National Institute of Mental Health, Bethesda,
MD, United States
Galia D. Siegel, PhD, National Institute of Mental Health, Bethesda, MD,
United States
Barbara Stanley, PhD, Columbia University and New York State Psychiatric
Institute, New York, NY, United States
Timothy J. Trull, PhD, University of Missouri–Columbia, Columbia, MO,
United States
Melody Xu, BA, New York University, New York, NY, United States
PREFACE
The impetus for this volume is the rapidly evolving research landscape in
psychology. Gone are the days when research aimed at understanding behavioral, cognitive, and psychological processes and their underlying mechanisms
was conducted by individual researchers, siloed in departments of psychology
at universities across the nation. Although the traditional model of individual
researchers toiling away in their own laboratories, with their own students and
trainees, at a single university is still an important but much smaller part of the
discipline, there is an ever-increasing trend toward team science involving
multiple investigators from different institutions or across departments with
expertise in different disciplines, with data often collected at several research
sites. This shift has enabled researchers to explore more complex questions and
avail themselves of a diverse array of methodologies, techniques, and scientific
advances in related disciplines without having to become an expert in every
technique and method used in these multiple-investigator studies.
Rapid advancements in a wide range of technologies play a significant role
in the evolving research landscape. Although many of the familiar ethical
issues in research with human participants still and will always apply (e.g.,
informed consent), the increased use of digital technologies in research, combined with the ubiquity of these technologies in everyday life, has raised new
ethical challenges. Other developments in science have also contributed to
the changing landscape. With the public much more aware of and engaged
in research, especially publicly funded research, notions of open access, open
science, and citizen science not only have taken root but are trending toward
becoming the norm. Collectively, these changes have had a significant impact
on core research ethics concepts, such as informed consent, privacy, and risk
of harms, and have necessitated a reconsideration of the ethical framework
for the conduct of research with human participants. These changes have also
posed challenges to the prevailing system of regulatory oversight by federal
and local institutional entities.
The intent of this edited volume is to address ethical issues in human
research broadly by describing the array of familiar as well as emerging challenges confronting both new and seasoned researchers. Whereas the focus of
this book is on ethical principles of research with human participants, reference also is made to pertinent regulations and compliance requirements and
other resource documents, such as federal policies and professional codes
of conduct, in which the ethical principles are translated into policy and
regulation.
In the United States, the ethical framework for research with human participants was laid out by the National Commission for the Protection of Human
Subjects of Biomedical and Behavioral Research (1979) in its seminal document, The Belmont Report: Ethical Principles and Guidelines for the Protection of
Human Subjects of Research, known as the Belmont
Report. Subsequently, regulations and numerous discipline-specific codes of
conduct have been promulgated based on the three basic ethical principles of
respect for persons, beneficence, and justice first enunciated in the Belmont
Report. Chapter 1 provides the historical context for current regulations for
the protection of human participants in research and elucidates the connections between the ethical principles and their translation into regulatory
requirements for the conduct of research with human participants.
Chapter 2 provides an overview of the current system for oversight of
research with human participants in the United States. In addition to describing
regulations known as the Common Rule (Protection of Human Subjects, 2018)
for the protection of human research participants, which require oversight by
local institutional bodies such as institutional review boards (IRBs) or research
ethics boards, the chapter also summarizes other federal regulations and
policies that may be relevant to specific types of research, for example, research
that involves access to individually identifiable health information that is
subject to the Health Insurance Portability and Accountability Act of 1996,
known as HIPAA.
Chapter 3 describes factors that need to be taken into consideration when
systematically assessing risks of harm in research, the different types of
potential harms, and mechanisms for mitigating risks in different research
settings and contexts, especially in the digital age. This chapter also describes
the impact of the current trend toward increased open access and sharing of
research data combined with scientific (genetic) and technological advances
(digital technologies) on the core research ethics concepts of privacy and
confidentiality.
Based on the ethical principle of respect for persons, informed consent has
become the bedrock of research with human participants. Chapter 4 describes
the basics of informed consent as a process, including the three fundamental
characteristics of information, understanding, and voluntariness. The chapter
also elucidates the nature and meaning of “informed” consent when the
research involves incomplete disclosure or outright deception and in contexts
in which waiving or altering the process is scientifically or ethically justified. Mechanisms for obtaining valid informed consent from individuals who
have impairments in decision-making capacity are described.
A conflict of interest in the research setting occurs when an investigator
has competing interests that have the potential to influence the research.
Potential conflicts arise between the obligation to conduct research and
evaluate its outcomes according to ethical and scientific standards and the
desire for financial or other forms of personal gain. Conflicts of interest have
the potential to result in poor-quality design, distorted research findings,
and exposure of research participants to undue risk. Conflicts of interest
often involve possible financial gain to the researcher or the institution.
Chapter 5 discusses sources of potential conflicts of interest and ways to
mitigate the impact.
The move toward collaborative and multisite team science is challenging
traditional notions of who owns research data, and the increasing push for
open access to research data poses challenges to data security and participant
confidentiality. Chapter 6 describes the ethics underlying data sharing, reviews
challenges to sharing research data, and provides guidance on how research
data can be shared effectively and efficiently in advancing science.
Research misconduct is defined as fabrication, falsification, or plagiarism in
proposing, performing, or reviewing research or in reporting research results.
Chapter 7 describes the complex nature of research misconduct, discusses the
impacts of findings of misconduct both traditionally and in the age of team
science, highlights lessons that can be learned from reported cases of research
misconduct, and proposes preventive measures and provides guidance on
responding to potential misconduct.
When research is conducted in institutional settings such as educational
institutions, the military, organizations, and prisons, the fundamental ethical
challenge is ensuring the voluntariness of research participation. Chapter 8
discusses ways in which the issue of voluntariness can be addressed in potentially coercive environments.
Global research and research collaborations may present novel scientific
and ethical challenges. Chapter 9 summarizes issues that researchers need to
consider, including the relevance of the questions for research participants,
the meanings of psychological constructs across sociocultural contexts and
their implications for assessment of research risks, and the need to maintain
fair and equitable research partnerships, especially when collaborating with
researchers in resource-poor settings.
Intervention research poses special considerations and constraints and
often must meet additional regulatory requirements (e.g., trial registration,
data and safety monitoring boards). There is a broad range of intervention
research conducted by psychologists, from population-based interventions
to clinical trials with highly selected samples. Because manipulations (e.g.,
psychotherapies, psychoeducation) are being performed, investigators must
consider both the positive and the potentially negative impact of the manipulations. Chapter 10 addresses the ethical, scientific, and regulatory issues
in intervention research.
Chapter 11 covers unique ethical issues that arise when traditional psychological, behavioral, and cognitive research studies incorporate neurobiological
components. Examples are studies examining brain mechanisms using biomedical technologies such as functional magnetic resonance imaging and
neuroscience studies exploring brain–behavior interactions.
The use of digital technologies has freed researchers from the constraints
of time and physical location, has increased accessibility to larger participant
pools, and has allowed for real-time data collection. However, the use of these
technologies to collect data raises new concerns regarding the veracity and
validity of the data, assurance of valid consent, and data security and information protection to safeguard the privacy of participants and the confidentiality of their data. Chapter 12 describes ways to effectively apply the Belmont
Report principles to research using these new technologies and new media;
ethical, regulatory, and scientific issues in ensuring the veracity of research
participants and the data they contribute; and unique challenges in ensuring
that the rights and welfare of participants in such research are protected.
Chapter 13 covers unique challenges posed in research in and with communities (e.g., American Indian and Alaska Native communities, immigrant
communities). The author describes issues that arise in community-based
participatory research, navigation of community–academic partnerships, and
the role of community advisory boards and provides guidance on dealing with
ethical issues that arise in such research.
Chapter 14 addresses issues involved in conducting research with children and adolescents. Such issues include obtaining parental permission
and minors’ assent, the need to renew consent in longitudinal studies with
individuals who are minors at the start of the study, research with emancipated minors and mature minors, and protections against undue influence
and coercion.
Often discussions about the ethics of research with human participants
revolve around the systems and processes for obtaining approval to conduct
research from one’s IRB. IRBs are entities charged with ensuring compliance
with prevailing regulations, regulations that are indeed grounded in three
basic ethical principles. However, obtaining IRB approval is not synonymous
with addressing the nuanced ethical challenges that arise in research with
human participants. Our primary goal in putting together this volume was
not to provide a primer on navigating the regulatory compliance process, but
rather to raise awareness of both traditional and emerging ethical issues and
to shed light on ways to address these issues within the current regulatory
framework.
REFERENCES
Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104–191,
110 Stat. 1936 (1996). https://www.govinfo.gov/content/pkg/PLAW-104publ191/pdf/
PLAW-104publ191.pdf
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. https://www.hhs.gov/ohrp/regulations-and-policy/
belmont-report/read-the-belmont-report/index.html
Protection of Human Subjects, 45 C.F.R. Part 46 (2018). https://www.hhs.gov/ohrp/
sites/default/files/ohrp/policy/ohrpregulations.pdf
1
Framework for the Ethical
Conduct of Research
The Ethical Principles of the Belmont Report
Ivor A. Pritchard
Psychological research studies in the United States are routinely reviewed
and approved in accordance with the federal regulations for the protection of human subjects in research. There are three ethical principles that
were deliberately embedded into those regulations when they were written,
and those principles serve as the predominant basis for the ethical evaluation
of research in the United States. This chapter provides some historical background for the identification of these principles in an influential document
called the “Belmont Report” (National Commission for the Protection of
Human Subjects of Biomedical and Behavioral Research [hereinafter referred
to as “the National Commission”], 1979). The chapter shows how the Belmont
Report’s principles are reflected in the federal regulations and discusses ethical
complexities of the use of those principles and some developments that have
arisen since the time the regulations were originally crafted. The subsequent
revisions of the regulations and the evolution of ethical perspectives on research
involving human subjects—also known as “participants”—continue to reflect
the basic ethical orientation provided by the Belmont Report, although other
concerns have also received greater attention in subsequent years.
This chapter was authored by an employee of the United States government as part of
official duty and is considered to be in the public domain. Any views expressed herein
do not necessarily represent the views of the United States government, and the
author’s participation in the work is not meant to serve as an official endorsement.
https://doi.org/10.1037/0000258-001
Handbook of Research Ethics in Psychological Science, S. Panicker and B. Stanley (Editors)
In the public domain.
“TUSKEGEE SYPHILIS STUDY,” KENNEDY HEARINGS, AND
NATIONAL RESEARCH ACT
On July 26, 1972, The New York Times published a story by a reporter named
Jean Heller about a research study, supported by the U.S. Public Health Service
(PHS), of the effects of syphilis in untreated men, which has come to be called
the “Tuskegee Syphilis Study” (Heller, 1972). As Heller reported, in 1932 the
PHS had initiated a study of the longitudinal effects of syphilis in untreated
men. The subjects in the study were poor Black men with little formal education from the area around Tuskegee, Alabama. The subjects were misinformed
about the nature of their disease and misled about the procedures of the
research study, which included physical examinations, blood draws, and lumbar
punctures. When penicillin was found to be an effective form of treatment, the
subjects were not offered treatment, and the researchers took active steps
to prevent subjects from obtaining a true diagnosis of their disease to keep
them from getting treatment that would make them unsuitable for further
participation. Although reports of the study had been published in the scientific
literature, Heller’s New York Times article brought the study to public attention,
and the public reaction led to the termination of the study.
The revelation of the syphilis study occurred at a time when concerns were
being raised about the research enterprise in a number of different scientific
areas. In 1973, led by Senator Edward Kennedy (D-MA), the Subcommittee on
Health of the U.S. Senate Committee on Labor and Public Welfare (1973) held
a series of hearings on the quality of health care and human experimentation.
The congressional hearings devoted substantial attention to the syphilis study,
but the subcommittee also heard testimony regarding ethically questionable features of a wide
variety of other practices in health care and human experimentation. Health
care topics included the unapproved use of drugs for contraception in disadvantaged populations, involuntary sterilization of disadvantaged African
American women, and the use of psychosurgery to control behavior. In human
experimentation, testimony (including by B. F. Skinner) and submitted communications addressed the topics of experimentation to control behavior and
research on the relationship between intelligence and race and the policy
implications of such research. In response to these hearings, Congress passed the
National Research Act of 1974, which was signed into law on July 12, 1974.
The National Research Act created the National Commission and directed
the secretary of the U.S. Department of Health, Education and Welfare (HEW) to
establish regulations for the protection of human subjects in research. The
National Research Act charged the National Commission to develop ethical
principles for research: “The Commission shall . . . conduct a comprehensive investigation and study to identify the basic ethical principles which
should underlie the conduct of biomedical and behavioral research involving
human subjects.” More specifically, the act directed the National Commission
to consider
(i) The boundaries between biomedical or behavioral research involving human
subjects and the accepted and routine practice of medicine.
(ii) The role of assessment of risk–benefit criteria in the determination of the
appropriateness of research involving human subjects.
(iii) Appropriate guidelines for the selection of human subjects for participation
in biomedical and behavioral research.
(iv) The nature and definition of informed consent in various research settings.
(Title II, Part A, § 202(a)(1)(B))
In addition to the charge of identifying the ethical principles that should
underlie research and making general recommendations about the oversight
of research involving human subjects in biomedical and behavioral research,
the National Research Act also charged the National Commission with making
recommendations about research involving several vulnerable populations of
research subjects. The act charged the National Commission to examine the
nature of informed consent for research involving children, prisoners, and
the “institutionalized mentally infirm,” from such persons or their legal representatives. It charged the National Commission to assess the ethics of the
use of psychosurgery, irrespective of whether it was being used in research or
clinical practice. The act’s charges to examine research involving the “institutionalized mentally infirm” and the ethics of psychosurgery were the most
directly relevant to research in psychology.
The National Commission (1978a, 1978b) obtained and published numerous
papers on topics related to its charge. These included manuscripts about
the uses of deception in psychological research (Baumrind, 1978; Berkowitz,
1978), which included discussions of Milgram’s (1974) studies of obedience
and Zimbardo’s (1973; Zimbardo et al., 1973) Stanford prison experiment.
THE BELMONT REPORT
The content of the Belmont Report (National Commission, 1979) responded
directly to the National Research Act’s charges; it discussed the distinction
between research and clinical practice and presented three ethical principles
and their application in the context of research: The principle of beneficence
addresses the role of assessment of risk–benefit criteria, the principle of justice
addresses the selection of human subjects, and the principle of respect for
persons addresses informed consent.
The Belmont Report distinguished between biomedical and behavioral
research and the practice of therapy by means of the design of the activity: Practice is generally designed solely to improve the well-being of the individual
patient or client, whereas research is designed to contribute to generalizable
knowledge. Some research may also serve a practical purpose, as in a study evaluating the effects of a therapy; however, when both elements are included, the
report recommends that the activity be reviewed as research. If a form of therapy
is innovative or “experimental,” it is not necessarily considered research; it would
have to be embedded in an activity designed to evaluate the general effectiveness or safety of the intervention in order to be considered research.
The primary point here is that an objective other than the individual research
participant’s personal interests shapes the research activity; the individual’s
participation is the means to achieve a larger purpose, which might not coincide
with the individual’s own purposes in participating. It is this circumstance
that triggers the specific importance of considering whether the research
participant is being treated ethically and how to treat them ethically in the
research context.
Respect for Persons
The first principle discussed in the Belmont Report is respect for persons,
which is primarily construed in terms of the concept of autonomy. Autonomy
is the ability people have to deliberate about their lives and act based on their
deliberation and choice. Respect for persons values that autonomy, affirming
that people should have the freedom to make decisions for themselves, so
long as these decisions do not interfere with the autonomy of others in some
illegitimate way. The Belmont Report also acknowledged that at times people
have limited autonomy—for example, young children and people who are
incapacitated by serious illness or disability—and declared that there is an
obligation to respect and protect people whose autonomy is limited.
In the context of research, respect for persons is applied first and foremost through the individual’s informed consent to participate in research.
The Belmont Report identified three elements of informed consent: information, comprehension, and voluntariness. The prospective subject should
be provided with the information that a reasonable volunteer would need
to decide whether they are willing to assist in the advancement of science,
even if it may not be in their personal self-interest to do so. Informed consent also requires comprehension, the individual’s ability to absorb and process the information and consider it in light of their own life. And the
informed consent must be voluntary; that is, the individual must be free to
choose to consent or decline participation, without being coerced or unduly
influenced to agree.
The Belmont Report acknowledged that circumstances may fall short of full
satisfaction of the three elements. Of particular interest to some psychology
research, the report noted that full disclosure of the research study to the subject may impair its scientific validity. For these studies, the report claimed that
incomplete disclosure is justified only if it is truly necessary to achieve the
study’s objectives, if the risks are minimal, and if subjects will be debriefed
and receive the results of the study, if appropriate. When comprehension is
limited, the wishes of the subjects should be honored to the degree that they
are capable of understanding their options, and third parties who will act
in the subjects’ best interest are also included to protect them. Voluntariness
should not be inhibited by threats of harm, offers of excessive rewards, or
other undue influences such as having someone in a position of authority urge
the individual to participate.
Beneficence
Beneficence is the second principle identified in the Belmont Report. The
report initially referred both to the ancient Greek Hippocratic maxim “do no
harm” and to the need to “maximize possible benefits and minimize possible
harms” as ways to express the principle of beneficence, but then set aside the
former in the context of research, recognizing that the nature of research is
such that it is unrealistic to expect that all harms can be avoided. Beneficence
should be pursued both at the level of the actual research investigation, in
terms of what possible benefits and reduction of risks may be achieved for those
participating directly, and more broadly in terms of the contributions of the
research to the development of knowledge and the applications of that
knowledge to improve the social welfare.
Applied to the research context, the principle of beneficence is viewed in
the Belmont Report in terms of the obligation to consider all the possible benefits and harms that could result from the research, including those that are
psychological, physical, legal, social, and economic. Although special weight
should be given to the positive and negative impact on the subjects themselves, the impact on the welfare of the society at large should also be taken
into consideration. The report acknowledged that balancing the various risks
and possible benefits can be difficult but maintained that this difficulty does
not preclude a thorough and systematic evaluation of them. A determination
of the validity of the scientific presuppositions of the research should be made,
and evidence pertaining to the probability and magnitude of the possible
harms and benefits should be used in these assessments, when available.
The Belmont Report also declared that brutal or inhumane treatment of
subjects is never justified; that although risks cannot be eliminated, they can be
minimized by consideration of alternative procedures; and that significant risks
call for special justification. In addition, the report stated that the inclusion of
vulnerable populations deserves special scrutiny and that risks and possible
benefits should be clearly identified in the informed consent process.
Justice
Finally, the Belmont Report included the principle of justice. Justice is characterized in terms of the distribution of research burdens and benefits, with
the understanding that research studies place burdens on the subjects and
that the research benefits may be distributed in various ways. Although
equality is the most obvious way of thinking about such distributions, the
report acknowledged that there are other formulations of ostensibly fair distributions besides simple equality, namely, according to individual need, individual effort, societal contribution, and merit. The report went on to note that
historically certain classes have been singled out unfairly to bear the burdens
of research, specifically poor hospitalized ward patients, prisoners in Nazi
concentration camps, and the disadvantaged rural Black men in the PHS
Syphilis Study. The report noted that it is against the principle of justice to
impose the burdens of research on classes of individuals simply because they
are easier to recruit, and not because the research is directed at solving a
problem that is specific to those particular populations.
In the context of research, the Belmont Report asserted that there should
be fairness in the procedures and outcomes of the selection of research subjects
at both the individual and social levels. Research holding out the prospect of
a special benefit should not be reserved only for favored individuals, nor
should risky research enroll only “undesirable” persons. More generally,
social class and racial, gender, or cultural biases should not be allowed to
unbalance the fair distribution of burdens and potential benefits of research.
The report was generally critical of research studies enlisting already disadvantaged populations on the basis of their comparative availability and low
social standing when those populations run the risks of harm and more
advantaged populations will reap the eventual benefits.
It is no accident that the three principles identified by the National Commission were all violated by the PHS Syphilis Study; informed consent did not
occur, the subjects suffered unnecessarily, and the burden of syphilis was not
unique to the population of poor, undereducated African Americans. No
doubt the National Commission was attentive to explaining why that particular study was unethical. At the same time, however, the materials of the
Kennedy hearings and the National Commission’s own work clearly demonstrated that the commission was intent on providing a broader framework for
the ethical assessment of biomedical and behavioral research involving
human subjects.
Origins of the Three Principles
The National Commission did not invent the three principles out of nothing.
Each of them is drawn from a different ethical tradition within the history of
Western philosophy, and the meaning of each of the principles should be
understood in the context of the ethical traditions from which they are drawn.
The three traditions differ significantly in how ethical evaluations are made,
which influences how those principles relate to one another for the overall
judgment of research studies.
The principle of respect for persons is clearly derived from the deontological tradition of ethical thinking, most prominently represented by the
18th-century German philosopher Immanuel Kant (1785/1981). In this view,
people deserve a special moral status because they are rational beings who are
capable of freely choosing their actions, using their reasoning capacity to
decide what to do. The exercise of rational choice is what makes the person a
moral being, and because other people are also rational agents, each person’s
actions should respect the rational agency of everyone else as well. This
means that each person should treat themselves, and everyone else, as being
morally valuable for their own sake, and therefore no one should ever be
treated merely as a means to someone else’s goals. Ethical evaluation is a
matter of the quality of the intention of the agent, who must recognize the
equally important free rational decision-making capacity of any other person
who is affected. This is how respect for persons comes to value the autonomy
of the participant over anything else.
The principle of beneficence comes from utilitarianism, a mostly British philosophical tradition largely developed through the 18th and 19th centuries,
whose most well-known proponents were Jeremy Bentham (1789/1948) and
John Stuart Mill (1861/1979). The first principle of utilitarianism is that
actions are right to the degree that they lead to the greatest happiness of
the greatest number of those affected. This is the principle of beneficence—
essentially, the idea that what is ethically best is what produces the most good
and the least harm. For utilitarianism, it is the results, or consequences, that
matter. Utilitarianism motivates the systematic assessment of risks of harm
and potential benefits.
Justice as an ethical ideal goes back at least as far as ancient Greek philosophy in the fourth century BCE. Aristotle (340 B.C.E./1962), for example, provided an analysis of justice as giving people what they deserve in the distribution
of goods, privileges, and responsibilities. He affirmed the idea that justice is
related to equality, insofar as people who are equals deserve equal treatment,
but he also recognized that there are differences that may be morally significant, such that, for example, people of different merit deserve different treatment. Aristotle’s perspective on ethics placed a strong emphasis on the exercise
of moral virtues, of which justice is an important one, but justice also plays a
role at the political level in terms of the fair distribution of power and responsibilities, set in the context of a community with a shared vision of valued social
practices. Justice calls for the fair distribution of social burdens and benefits.
The different ethical traditions of the three principles explain in part why
the three principles can be at odds with each other in the evaluation of
research activities. The value placed on autonomy in the deontological view
prohibits using one person solely as a means to the ends of another, regardless
of the significance of the outcome. Utilitarianism, on the other hand, allows
for one person’s interest and autonomy to be subordinated to that of another,
so long as the consequences in total are the best. In this sense, the utilitarian
ideal of beneficence is more naturally suited to addressing the basic predicament of human research, in which the participant essentially does serve as
a means to an ulterior end of scientific discovery. If human research will promote the good of all, beneficence favors it. Respect for persons is more neutral
with respect to the merit of research, insofar as scientific progress is not automatically imperative but rather proceeds only if people are willing to exercise
their autonomy in choosing to allow themselves to be used for scientific purposes. People are free to choose or not choose to participate in research, and
only if they do consent to participate should the research go forward. Distributive justice likewise imposes a condition on how science advances: One
group, especially an already disadvantaged group, must not be made to bear
the burdens of research risks while another group is allowed to enjoy the
benefits of scientific progress, even if the former group is a small minority and
the latter an overwhelmingly large majority.
People who are more sympathetic to one ethical tradition rather than
another may thus be more inclined to give one of the three principles more
weight than the others in assessing a given research study. The three ethical
principles do not serve to eliminate controversy entirely from the ethical
assessment of research.
FEDERAL REGULATIONS
The National Research Act called for the secretary of HEW to issue regulations
for the protection of human subjects in research, and the secretary did not
wait for the National Commission to deliver its recommendations before
responding; on October 9, 1973 (HEW, 1973), and May 30, 1974 (HEW, 1974),
HEW issued notices creating the first version of the regulations at 45 C.F.R.
Part 46. The regulations largely followed the content of a 1966 National Institutes of Health (NIH) policy that applied to externally funded research grants
(U.S. Surgeon General, 1966). The 1966 policy and the first regulations
included provisions for obtaining the informed consent of the subjects and
required an assessment of the risks to subjects and the importance of the
research by a reviewing committee before the research could begin. The policy
also clearly included behavioral and psychological research in its scope.
When the Belmont Report was issued in 1979, the regulations already
contained requirements consistent with the report’s principles of respect for
persons and beneficence. The first version of the regulations did not, however, include an explicit requirement pertaining to the application of the principle of justice. In addition to the Belmont Report, the National Commission
(1978c) produced a report with recommendations about various aspects of
the reviewing committees, called institutional review boards (IRBs), and in
1979 HEW proposed to revise the regulations on the basis of a National Commission recommendation to add a requirement about the equitable selection
of subjects, corresponding to the principle of justice.
HEW also proposed to revise the regulations in other ways in response to
the National Commission’s recommendations. Those proposals, and the
ensuing public comment, began a process of deliberation and debate that led
to publication of the final regulations by the federal agency that by then (in
1979) had become the U.S. Department of Health and Human Services
(HHS); the regulations were published in 1981 (HHS, 1981) and amended in
1983 (HHS, 1983b). A controversy emerged about the appropriate scope of the
regulation. Two issues emerged: (a) whether research that was not supported
or conducted by HHS should be required to follow the regulations and
(b) whether certain kinds of research—most notably social science research—
should be required to follow the regulations.
With respect to the funding source, some argued that the source of support
should not be relevant to the question of whether the research participants
deserved protection. Major themes about whether social science research
should be excluded from regulations involved (a) the claim that most if not
all social science research involved little or no risk to subjects, (b) charges that
the regulations represented an infringement on constitutionally protected
free speech and academic freedom, and (c) complaints about delays and the
administrative costs and resources that would be wasted overseeing such
research for no good purpose. Others argued that some social science research
poses psychological, economic, or legal risks or threats to individuals’ privacy
and that participants deserve to have their rights respected, in particular the
right to refuse consent, even in research posing no risks. The revised final
regulations created exemptions for a portion of social, behavioral, and education research, but certainly not all of it, and also included provisions for the
expedited review of certain kinds of minimal risk studies (HHS, 1983b).
By 1981, the regulations included provisions that reflect all of the Belmont
Report’s three ethical principles. The most prominent of these provisions are
outlined in the sections that follow.
Regarding Respect for Persons
The regulations include a general requirement to obtain informed consent for
the approval of a research study (Protection of Human Subjects, 2018):
Informed Consent will be sought from each prospective subject or the subject’s
legally authorized representative, in accordance with, and to the extent required
by, § 46.116. (45 C.F.R. § 46.111(a)(4))
An investigator shall seek informed consent only under circumstances that
provide the prospective subject or the legally authorized representative sufficient opportunity to discuss and consider whether or not to participate and that
minimize the possibility of coercion or undue influence. (§ 46.116(a)(2))
Informed consent is not universally required, however. Waiver or alteration
of consent is permissible under the regulations if the risks are minimal, the
rights and welfare of subjects are not violated, and the research could not
practicably be conducted otherwise (§ 46.116(f)). The waiver provision
appears to acknowledge the kind of research studies frequently carried out
in psychology, as it also includes a requirement for debriefing the subjects as
appropriate.
Regarding Beneficence
The regulations include the following two requirements for the approval of a
research study (Protection of Human Subjects, 2018):
Risks to subjects are minimized:
(i) by using procedures that are consistent with sound research design and that
do not unnecessarily expose subjects to risk, and
(ii) whenever appropriate, by using procedures already being performed on the
subjects for diagnostic or treatment purposes. (45 C.F.R. § 46.111(a)(1))
Risks to subjects are reasonable in relation to anticipated benefits, if any, to
subjects, and the importance of the knowledge that may reasonably be expected
to result. In evaluating risks and benefits, the IRB should consider only those
risks and benefits that may result from the research (as distinguished from risks
and benefits of therapies subjects would receive even if not participating in the
research). The IRB should not consider possible long-range effects of applying
knowledge gained in the research (e.g., the possible effects of the research on
public policy) as among those research risks that fall within the purview of its
responsibility. (§ 46.111(a)(2))
It is important to note that the last sentence of the second provision applies
to risks but not to potential benefits. Certainly, research studies of risky
interventions for serious medical conditions are justified in part by their
long-range effects: If those interventions prove to be successful, many
future lives may be saved or improved. Long-range potential benefits should
be taken into account. What this provision is saying is that the IRB should not
take it upon itself to disapprove research studies out of a concern that the
findings of a research study will be used to inform a public policy that will be
harmful to society.
The National Commission’s (1978c) recommendation that this provision
be included was made in part in the context of a contemporary debate about
the policy implications of recent research analyses by Shockley (1972) and
Jensen (1969). Shockley argued that intelligence was determined largely by
genetic inheritance, that the intelligence of the Black population was significantly lower than that of the White population, and that the U.S. Black population was reproducing at a faster rate than the U.S. White population, which
could generate social problems unless effective policies could be introduced to
reduce the rate of reproduction of the Black population. Jensen produced
data to support the idea that intelligence correlated with race, which explained
the disparity in educational achievement of Black people compared to White
people, and argued that the educational system needed to be reformed so as
to better serve the different educational needs of Black and White students.
The National Commission believed that IRBs should not be the vehicle to
decide whether such socially controversial research studies should be disapproved on the basis of their potential policy implications, but rather that such
decisions should be made in some other more public forum.
Regarding Justice
The regulations include the following requirement for the approval of a
research study (HHS, 1981):
Selection of subjects is equitable. In making this assessment the IRB should take
into account the purposes of the research and the setting in which the research
will be conducted. (45 C.F.R. § 46.111(a)(3))
In 1991, this provision was revised to include the following added language
(Office of Science and Technology Policy, 1991):
and should be particularly cognizant of the special problems of research
involving vulnerable populations, such as children, prisoners, pregnant women,
mentally disabled persons, or economically or educationally disadvantaged
persons. (§ 46.111(a)(3))
The National Commission’s (1977) report on psychosurgery to control
behavior recommended a compromise stance. It did not recommend banning
such psychosurgery, but it recommended provisions to limit its use. Its report
on research involving persons institutionalized as mentally infirm provided a
discussion of the challenges of such research in terms of the tensions between
the Belmont Report’s three principles; for example, the report noted that
respect for persons could support an argument for allowing prospective subjects to decide for themselves whether or not to participate, whereas beneficence could be used to argue for a more paternalistic stance about whether
the prospective subjects should be allowed to decide for themselves (National
Commission, 1978d). Justice could be used to exclude them on the grounds
that they already bore a greater burden than the general population, rather
than permitting autonomous choices. The report did not conclude that any
of the three principles had a clear priority over the others. HEW (1978b)
proposed regulations addressing research involving this population, but the
regulations were never finalized. Regulatory subparts were eventually issued
regarding other populations, namely, pregnant women, fetuses, and neonates
(HHS, 2001); children (HHS, 1983a); and prisoners (HEW, 1978a).
Regulatory Evolution After the National Commission
The regulations were originally issued by HEW, which became HHS in 1979.
The National Commission was aware of other federal agencies that were also
carrying out research involving human subjects and recommended that the
same regulations be adopted by the other federal agencies. A working group of
the National Science and Technology Council was charged with accomplishing
this task, and on June 18, 1991, a revised version of the HHS regulations was
adopted by 15 federal agencies and departments, thereby establishing a
“Common Rule” for the protection of human subjects in research (Office
of Science and Technology Policy, 1991). The content of the regulations was
revised slightly, but the main achievement was the creation of a uniform
set of regulations across the federal government for the basic policy. There
was variation, however, in whether federal agencies adopted the subpart
protections for the specific populations, namely, research involving pregnant
women, fetuses, and neonates (Subpart B; HHS, 2001); research involving
prisoners (Subpart C; HEW, 1978a); and research involving children (Subpart D;
HHS, 1983a).
For the next 2 decades, the Common Rule went unchanged, and the
received wisdom in the regulated community was that the Common Rule
would never be revised. But in 2009, the federal government began a process
that led to the publication of a revised Common Rule on January 19, 2017
(HHS, 2017). Major changes in the revised Common Rule included changes
to the provisions regarding informed consent, the required use of a single
IRB for cooperative multisite research studies, the revision of the categories
of exempt research, and a reduction in the requirement for continuing
review of research. Of particular interest to psychological scientists was a new
exemption for research involving benign behavioral interventions (45 C.F.R.
§ 46.104(d)(3)), which was clearly directed toward some portion of the studies
academic psychology researchers frequently conduct with undergraduate
students drawn from research participant pools set up by the host postsecondary
institution.
The changes in the provisions regarding informed consent reflect more
attention to the principle of respect for persons because they buttress the
emphasis on enabling prospective participants to understand the reasons why
they might or might not want to participate in the contemplated research
study. The reduction of administrative burdens realized by the other major
changes is in keeping with the principle of beneficence, insofar as it serves
to facilitate the conduct of research hopefully leading to the generation of
knowledge to advance science and benefit society. The changes do not appear
to alter the requirements directly related to the principle of justice.
EMERGING ETHICAL ISSUES: THE MANY FACETS OF JUSTICE
Although the Belmont Report’s principles continue to be the primary influence on research ethics in the United States, there have been several important
developments over the past 4 decades that have shaped the evolution of the
ethical dimensions of research. At least four topics deserve notice. First, there
has been a fundamental shift in the way that the ethical principle of justice is
applied to many research studies. Second, greater attention has been devoted
to the idea of avoiding group harm. Third, the research enterprise has shifted
in the direction of a more democratic or community-driven model of selecting
and designing research studies. And fourth, in medicine, the idea of “learning
health care systems” has bolstered the view that people have an ethical obligation to participate in research. All of these topics could arguably be associated
with the idea of justice.
During the late 1980s and early 1990s, there was a shift in the way the
ethical principle of justice was sometimes applied as a distributive principle in
the evaluation of research studies. The research that is most commonly associated with this shift in perspective is the early studies on the treatment
of acquired immunodeficiency syndrome (AIDS) in people infected by the
human immunodeficiency virus (HIV). At the time, AIDS was having a significant impact on the health of the gay community in the United States.
According to the traditional interpretation of the principle of justice, researchers
should have been cautious about actively recruiting prospective research participants from the gay population for clinical trials because gay people were a
vulnerable population frequently discriminated against in the United States at
that time. In this view, the gay population would be unfairly exploited by having
them bear a large portion of the risks of participating in the early treatment
studies, when the eventual benefit of research findings would lead to the
effective treatment of people with AIDS who were more socially advantaged,
for example, people who had acquired HIV through blood transfusions.
But the gay community actively advocated for a different posture toward the
research: It pressed to be included in significant proportion in the treatment
studies (Fauci, 2012). The gay community espoused the view that the research
findings would have a greater likelihood of benefiting their community because
the findings would avoid the problem of whether there were any differences
between the study population and the gay population that might make the
findings not applicable to them. The principle of justice was turned upside
down; instead of representing the concern that a population was being
exploited by being included in the research, the dominant concern became one
of avoiding being excluded from research participation because research on
any given population is considered a potential benefit to that population. This
perspective has become widespread in the United States: Various group representatives, including representatives of vulnerable populations and patient
advocates, typically advocate for the inclusion of their population in relevant
research studies in the view that leaving them out means they could fail to get
their fair share of the benefits of research. This perspective is also embodied in
such policies as the NIH’s policies regarding the inclusion of women, minorities,
and children in research studies, as appropriate (NIH, 1993).
Second, in the 1990s and 2000s there was growing attention to the concept
of “group harm” and the question of whether research studies should also be
required to ensure that the studies do not lead to harms to identified groups
as a result of their members’ participation. One example of this concept that
received considerable attention in the research community was a series of
studies using biological samples from members of the Arizona Havasupai tribe
(Harmon, 2010). The Havasupai had agreed to provide samples to enable a
research study addressing the frequency of diabetes in the tribe, but later
secondary studies were proposed to examine questions related to mental disease in the tribe and to study their geographic ancestry. Once they learned of
these studies, the Havasupai objected that the studies were potentially stigmatizing and inconsistent with traditional Havasupai beliefs, and they opposed them. In the evaluation of this and other controversies, commentators argued that
the Common Rule focuses solely on the protection of individuals, not groups,
and some suggested that avoiding group harm should be formulated into a
principle to be added to the Belmont Report (Friesen et al., 2017). This position could also be viewed in terms of the idea of social justice, rather than
distributive justice, insofar as the ethical focus is on whether society shows
the proper respect for the rights and interests of particular groups in society,
especially those who have historically been the victims of some form of discrimination. Others have maintained that the Belmont Report’s discussion of
avoiding the exploitation of vulnerable populations is sufficient.
A third development in research ethics over the past few decades has been
the increasing role of research participant populations and communities
in the selection and design of research studies. The growing popularity of
community-based participatory action research, of citizen science, and of initiatives such as the Patient-Centered Outcomes Research Institute reflects this
trend. The ethical impulse underlying this movement could also be construed
in terms of the idea of social justice, of trying to address inequities in society
through research aimed at finding ways to ameliorate those inequities. It also
reflects a democratic ideal of the public’s having more of a direct say in what
research society is willing to support, rather than leaving those questions to
the peer review decisions of expert scientists.
Fourth, efforts to improve the ways research studies are integrated into
health care systems providing clinical care have led to an argument that
people have an ethical obligation to participate in research. The argument is
that people who make use of the health care system of today are benefiting
from improvements in the delivery of health care that have been generated
in part as a result of research involving human participants in the past; the
norm of reciprocity, of giving back in response to having been given a benefit,
implies that people now should also participate in research in appreciation of
what they have been given (Faden et al., 2013). This, too, could be construed
as a dimension of the principle of justice, of the fairness of receiving and giving
in return. It is not a new argument—the idea was discussed in papers written
for the National Commission (Engelhardt, 1978; Veatch, 1978)—but its appeal
appears to be growing.
TECHNOLOGICAL CHANGE AND PRIVACY
The research world has been significantly affected by technological changes
in the way information is generated, stored, transferred, and used, and this
evolution has drawn considerable attention to the evaluation of the right to
privacy. Computers, the cloud, the internet, social media, mobile devices, and
surveillance technologies have served to generate and preserve vast amounts
of information and have created an entire landscape in cyberspace where it
is often not clear whether the available information is public, private, or
something else. It should come as no surprise that technology is forcing us to reconsider the right to privacy, given that the first (dissenting) U.S. Supreme Court opinion to assert a constitutional right to privacy was prompted by a technological advancement, namely, the ability to tap the landline telephone conversations of a liquor bootlegging operation during Prohibition in the early 20th century (Brandeis, 1928).
Determining what constitutes a reasonable expectation of privacy in today's world has become much more complex. Because researchers frequently seek to obtain guarded
information that was generated for some other purpose, the question of whether
they can be allowed to gain access to that information without going back to
the original source for permission is an ongoing concern. Respect for persons
would seem to incline toward honoring individuals’ autonomy to decide who
may have access to their information and for what purpose; beneficence
would push in the opposite direction, of allowing access to the rich resources
of existing information without restrictions. And a common traditional compromise, namely the anonymization of data to protect privacy and still enable
data access, is becoming less and less feasible as technological advances enable
researchers to get better and better at reidentifying anonymized data.
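To make the reidentification concern concrete, the following minimal sketch is illustrative only: the data, column names, and the auxiliary roster are hypothetical, and the pandas library is assumed to be available. It shows how a data set stripped of names can be linked back to individuals by joining on quasi-identifiers such as ZIP code, birth date, and sex.

```python
# Illustrative linkage attack using quasi-identifiers (all data hypothetical).
import pandas as pd

# De-identified study records: names removed, quasi-identifiers retained.
study = pd.DataFrame({
    "zip": ["92093", "46202", "08028"],
    "birth_date": ["1985-03-14", "1990-07-02", "1978-11-23"],
    "sex": ["F", "M", "F"],
    "depression_score": [17, 9, 22],  # sensitive study variable
})

# Auxiliary source (e.g., a public roster) pairing names with the same
# quasi-identifiers.
roster = pd.DataFrame({
    "name": ["A. Rivera", "B. Chen", "C. Osei"],
    "zip": ["92093", "46202", "08028"],
    "birth_date": ["1985-03-14", "1990-07-02", "1978-11-23"],
    "sex": ["F", "M", "F"],
})

# A simple join on the shared quasi-identifiers reattaches names to the
# sensitive scores, defeating the anonymization.
reidentified = study.merge(roster, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "depression_score"]])
```

Suppressing or coarsening such quasi-identifiers reduces, but does not eliminate, this risk.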
ACADEMIC DEVELOPMENTS IN ETHICS AND MORALITY
Academic discourse about ethics has evolved since the publication of the
Belmont Report. In philosophical circles, there are still adherents to interpretations of the Kantian philosophy underlying the principle of respect for
persons and adherents of various interpretations of the utilitarian perspective
supporting the principle of beneficence. Another ethical perspective that has
drawn considerable attention in the past few decades goes by the name of
“virtue ethics,” in which the primary focus is on the cultivation of moral
virtues that enable people to more successfully pursue chosen social practices,
of which the practice of medicine is an example (MacIntyre, 1981). A philosophical approach to ethics centered on the idea of care in human relationships
has drawn significant attention as well (Held, 2018; Noddings, 1992). There
are also ethicists who are prepared to embrace a vision of ethics in which there
are plural moral ideals not derived from a single leading principle, for which
analyzing how those ideals can be reconciled in practice is always a work
in progress. With respect to the idea of justice, the period after the National
Commission was dominated by discussions of Rawls’s (1971) theory of justice,
in which he argued for two principles regarding the distribution of rights
and privileges in a just society. Simply put, first, rights and privileges should
be distributed equally; second, if specific rights or privileges are distributed
unequally, this is fair only if the results are better than an equal distribution
for those who are worst off. More recently, attention has shifted in the direction of analyses of the idea of social justice, of how societies have inherited and
continue to reflect unjust social practices and distributions of goods, rights, and
privileges and how those injustices should be rectified (Mills, 2018).
In moral psychology, for several decades the most prominent perspective was Kohlberg’s moral cognitive–developmental theory (Kohlberg, 1975;
Kohlberg & Hersh, 1977). Kohlberg argued that moral reasoning and judgment
progressed through an invariant series of stages of moral reasoning and that
the principle of justice was the ultimate and universal source of morality.
Kohlberg’s theory was normative, insofar as he claimed that each stage was
superior to the previous one in that it could resolve some moral predicaments
that were not resolvable using the kind of reasoning that was paradigmatic of
the earlier stage. Gilligan (1982) presented an alternative stage theory of
moral development that she claimed was more reflective of women’s moral
thinking, with care as the ultimate moral principle. (Gilligan’s work was also
a source of the philosophical interest in the ethics of care mentioned above.)
Kohlberg’s perspective has a greater affinity with the Belmont Report’s principle of respect for persons, whereas Gilligan’s would be more sympathetic to
the principle of beneficence.
Kohlberg’s claim to a universal normative theory was subsequently called
into question by cross-cultural research in moral judgments that appeared to
upend the hierarchy of the stages of moral reasoning (Shweder et al., 1990).
Following this development, research in social psychology that also relied on
evidence drawn from other academic disciplines led to moral foundations
theory. Moral foundations theory focuses on a limited set of innate moral
traits that are the product of human evolution and whose form develops
through experience (Graham et al., 2013): Care/harm, fairness/cheating,
loyalty/betrayal, authority/subversion, and sanctity/degradation are the first
five foundational elements of the theory, but its theorists acknowledge that
other elements may deserve addition. Unlike Kohlberg’s and Gilligan’s theories, moral foundations theory claims to be entirely descriptive, making no
argument about the ethical superiority of any particular constellation or hierarchy of foundations; the aim of the theory is to explain the commonalities,
differences, and patterns evident in morality within and across different cultures. The foundational elements of care/harm and fairness/cheating bear
some resemblance to the Belmont Report’s principles of beneficence and justice. The relationship of moral foundations theory to the principle of respect
for persons is less clear. What is clear is that moral foundations theory includes a broader array of moral appeals than the Belmont Report; because it claims to be purely descriptive, however, it is silent on the question of whether the foundations of the ethics of research should be expanded to include a wider array of normative principles.
CONCLUSION
Ethical and regulatory discussions and events have not called into question
the ethical principles of the Belmont Report, nor have those principles been
erased from the regulations. What has happened since then is that reinterpretations of those principles, as well as other ethical considerations, have made
the ethical dimensions of research more complicated. Whether this represents
progress in research ethics, or just greater confusion, is open to debate.
REFERENCES
Aristotle. (1962). Nicomachean ethics. Bobbs Merrill. (Original work published 340 B.C.E.)
Baumrind, D. (1978). Nature and definition of informed consent in research involving
deception. In National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, Appendix Vol. 2: The Belmont Report: Ethical principles
and guidelines for the protection of human subjects of research (HEW Publication No.
[OS] 78-0014, pp. 23-1–23-71).
Bentham, J. (1948). An introduction to the principles of morals and legislation. Hafner
Publishing. (Original work published 1789)
Berkowitz, L. (1978). Some complexities and uncertainties regarding the ethicality of
deception in research involving human subjects. In National Commission for the
Protection of Human Subjects of Biomedical and Behavioral Research, Appendix Vol. 2:
The Belmont Report: Ethical principles and guidelines for the protection of human subjects of
research (HEW Publication No. [OS] 78-0014, pp. 24-1–24-34).
Brandeis, L. (1928). Olmstead v. United States, Dissenting Opinion, 277 U.S. 438.
Engelhardt, H. T. (1978). Basic ethical principles in the conduct of biomedical and
behavioral research involving human subjects. In National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, Appendix Vol. 1:
The Belmont Report: Ethical principles and guidelines for the protection of human subjects of
research (HEW Publication No. [OS] 78-0013, pp. 8-1–8-45).
Faden, R. R., Kass, N. E., Goodman, S. N., Pronovost, P., Tunis, S., & Beauchamp, T. L.
(2013). An ethics framework for a learning health care system: A departure from
traditional research ethics and clinical ethics. The Hastings Center Report, 43(Suppl. 1),
S16–S27. https://doi.org/10.1002/hast.134
Fauci, A. S. (2012). Preface: Evolving ethical issues over the course of the AIDS pandemic. Public Health Reviews, 34, Article 2. https://doi.org/10.1007/BF03391654
Friesen, P., Kearns, L., Redman, B., & Caplan, A. L. (2017). Rethinking the Belmont
Report? The American Journal of Bioethics, 17(7), 15–21. https://doi.org/10.1080/
15265161.2017.1329482
Gilligan, C. (1982). In a different voice: Psychological theory and women’s development.
Harvard University Press.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013).
Moral foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental Social Psychology, 47, 55–130. https://doi.org/10.1016/B978-0-12-407236-7.
00002-4
Harmon, A. (2010). Indian tribe wins fight to limit research of its DNA. The New York
Times. https://www.nytimes.com/2010/04/22/us/22dna.html
Held, V. (2018). Philosophy, feminism, and care. Proceedings and Addresses: The American
Philosophical Association, 92, 133–157.
Heller, J. (1972, July 26). Syphilis victims in U.S. study went untreated for 40 years.
The New York Times. https://www.nytimes.com/1972/07/26/archives/syphilis-victims-in-us-study-went-untreated-for-40-years-syphilis.html
Jensen, A. R. (1969). How much can we boost IQ and scholastic achievement? Harvard
Educational Review, 39(1), 1–123. https://doi.org/10.17763/haer.39.1.l3u15956627424k7
Kant, I. (1981). Grounding for the metaphysics of morals. Hackett Publishing. (Original
work published 1785)
Kohlberg, L. (1975). The cognitive–developmental approach to moral education. Phi
Delta Kappan, 56(10), 670–677.
Kohlberg, L., & Hersh, R. H. (1977). Moral development: A review of the theory.
Theory Into Practice, 16(2), 53–59. https://doi.org/10.1080/00405847709542675
MacIntyre, A. (1981). After virtue: A study in moral theory. University of Notre Dame Press.
Milgram, S. (1974). Obedience to authority: An experimental view. Harper and Row.
Mill, J. S. (1979). Utilitarianism. Hackett Publishing. (Original work published 1861)
Mills, C. W. (2018). Through a glass, whitely: Ideal theory as epistemic injustice. Proceedings and Addresses: The American Philosophical Association, 92, 43–77.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1977, May 23). Protection of human subjects: Use of psychosurgery
in practice and research: Report and recommendations of National Commission for
the Protection of Human Subjects of Biomedical and Behavioral Research. 42 F.R.
26318–26332. https://wayback.archive-it.org/all/20160202182959/http://archive.hhs.
gov/ohrp/documents/19770523.pdf
National Commission for the Protection of Human Subjects of Biomedical and Behavioral
Research. (1978a). Appendix Vol. 1: The Belmont Report: Ethical principles and guidelines
for the protection of human subjects of research (DHEW Publication No. [OS] 78-0013).
https://repository.library.georgetown.edu/bitstream/handle/10822/779133/ohrp_
appendix_belmont_report_vol_1.pdf#page=1
National Commission for the Protection of Human Subjects of Biomedical and Behavioral
Research. (1978b). Appendix Vol. 2: The Belmont Report: Ethical principles and guidelines
for the protection of human subjects of research (DHEW Publication No. [OS] 78-0014).
https://repository.library.georgetown.edu/bitstream/handle/10822/779133/ohrp_
appendix_belmont_report_vol_2.pdf#page=1
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1978c, November 30). Protection of human subjects: Institutional
review boards; Report and recommendations of the National Commission for the
Protection of Human Subjects of Biomedical and Behavioral Research. 43 F.R.
56175–56198. https://wayback.archive-it.org/all/20160202182925/http://archive.hhs.
gov/ohrp/documents/19781130.pdf
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1978d, March 17). Protection of human subjects: Research involving
those institutionalized as mentally infirm: Report and recommendations of
the National Commission for the Protection of Human Subjects of Biomedical
and Behavioral Research. 43 F.R. 11327–11358. https://wayback.archive-it.org/all/
20160202182950/http://archive.hhs.gov/ohrp/documents/19780317.pdf
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. https://www.hhs.gov/ohrp/sites/default/files/
the-belmont-report-508c_FINAL.pdf
National Institutes of Health. (1993). Inclusion of women and minorities as participants in
research involving human subjects. https://grants.nih.gov/policy/inclusion/women-and-minorities.htm
National Research Act of 1974, Pub. L. No. 93–348, 88 Stat. 342 (1974).
Noddings, N. (1992). The challenge to care in schools: An alternative approach to education.
Teachers College Press.
Office of Science and Technology Policy, Executive Office of the President. (1991,
June 18). Federal policy for the protection of human subjects; Notices and rules.
56 F.R. 28002–28032. https://wayback.archive-it.org/all/20160202182843/http://
archive.hhs.gov/ohrp/documents/19910618b.pdf
Protection of Human Subjects, 45 C.F.R. Part 46 (2018). https://www.hhs.gov/ohrp/
sites/default/files/ohrp/policy/ohrpregulations.pdf
Rawls, J. (1971). A theory of justice. Harvard University Press.
Shockley, W. (1972). Dysgenics, geneticity, raceology: A challenge to the intellectual
responsibility of educators. Phi Delta Kappan, 53(5), 297–307.
Shweder, R. A., Mahapatra, M., & Miller, J. G. (1990). Culture and moral development.
In J. Kagan & S. Lamb (Eds.), The emergence of morality in young children (pp. 1–83).
University of Chicago Press.
U.S. Department of Health and Human Services. (1981, January 26). 45 C.F.R. Part 46:
Final regulations amending basic HHS policy for the protection of human research
subjects. 46 F.R. 8366–8391. https://wayback.archive-it.org/all/20160202182914/
http://archive.hhs.gov/ohrp/documents/19810126.pdf
U.S. Department of Health and Human Services. (1983a, March 8). 45 C.F.R. Part 46:
Additional protections for children involved as subjects in research. 48 F.R. 9814–9820.
https://wayback.archive-it.org/all/20160202182857/http://archive.hhs.gov/ohrp/
documents/19830308.pdf
U.S. Department of Health and Human Services. (1983b, March 4). 45 C.F.R. Part 46:
Exemption of certain research and demonstration projects from regulations for protection of human research subjects. 48 F.R. 9266–9270. https://wayback.archive-it.
org/all/20160202182858/http://archive.hhs.gov/ohrp/documents/19830304.pdf
U.S. Department of Health and Human Services. (2001, November 13). 45 C.F.R. Part 46:
Protection of human research subjects. 66 F.R. 56775–56780. https://www.govinfo.
gov/content/pkg/FR-2001-11-13/pdf/01-28440.pdf
U.S. Department of Health and Human Services. (2017, January 19). 45 C.F.R. Part 46:
Federal policy for the protection of human subjects. 82 F.R. 7149–7274. https://
www.govinfo.gov/content/pkg/FR-2017-01-19/pdf/2017-01058.pdf
U.S. Department of Health, Education, and Welfare. (1973, October 9). 45 C.F.R.
Part 46: Protection of human subjects: Proposed policy. 38 F.R. 27882–27885.
https://wayback.archive-it.org/all/20160202183022/http://archive.hhs.gov/ohrp/
documents/19731009.pdf
U.S. Department of Health, Education, and Welfare. (1974, May 30). 45 C.F.R. Part 46:
Protection of human subjects. 39 F.R. 18914–18920. https://wayback.archive-it.
org/all/20160202183017/http://archive.hhs.gov/ohrp/documents/19740530.pdf
U.S. Department of Health, Education, and Welfare. (1978a, November 16). 45 C.F.R.
Part 46: Protection of human subjects: Additional protections pertaining to biomedical and behavioral research involving prisoners as subjects. 43 F.R. 53655–
53656. https://wayback.archive-it.org/all/20160202182931/http://archive.hhs.gov/
ohrp/documents/19781116.pdf
U.S. Department of Health, Education, and Welfare. (1978b, November 17). 45 C.F.R.
Part 46: Protection of human subjects: Proposed regulations on research involving those institutionalized as mentally disabled. 43 F.R. 53950–53956. https://
wayback.archive-it.org/all/20160202182929/http://archive.hhs.gov/ohrp/
documents/19781117.pdf
U.S. Department of Health, Education, and Welfare. (1979, August 14). 45 C.F.R. Part 46:
Proposed regulations amending basic HEW policy for protection of human research
subjects. 44 F.R. 47688–47698. https://wayback.archive-it.org/all/20160202182918/
http://archive.hhs.gov/ohrp/documents/19790814.pdf
U.S. Senate Committee on Labor and Public Welfare. (1973). Quality of health care—
Human experimentation: Hearings before the Subcommittee on Health of the
Committee on Labor and Public Welfare on S. 974, S. 878, S.J. Res. 71, Parts I–IV.
U.S. Surgeon General. (1966, February 8). Memo to heads of institutions conducting research
with public health service grants.
Veatch, R. (1978). Three theories of informed consent: Philosophical foundations and
policy implications. In National Commission for the Protection of Human Subjects
of Biomedical and Behavioral Research, Appendix Vol. 2: The Belmont Report: Ethical
principles and guidelines for the protection of human subjects of research (DHEW Publication No. [OS] 78-0014, pp. 26-1–26-66).
Zimbardo, P. G. (1973). On the ethics of intervention in human psychological research:
With special reference to the Stanford prison experiment. Cognition, 2(2), 243–256.
https://doi.org/10.1016/0010-0277(72)90014-5
Zimbardo, P. G., Banks, W. C., Haney, C., & Jaffe, D. (1973, April 8). The mind is a
formidable jailer: A Pirandellian prison. The New York Times Magazine. https://www.
nytimes.com/1973/04/08/archives/a-pirandellian-prison-the-mind-is-a-formidable-jailer.html
2
Planning Research
Regulatory Compliance Considerations
Sangeeta Panicker
Ethical considerations are a critical component of a research study from its
very conception. Ethical considerations drive which research question or
questions are pursued. Not every interesting and intellectually stimulating
question lends itself to study in an ethical manner. Considerations include
respecting individual autonomy and right to self-determination, gauging the
risks of harm in participating in the study vis-à-vis the potential benefits to
participants and/or their communities, and ensuring equitable distribution of
risks and potential benefits. In addition to ethical considerations, the considerations of methodology, design, and analysis are critical to sound science.
And lastly, there are practical considerations pertaining to compliance with
federal regulations and policies.
Regulations are not static and evolve over time, and this chapter provides
an overview of regulatory requirements deriving from current federal regulations and policies for research with human participants. It should be noted
that although federal regulations technically apply only to federally funded
research, many academic institutions choose to apply them uniformly across
all research conducted at the institution, regardless of the source of funding
for the research. Furthermore, state and local laws and regulations also influence the implementation of federal regulations at the local institutional level.
Therefore, it is very important that researchers be aware of their state and local
laws and institutional policies. It can be challenging to navigate this overlapping system of research oversight; this chapter provides a broad overview of
relevant oversight bodies, regulations, and policies with which researchers who
conduct research with human participants should be familiar.
INSTITUTIONAL REVIEW BOARDS
Before initiating a study involving human participants, investigators must obtain
approval from a local oversight body, commonly known as the “institutional
review board” (IRB) and sometimes referred to as an “ethics review board.”
In the United States, the bulk of the research conducted in the behavioral,
cognitive, and psychological sciences is subject to the Federal Policy for the
Protection of Human Subjects, also known as the Common Rule. The regulation
governing the protection of human participants in biomedical and behavioral
research (Protection of Human Subjects, 2018) is codified by the U.S. Department of Health and Human Services (HHS) as Title 45 (Public Welfare), Subtitle A (HHS), Subchapter A (General Administration), Part 46 (Protection of
Human Subjects), Subpart A (Basic HHS Policy for the Protection of Human
Research Subjects—referred to as 45 C.F.R. Part 46, Subpart A). This regulation is referred to as the Common Rule because in addition to HHS, 17 other
federal agencies have adopted the same set of regulations for research conducted
or supported by that specific agency.1
The Common Rule specifies requirements for research institutions that
receive federal funding for biomedical as well as social, behavioral, and educational research with human participants. As discussed in Chapter 1 of this
volume, the ethical principles enunciated in the Belmont Report (National
Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979; see also Appendix A at the end of this volume) provide
the framework for the current federal policy. In addition to requirements for
ensuring compliance with the regulations, the rule also specifies mechanisms
for the review of proposed research by a duly constituted IRB. A duly constituted IRB is one that meets the specific requirements for membership, operations, and recordkeeping. The rule details the various components of the
research protocol that must be reviewed by the IRB to ensure that the protocol
meets minimum specified requirements. These components include
• types of research covered by the regulations and those that are exempt
from the requirements of IRB review and approval;
• types of research protocols that may be approved by the IRB using an
expedited review procedure and those that require review by the full
convened board;
• mandatory criteria for obtaining informed consent and its documentation, as well as provisions for altering prescribed requirements for informed consent, completely waiving the requirement to obtain informed consent, and obtaining informed consent but waiving the requirement to document the consent; and
• types of studies that require continuing review, typically on an annual basis, and those that do not need to be reviewed by the IRB after initial approval.

1 The 18 agencies that are official signatories to the Common Rule are as follows: Department of Health and Human Services, Department of Homeland Security, Department of Agriculture, Department of Energy, National Aeronautics and Space Administration, Department of Commerce, Social Security Administration, Agency for International Development, Department of Housing and Urban Development, Department of Justice, Department of Labor, Department of Defense, Department of Education, Department of Veterans Affairs, Environmental Protection Agency, National Science Foundation, Department of Transportation, and Consumer Product Safety Commission.
Subparts B–D of 45 C.F.R. Part 46 detail additional requirements for research
with populations deemed vulnerable (see Appendix B). Subpart B lists additional protections for pregnant women, human fetuses, and neonates; Subpart C lists additional requirements for research involving prisoners; and
Subpart D provides additional protections for children involved in research.
Finally, Subpart E specifies requirements for the registration of IRBs with the
HHS Office for Human Research Protections, which is charged with administering the regulations for the protection of human research participants.
However, unlike Subpart A, the other subparts of 45 C.F.R. Part 46 have not
been uniformly adopted by other agencies and apply only to research funded
by or conducted at HHS.
RESEARCH INVOLVING STUDENTS AND ACCESS TO EDUCATION RECORDS
In addition to the protections afforded by the Common Rule and 45 C.F.R.
Part 46, Subpart D, there are other regulations, issued by the U.S. Department
of Education, that researchers need to comply with when the research involves
collecting data in schools from precollege and undergraduate students or
gaining access to their education records: the Protection of Pupil Rights Amendment (PPRA; 2002) and the Family Educational Rights and Privacy Act of 1974
(FERPA), respectively. Both statutes apply to educational institutions—primary
and secondary schools, as well as colleges and universities—that receive funds
from the Department of Education and were enacted to protect the privacy
rights of students and their parents.
The Protection of Pupil Rights Amendment, also known as “Student Rights
in Research, Experimental Programs, and Testing” (PPRA, 2002, 34 C.F.R. §98;
see also Appendix C) affords parents certain rights with regard to their children
under the age of 18 participating in any surveys, analyses, or evaluations on
the following eight specific protected areas:
1. political affiliations or beliefs of the student or the student’s parent;
2. mental or psychological problems of the student or the student’s family;
3. sex behavior or attitudes;
4. illegal, antisocial, self-incriminating, or demeaning behavior;
5. critical appraisals of other individuals with whom respondents have close
family relationships;
6. legally recognized privileged or analogous relationships, such as those of
lawyers, physicians, and ministers;
7. religious practices, affiliations, or beliefs of the student or student’s parent; or
8. income (other than that required by law to determine eligibility for participation in a program or for receiving financial assistance under such program).
(34 C.F.R. §98.4(a))
Under PPRA, the IRB cannot approve waivers of parental permission for
surveys, analyses, or evaluations that reveal information concerning one or
more of the eight protected areas listed here, and parental permission must be
obtained prior to the administration of such a survey, even if the study is deemed
eligible for a waiver of consent or parental permission and assent from the
minor, under the Common Rule (45 C.F.R. § 46.116) or under the regulations
that provide additional protections for children at Subpart D (§ 46.408(c)). To
comply with this law, schools are required to establish policies to inform
parents about their privacy rights, provide advance notification of any survey
on the eight protected areas, and provide an option to opt out of their child’s
participation in such research.
The Family Educational Rights and Privacy Act (FERPA, 1974; see also
Appendix D) regulates the disclosure of personally identifiable information
maintained in the students’ education records. Under FERPA, “personally
identifiable information” includes but is not limited to
(a) the student’s name;
(b) the name of the student’s parent or other family members;
(c) the address of the student or student’s family;
(d) a personal identifier, such as the student’s social security number, student
number, or biometric record;
(e) other indirect identifiers, such as the student’s date of birth, place of birth,
and mother’s maiden name;
(f) other information that, alone or in combination, is linked or linkable to a
specific student that would allow a reasonable person in the school community, who does not have personal knowledge of the relevant circumstances,
to identify the student with reasonable certainty; or
(g) information requested by a person who the educational agency or institution reasonably believes knows the identity of the student to whom the
education record relates. (34 C.F.R. § 99.3)
Under FERPA, researchers seeking access to personally identifiable information contained in the education records of students under 18 years of age
can do so only with parental permission, even if the proposed study meets
the criteria for waivers of parental permission under other statutes such as
the Common Rule (45 C.F.R. § 46.116) or under the regulations that provide
additional protections for children at Subpart D (§ 46.408(c)). Similarly, access
to such information in the education records of students above 18 years of age
requires prior consent from the student, even if the researcher has access to this
information in their capacity as the student’s instructor. Parental permission
or student consent must be in writing and must include information about
the specific records that will be disclosed, the purpose of the disclosure, and the
recipient of the records.
The rule does not apply to directory information, which typically includes
students’ names, addresses, telephone numbers, dates of birth, honors and
awards, and dates of attendance. However, FERPA requires that eligible students (over 18 years old) or their parents be allowed to opt out of the sharing of
their directory information.
PRIVACY BOARDS
In addition to obtaining IRB approval before initiating a study, research that
involves data that are protected by other federal laws, such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA), has other requirements. As mandated by HIPAA, HHS (2002) issued regulations entitled
“Standards for Privacy of Individually Identifiable Health Information,” also
known as the “Privacy Rule,” and authorized the HHS Office for Civil Rights to
oversee its implementation and enforcement (45 C.F.R. Parts 160 and 164).
The Privacy Rule established conditions under which individually identifiable
health information, referred to as “protected health information” (PHI), can be
used or disclosed by “covered entities” (i.e., individuals or groups to whom the
rule applies) for research purposes. Although the Privacy Rule does not apply
to most researchers because it does not recognize the typical academic researcher
as a covered entity, a researcher who seeks to obtain PHI for research purposes
from a covered entity, such as a health care provider, hospital, health plan, or
health care clearinghouse, will have to meet the requirements of the rule.
In addition, researchers who are also health care providers and who electronically transmit health information in connection with certain transactions,
including claims, benefit eligibility inquiries, referral authorization requests,
and other transactions for which HHS has established standards under the
HIPAA Transactions Rule, must also meet the requirements of the Privacy
Rule to use their own patients’ health information for research. The Privacy
Rule allows covered entities to use such information or to disclose such information to others for research purposes with authorization from the individual
whose information is being used or disclosed. It also allows for use
or disclosure without individual authorization under limited circumstances.
Similar to the Common Rule stipulations for an alteration or waiver of informed
consent by an IRB, the user or data requester or recipient must provide documentation that an alteration or a waiver of the requirement for participants’
authorization has been approved by an IRB or privacy board.
The charge of a privacy board is much narrower than that of an IRB and is
restricted to acting on requests for alterations or waivers of authorization for
the use of PHI for research. Very often the IRB also serves as the privacy
board. The rule also specifies conditions under which PHI may be used or
disclosed without authorization for purposes preparatory to research. Lastly,
covered entities may use or disclose “limited data sets,” defined as data sets
that exclude 18 specific direct identifiers, including name, street address, telephone and fax numbers, social security number, vehicle identification number,
email address, and full-face photographs, after obtaining a data use agreement
from the requester that specifies permitted uses and disclosures of the health
information and that limits who can use or receive the data. When a covered
entity discloses PHI in a limited data set to a researcher who has entered into an
appropriate data use agreement, then documentation of IRB or privacy board
approval of waiver of individual authorization is not required.
It should be noted that in contrast to the Common Rule requirement for
an individual’s informed consent to participate in the research study as a
whole, the Privacy Rule regulates only the content and conditions of the
documentation that covered entities must obtain before using or disclosing
PHI for research purposes. Unlike the core elements of informed consent,
which include a description of the study, risks and potential benefits, mechanisms for maintaining data confidentiality, and so forth, an authorization
focuses solely on privacy risks and details of how PHI will be used or disclosed, by and to whom, and for what research purposes.
CERTIFICATES OF CONFIDENTIALITY
When the research involves collecting identifiable sensitive information from
participants, investigators can protect participants from risks of harm resulting from forced or compelled disclosure (e.g., subpoena pursuant to judicial
proceedings) of identifiable sensitive information by obtaining a Certificate of
Confidentiality (CoC). Identifiable, sensitive information is defined as
information about an individual that is gathered or used during the course of biomedical, behavioral, clinical, or other research, where the following may occur:
• an individual is identified; or
• for which there is at least a very small risk, that some combination of the
information, a request for the information, and other available data sources
could be used to deduce the identity of an individual. (National Institutes of
Health [NIH], 2017, para. 3)
Privacy CoCs protect the confidentiality of participants’ private data and shield
researchers and research institutions from compelled disclosure of sensitive
information that could negatively impact the participants by, for example,
jeopardizing their job status, health insurance, social reputation, or financial
status. Examples of studies that are eligible for CoCs include research on illegal
activities, sexual behaviors, substance abuse, and mental health. The protections
offered by CoCs are in perpetuity; thus, investigators conducting secondary
research using the data are also bound by the CoC obtained by the original
investigator.
Research grants funded by NIH that involve collection of identifiable, sensitive information are automatically granted a CoC as a standard term and
condition of the award. Other HHS agencies, such as the Centers for Disease
Control and Prevention, Food and Drug Administration, Health Resources and Services Administration, Substance Abuse and Mental Health Services Administration, and Indian Health Service, also issue CoCs for the research they
support. Similarly, the National Institute of Justice issues Privacy Certificates
to protect the privacy of research participants and the confidentiality of their
sensitive data. CoCs for health-related research funded by other federal agencies or by nonfederal entities can also be requested from NIH. NIH issues CoCs
for such research at its discretion. (See Appendix E for additional information
and resources on Certificates of Confidentiality.)
DATA AND SAFETY MONITORING
One of the criteria for IRB approval is that, when appropriate, there is a plan
for monitoring the data collected to ensure the safety of research participants
(45 C.F.R. § 46.111(a)(6)). Thus, an IRB may require that study protocols
include a detailed data and safety monitoring plan (DSMP) commensurate
with the complexity of the study (e.g., single-arm, single-site study vs. randomized controlled multisite study) and the risks of harm to the participants.
Furthermore, study protocols funded by NIH to test the effects of biomedical
or behavioral interventions are required to include a DSMP to ensure both
the integrity of the data collected and the safety of the research participants
(NIH, 1998, 2000). The contents of the DSMP are based on the nature and
complexity of the study and the probability and magnitude of risks of harm to
participants. At a minimum, the DSMP must include descriptions of the type
of data and the frequency with which the data will be monitored, the entities
responsible for monitoring the data, and mechanisms for reporting adverse
events to the IRB and/or the sponsor or funding agency.
Depending on the nature of the study, the nature of the study population,
and the risks involved, researchers may also be required to convene a data and
safety monitoring board. This independent group of experts is charged with
supervising the ongoing conduct of a study by examining data collected and
adverse incidents, if any, on a regular basis to ensure both the scientific merit
of the study and the safety of the participants. (See Appendix E for additional
information and resources on data and safety monitoring.)
REPORTING REQUIREMENTS
In addition to requiring investigators to develop a plan for monitoring data to
ensure the safety of research participants, current regulations require institutions that engage in research to have procedures in place for the investigators
to report to the IRB, relevant institutional officials, and/or the funding agency
any unanticipated problems that arise during the conduct of the study that affect
the risks of harm to research participants or others (45 C.F.R. § 46.108(4)(i)).
A research protocol typically includes a description of known or foreseeable
risks so that participants can make an informed decision about participating
in the study. An unanticipated problem or adverse event is one that (a) was
not reasonably expected to be a risk of participation in the study in terms of its
nature, severity, or frequency; (b) is related or possibly related to participation
in the study; and (c) places the participant or others (e.g., family members) at
greater risk than anticipated prior to the launch of the study. (See Appendix E for
additional information and resources on reporting requirements.)
DATA MANAGEMENT PLAN
With open access rapidly becoming the norm, both federal funding agencies
and private sponsors of research have instituted policies that require investigators, when applying for funding, to include a data management plan that specifies how data will be shared. The IRB might also review the
plan with an eye to protecting the privacy of the participants and the confidentiality of their data, when appropriate. A data management plan includes
descriptions of the types of data collected, the methodology for data collection, the format in which data are stored, the name of the data repository,
who will have access, how data will be shared, and policies for data use and
reuse. Although the component elements of a data management plan are
similar, different federal funding agencies in the United States require use of
different templates. (See Appendix E for additional information and resources
on data management plans.)
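As a minimal illustration of the components just listed, the following sketch uses hypothetical field names and values and does not reflect any particular agency template; it simply organizes a data management plan as a structured record.

```python
# Hypothetical outline of a data management plan; the field names mirror the
# components described in the text, and every value is illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class DataManagementPlan:
    data_types: List[str]          # types of data to be collected
    collection_methods: List[str]  # methodology for data collection
    storage_format: str            # format in which data are stored
    repository: str                # name of the data repository
    access: str                    # who will have access
    sharing: str                   # how data will be shared
    use_and_reuse_policy: str      # policies for data use and reuse

example_plan = DataManagementPlan(
    data_types=["survey responses", "ecological momentary assessments"],
    collection_methods=["online questionnaire", "smartphone app"],
    storage_format="CSV files with an accompanying codebook",
    repository="a hypothetical institutional repository",
    access="study team during collection; approved researchers after de-identification",
    sharing="de-identified data deposited 12 months after study completion",
    use_and_reuse_policy="secondary use permitted under a data use agreement",
)
print(example_plan.repository)
```

In practice, the same elements would be entered into whichever template the relevant funding agency requires.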
CLINICAL TRIALS
The term “clinical trial” typically refers to studies testing the effects of behavioral or pharmacological interventions or the use of biological devices, including
neuroprosthetics, on health-related outcomes. In 2014, NIH adopted a broadened definition of a clinical trial as “a research study in which one or more
human subjects are prospectively assigned to one or more interventions (which
may include placebo or other control) to evaluate the effects of those interventions on health-related biomedical or behavioral outcomes” (para. 4). NIH defined
key terms in the revised definition of clinical trial as follows:
The term prospectively assigned refers to a pre-defined process (e.g., randomization) specified in an approved protocol that stipulates the assignment of research
subjects (individually or in clusters) to one or more arms (e.g., intervention,
placebo, or other control) of a clinical trial.
An intervention is defined as a manipulation of the subject or subject’s environment for the purpose of modifying one or more health-related biomedical or
behavioral processes and/or endpoints. Examples include: drugs/small molecules/
compounds; biologics; devices; procedures (e.g., surgical techniques); delivery
systems (e.g., telemedicine, face-to-face interviews); strategies to change health-related behavior (e.g., diet, cognitive therapy, exercise, development of new habits);
treatment strategies; prevention strategies; and diagnostic strategies.
Health-related biomedical or behavioral outcome is defined as the pre-specified
goal(s) or condition(s) that reflect the effect of one or more interventions on
human subjects’ biomedical or behavioral status or quality of life. Examples
include: positive or negative changes to physiological or biological parameters
(e.g., improvement of lung capacity, gene expression); positive or negative
changes to psychological or neurodevelopmental parameters (e.g., mood management intervention for smokers; reading comprehension and/or information
retention); positive or negative changes to disease processes; positive or negative
changes to health-related behaviors; and, positive or negative changes to quality
of life. (paras. 7–9)
Subsequently, in an effort to increase accountability, transparency, and
broad dissemination of information on NIH-funded clinical trials, the NIH
issued a policy that required registration of supported studies and submission
of summary results information on the web-based portal ClinicalTrials.gov
(NIH, 2016b). At the same time, the agency also issued a policy requiring
mandatory training in good clinical practices for all NIH-funded researchers
and their staff involved in clinical trials (NIH, 2016d). One significant outcome
of the new policies is that much NIH-funded basic behavioral, cognitive, and
psychological research meets the revised NIH definition of clinical trials. Given
that compliance with this policy is a standard term and condition of the grant
award, investigators conducting basic behavioral research are also required to
register their NIH-funded studies and submit a summary of the study results
on ClinicalTrials.gov and to maintain good clinical practices training.
NIH (2016c) also issued a policy requiring that all applications involving
clinical trials be submitted through a Funding Opportunity Announcement
(FOA) specifically designed for clinical trials. In implementing this policy, NIH
clarified that FOAs designated as “Basic Experimental Studies involving
Humans (BESH) Required” are intended for studies that meet the definition of clinical trial
as well as the definition of basic research (32 C.F.R. § 272.3). Thus, BESH are
defined as
studies that prospectively assign human participants to conditions (i.e., experimentally manipulate independent variables) and that assess biomedical or
behavioral outcomes in humans for the purpose of understanding the fundamental aspects of phenomena without specific application towards processes or
products in mind. (NIH, 2018, para. 2; see also NIH, 2019)
This new designation was likely created in response to the strong opposition
to the broadened definition of clinical trials from the behavioral and social
science communities and is intended to encompass much NIH-funded basic
behavioral and psychological research more clearly. However, because basic scientists continue to face challenges in meeting these requirements, as of this writing, full implementation of this policy has been delayed until September 2021, with NIH not requiring registration of studies and posting of summary results of such research at ClinicalTrials.gov but allowing the use of alternative, comparable publicly available platforms (NIH, 2019).
It should be noted that the revised Common Rule also includes an almost
identical definition of clinical trial:
Clinical trial means a research study in which one or more human subjects are
prospectively assigned to one or more interventions (which may include placebo
or other control) to evaluate the effects of the interventions on biomedical or
behavioral health-related outcomes. (45 C.F.R. § 46.102(b))
But unless a study that meets this definition is NIH funded, the only requirement per the Common Rule is that the consent document be posted on a publicly accessible website. In addition, investigators of NIH-funded studies that are
deemed exempt from the requirements of the Common Rule under the new
category of “benign behavioral interventions” are subject to the requirements
of the NIH clinical trials policy (Riley et al., 2018). (See Appendix E for additional information and resources on clinical trials.)
REVIEW OF MULTISITE STUDIES
In the past, when researchers from different institutions were involved in
collecting data for one study—so-called multisite studies—the protocol was
typically reviewed and approved by the IRB at each individual researcher’s
university. However, in 2016 NIH issued a policy requiring the use of a single
IRB (sIRB) as the IRB of record for NIH-funded multisite studies conducted
within the United States (NIH, 2016a). The policy was aimed at streamlining
the review process by eliminating duplicative efforts and thereby reducing
regulatory burden and expediting the conduct of research while ensuring that
the rights and welfare of human research participants are protected. Grant
applications and proposals for multisite studies submitted to the NIH must
include a plan for the use of an sIRB, which upon acceptance by NIH becomes
a term and condition of the award. NIH makes exceptions to the policy when the
use of an sIRB is prohibited by federal, state, local, or tribal law. NIH considers
requests for exceptions for other reasons only if there is a compelling justification for the exception (NIH, 2016a).
Although the sIRB is responsible for review and approval of the research
protocol for all participating sites, the IRBs at participating sites retain responsibility for overseeing the implementation of the protocol at their own institution, including the reporting of unanticipated problems. The policy also
requires a clear plan for communications between the sIRB and IRBs at
participating sites.
In 2018, the revised Common Rule extended the requirement for the
use of an sIRB to all institutions located in the United States that are engaged
in cooperative research (i.e., research projects that involve more than one
U.S.-based institution) that is conducted or supported by any department
or agency that is a signatory to the Common Rule (45 C.F.R. § 46.114(a) and
§ 46.114(b)(1)). According to this rule, the IRB of record is identified either
by the funding department or agency or by the lead institution, contingent on
approval by the supporting department or agency (§ 46.114(b)(1)). Like the
NIH policy for the use of an sIRB, the Common Rule allows for exceptions
based on other laws or when the supporting department or agency deems the
use of an sIRB inappropriate for a particular cooperative research protocol
(§ 46.114(b)(2)). (See Appendix E for additional information and resources
on single IRB policy for multisite studies.)
CONCLUSION
This chapter has provided a broad overview of some of the main regulatory
requirements for the conduct of research with human participants. However,
regulations and policy are constantly changing in response to both advancements in science and technology and evolving public sentiments. Thus, it is
incumbent on researchers to stay abreast of current regulations and policies
for the conduct of research with human participants. Although personnel in
university offices for grants and contracts and research administration are best
equipped to guide researchers on regulatory compliance requirements as implemented through their own institutional policies, investigators can also avail
themselves of the expertise of staff in regulatory agencies such as the HHS
Office for Human Research Protections, funding agencies such as NIH and the
National Science Foundation, and their discipline-specific scientific societies.
REFERENCES
Family Educational Rights and Privacy Act of 1974, 20 U.S.C. § 1232g; 34 C.F.R.
Part 99 (1974). https://www.govinfo.gov/content/pkg/USCODE-2011-title20/pdf/
USCODE-2011-title20-chap31-subchapIII-part4-sec1232g.pdf
Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104–191, 110
Stat. 1936 (1996). https://www.govinfo.gov/content/pkg/PLAW-104publ191/pdf/
PLAW-104publ191.pdf
National Commission for the Protection of Human Subjects of Biomedical and Behavioral
Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of
human subjects of research. https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf
National Institutes of Health. (1998). NIH policy for data and safety monitoring (Notice No.
NOT-OD 98-084). https://grants.nih.gov/grants/guide/notice-files/not98-084.html
National Institutes of Health. (2000). Further guidance on data and safety monitoring for
Phase I and Phase II trials (Notice No. NOT-OD-00-038). https://grants.nih.gov/
grants/guide/notice-files/not-od-00-038.html
National Institutes of Health. (2014). Notice of revised definition of NIH “clinical trial.”
https://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-015.html
National Institutes of Health. (2016a). Final NIH policy on the use of a single institutional
review board for multi-site research (Notice No. NOT-OD-16-094). https://grants.nih.
gov/grants/guide/notice-files/NOT-OD-16-094.html
National Institutes of Health. (2016b). NIH policy on the dissemination of NIH-funded clinical trial information. Federal Register, 81(183), 64922–64928. https://www.govinfo.gov/
content/pkg/FR-2016-09-21/pdf/2016-22379.pdf
National Institutes of Health. (2016c). Policy on Funding Opportunity Announcements
(FOA) for clinical trials (Notice No. NOT-OD-16-147). https://grants.nih.gov/grants/
guide/notice-files/NOT-OD-16-147.html
National Institutes of Health. (2016d). Policy on good clinical practice training for NIH
awardees involved in NIH-funded clinical trials (Notice No. NOT-OD-16-148). https://
grants.nih.gov/grants/guide/notice-files/NOT-OD-16-148.html
National Institutes of Health. (2017). Notice of changes to NIH policy for issuing Certificates
of Confidentiality (Notice No. NOT-OD-17-109). https://grants.nih.gov/grants/guide/
notice-files/NOT-OD-17-109.html
National Institutes of Health. (2018). Notice of intent to publish parent funding opportunity
announcements for basic experimental studies with humans (Notice No. NOT-OD-19-024).
https://grants.nih.gov/grants/guide/notice-files/NOT-OD-19-024.html
National Institutes of Health. (2019). Extension of certain flexibilities for prospective basic
experimental studies with human participants (Notice No. NOT-OD-19-126). https://
grants.nih.gov/grants/guide/notice-files/NOT-OD-19-126.html
Protection of Human Subjects, 45 C.F.R. Part 46 (2018). https://www.hhs.gov/ohrp/
sites/default/files/ohrp/policy/ohrpregulations.pdf
Protection of Pupil Rights Amendment, 20 U.S.C. § 1232h, 34 C.F.R. Part 98 (2002).
https://www.govinfo.gov/content/pkg/USCODE-2011-title20/pdf/USCODE-2011-title20-chap31-subchapIII-part4-sec1232h.pdf
Riley, W. T., Riddle, M., & Lauer, M. (2018). NIH policies on experimental studies with
humans. Nature Human Behaviour, 2(2), 103–106. https://doi.org/10.1038/s41562-017-0265-4
U.S. Department of Health and Human Services. (2002). 45 C.F.R. Part 160 (General
Administrative Requirements) and Part 164 (Security and Privacy), Subpart E
(Privacy of Individually Identifiable Health Information). https://www.hhs.gov/sites/
default/files/ocr/privacy/hipaa/administrative/privacyrule/privrulepd.pdf
3
Risk–Benefit Assessment and Privacy
Camille Nebeker, Rebecca J. Bartlett Ellis, and Danielle Arigo
Three ethical principles intended to guide behavioral and biomedical research
were first published in 1979 in the Belmont Report (National Commission
for the Protection of Human Subjects of Biomedical and Behavioral Research,
1979). These principles, as detailed in Chapter 1 of this volume, address respect
for persons, beneficence, and justice, which are generally applied via (a) the
process of obtaining informed consent (respect for persons), (b) a careful evaluation of the probability and magnitude of possible harms in contrast with
the benefits of knowledge that may result (beneficence), and (c) the involvement of people to participate who are most like those who will benefit from
the knowledge to be gained (justice). In this chapter, we examine the principle of beneficence with a focus on privacy and data confidentiality and then
delve into the process of risk assessment in the era of technology-facilitated
research.
BENEFICENCE
The principle of beneficence speaks to factors that influence the welfare of
research participants, but also the populations they represent in the particular
study (e.g., individuals with a particular learning disability, individuals with
compromised cognitive capacity or decision-making ability) and the downstream societal implications. Unlike biomedical research, in which the risk to
participants often can be expressed as quantifiable physical risk (e.g., rate of
injury per 100 participants), harms associated with behavioral and social science
research can be more difficult to quantify. The harms may be economic, legal,
or psychological in nature, and they can occur immediately or over a longer
time frame; such harms could be difficult to observe without long-term
follow-up, and participants may be less willing to disclose negative emotional experiences (e.g., symptoms of depression). In both biomedical and
behavioral and social science research, risk of harm also is likely to vary by
person depending on a variety of individual factors (e.g., financial and support resources, access to appropriate care). Thus, in addition to the types of
harms, researchers need to think about the severity, duration, and intensity
of potential harm to individual participants. To consider, assess, and mitigate the probability and magnitude of potential harms, it is important to understand the nature and scope of the research so that potential threats or risks of harm to research participants can be recognized.
With respect to the domains of privacy and data confidentiality as factors
in the assessment of risks and potential benefits, it is important to clarify how
privacy and confidentiality, although related conceptually, are distinct and should not be used interchangeably. Part of the confusion can be attributed to
having no universally accepted definition of privacy, though an accepted perspective focuses on the ability of an individual to control personal information
(Westin, 2008). In the context of research involving human participants, privacy is about an individual’s interest in controlling personal information and
how that is communicated during the consent process. Confidentiality practices are the measures taken to protect information that research participants
provide to the researcher. Resnik (2010) spoke to privacy in terms of informational and physical privacy, with the former being about control of information such as health status, income, and sexual practices and the latter focusing
on controlling access to one’s body and biological specimens. In research,
a participant’s privacy is taken into account when considering the setting (e.g.,
private room vs. public setting) in which they are asked to disclose or offer
personal and potentially sensitive information (e.g., responding to interview
questions, having weight or blood samples taken). The federal regulations,
specifically the Health Insurance Portability and Accountability Act (HIPAA)
and the Common Rule (U.S. Department of Health and Human Services
[HHS], 2018), address the governance of some, but not all, personal information used in research. For regulated research, an institutional review board
(IRB) reviews the consent document to evaluate the disclosure of what personal information will be needed and how those data will be collected and
under what conditions to demonstrate respect for the participant’s privacy
expectations. Data confidentiality is a thread that cuts across the broader data
management domain of research ethics. Data management includes the process of data collection, storage, and sharing practices, with data confidentiality
being a consideration throughout. The regulations require a description of
whether and how the confidentiality of research records will be maintained
(Common Rule; HHS, 2018) and an authorization of how information will be
disclosed or used or shared and for how long (DeSalvo & Samuels, 2016).
Data management practices and protocols to enhance data confidentiality are
typically addressed both in the research protocol and in the informed consent
document. The IRB reviews these documents prior to approving the research
to evaluate risk of harm and risk management strategies.
Over the past decade, the use of emerging technologies (e.g., mobile apps,
wearable sensors, social media platforms) in the conduct of health and other
behavioral social science research has increased dramatically (Dunseath et al.,
2018) and, subsequently, has introduced new challenges in identifying and
assessing possible harms to participants. “Digital technologies” is a term that
encompasses these emerging electronic technologies and other electronic systems, tools, or devices connected to the Internet that allow for the collection,
storage, processing, and sharing of data (Bhavnani et al., 2016; Sharma et al.,
2018). Not only are the tools different, but the volume and granularity of data
produced make sound data management (including collection, storage, and sharing) and respect for participant privacy expectations more difficult (Gostin et al., 2018; Ostherr et al., 2017). This difficulty is
exacerbated by an increase in unregulated research conducted by investigators
not bound by federal regulations for human research protections, combined
with a lack of relevant guidance and established best practices for all parties
involved in the digital health research sector (Rothstein et al., 2020).
In this chapter, the nature and scope of digital health research are described,
and case analyses are provided to contextualize new and nuanced ethical, regulatory, and social dimensions of digital health research. In addition, the Digital
Health Framework and Checklist is introduced, along with how it can be used
by researchers and IRBs to facilitate informed decision making in behavioral
and social sciences research (Nebeker, Bartlett Ellis, & Torous, 2020). Through
an ethics lens, several factors are described relevant to the principle of beneficence by addressing how accessibility and usability, privacy considerations,
and data management affect the risk–benefit evaluation process.
DIGITAL HEALTH RESEARCH SECTOR
The emerging digital health research sector has exploded over the past decade,
with much of its growth focusing on hard-to-reach populations and sensitive
subject matter (Dunseath et al., 2018). A study looking at the extent of research
funding by the National Institutes of Health found that the majority of digital
health research was supported by four institutes (National Institute of Child
Health and Human Development, National Institute of Mental Health, National
Institute on Drug Abuse, & National Cancer Institute), yet a total of 16 institutes
and centers were involved to some extent. The health foci are diverse and
include studies on obesity (including sedentary behaviors and physical activity
research), mental health, substance abuse, anxiety, depression, and vaccination. For example, a physical activity researcher may use wearable wireless
sensor technology to monitor an adolescent participant’s heart rate, physical
activity, and other weight-related behaviors 24/7. The sensors allow for passive
monitoring and create the opportunity for immediate feedback in real time
through a smartphone display and text messaging (Emken et al., 2012). Using
a wearable camera combined with a global positioning system (GPS) sensor
and accelerometer devices, researchers can evaluate when and where older
adults are most active and deploy a just-in-time adaptive intervention to facilitate achieving the individual’s physical activity goals (Nahum-Shani et al.,
2015). Moreover, commercial-grade activity trackers can be used to identify
whether wearing an activity monitor can facilitate recommended activity
levels across a range of demographics, and although this development may be
exciting, the need to consider privacy and data management issues is critical
(Grindrod et al., 2016).
Going beyond wearable sensors, social media platforms are increasingly
used to access hard-to-reach populations, including men who have sex with
men and young adults, to study health topics ranging from cardiovascular
disease to HIV prevention to management of depression (Arigo et al., 2018;
Nebeker, Dunseath, & Linares-Orozco, 2020). In addition to using social
media to observe or intervene with participants in real time, data created by
these digital tools can be used for health surveillance purposes to detect, for
example, suicide ideation (Coppersmith et al., 2018) or an outbreak of influenza (Charles-Smith et al., 2015). Another tool that falls within the digital
health domain is the smartphone, which can be used to deploy ecological
momentary assessments (EMA) to gauge, for example, self-reported stress in
the moment rather than using a standard method of asking a participant to
recall daily stress several weeks after the fact (Shiffman et al., 2008; Smyth
et al., 2017). Similarly, for people with bipolar disorder, a mobile app on a
smartphone can be used to examine the relationship between mood and
neurocognitive function using keystroke dynamics (e.g., typing speed, typing
errors; Leow et al., 2019) along with other information from a passive sensor
such as an accelerometer.
Digital tools have changed how research is designed, conducted, and
reported. Given that we live in a smart and connected society, the characteristics of digital health research influence how researchers collect, store, and
share personal health information—much of which may not be protected by
HIPAA or other regulations. Not only is data management challenging, but
another consideration is the extent to which a “bystander” (i.e., a person not
involved in the research study) who is inadvertently captured by the data
collection tool (e.g., an outwardly facing camera worn by a participant) has
rights to protection (Kelly et al., 2013). Bystanders are not covered by the
Common Rule because they are not technically research participants in that
the data collected are not used to answer a question about them (Nebeker
et al., 2017). The process of informed consent is challenging even in traditional
research, but with digital research, the information conveyed about a particular study protocol (e.g., data collection and transmission process) requires
that the participant be somewhat data and technology literate.
In addition to data management, bystander rights, and consent challenges,
digital health research is not regulated in all digital health sectors. For example,
a behavioral scientist working for a digital health startup company may not be
required to have an IRB review the protocol in advance of the research taking
place, simply because the company does not receive federal research funding and
thus is not bound by the Common Rule (Nebeker, 2020).
In a nutshell, the digital health ecosystem is moving quickly and is becoming
more diverse. For regulated researchers, IRBs will continue to review research
protocols, though many IRBs may not include members with expertise relevant to digital data collection. For unregulated research, resources are needed
to guide the ethical design of a study, including accessible guidelines to assist
with the risk–benefit evaluation and risk mitigation strategies.
It is important that study risks be considered throughout the research
design process. There are a few resources to guide the risk evaluation process.
One resource is the Digital Health Checklist for Researchers (DHC-R; Nebeker,
Bartlett Ellis, & Torous, 2020), developed by affiliates of the Research Center
for Optimal Digital Ethics in Health (ReCODE Health). This checklist was
developed to help researchers think through important elements that could
influence the design and conduct of their digital health research (Nebeker,
Bartlett Ellis, & Torous, 2020). It can also be adapted for use by both health
researchers and IRB members to assess digital research tools. The DHC-R
stems from a framework consisting of four domains—(a) access and usability,
(b) privacy, (c) data management, and (d) risks and benefits—with ethical
principles of beneficence, justice, and respect for persons as the foundational
core. These three ethical principles of the Belmont Report are at the core of
the decision-making framework and are a thread throughout the checklist.
A brief description of each domain follows; an illustrative sketch of how a study team might track these domains appears after the list:
1. The access and usability domain focuses on the design of the product, including
how it works and how that is communicated to the user (informed consent
or terms of service), whether it has been used with the target population,
any accessory tools that may be needed (smartphone, internet access), and
the extent to which the product can be used in both the short and long term.
2. The privacy domain is about the personal information collected and the
expectations of the participant to keep information secure or, if shared,
accessible information about how and with whom data are shared.
3. Data management is the practice of collection, storage, and sharing of data,
and this domain includes clear statements of data ownership, data interoperability, encryption standards, and compliance with existing regulations.
4. The risks and benefits domain refers to the types of possible risks as well as the
extent of possible harm if the risk occurs, including the severity, duration,
and intensity. Assessment of risks and benefits is influenced by the evidence
available to support the reliability of the product, risk mitigation strategies,
and recognition of unknown risks.
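Although the DHC-R itself is a narrative decision-support tool, a study team could track its four domains in a simple working script or document. The sketch below is illustrative only: the prompts paraphrase the domain descriptions above rather than reproducing the official checklist items, and all class and field names in the code are hypothetical.

```python
# Minimal sketch only: prompts paraphrase the domain descriptions above, not the
# official DHC-R items (Nebeker, Bartlett Ellis, & Torous, 2020).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChecklistItem:
    prompt: str
    response: Optional[str] = None  # notes on how the study addresses the prompt

@dataclass
class DigitalHealthChecklist:
    domains: dict = field(default_factory=lambda: {
        "access_and_usability": [
            ChecklistItem("Has the product been used with the target population?"),
            ChecklistItem("What accessory tools (smartphone, internet access) are required?"),
        ],
        "privacy": [
            ChecklistItem("What personal information is collected, and with whom is it shared?"),
        ],
        "data_management": [
            ChecklistItem("Who owns the data, and how are they stored, encrypted, and shared?"),
        ],
        "risks_and_benefits": [
            ChecklistItem("What is the type, severity, duration, and likelihood of each plausible harm?"),
        ],
    })

    def open_items(self):
        """Return (domain, prompt) pairs the study team has not yet addressed."""
        return [(domain, item.prompt)
                for domain, items in self.domains.items()
                for item in items if item.response is None]

# Usage: record what is known, then flag unresolved prompts before protocol review.
checklist = DigitalHealthChecklist()
checklist.domains["privacy"][0].response = "Vendor privacy policy reviewed; no third-party sharing."
for domain, prompt in checklist.open_items():
    print(f"[{domain}] unresolved: {prompt}")
```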
CASE ANALYSES
To apply these domains in context, we describe two cases depicting authentic
situations that can arise in social and behavioral health research. In the first
case, the DHC-R framework and elements of the checklist are applied to evaluate whether to use a commercial product to assist with data collection and
management. For the second case, a study was designed and deployed to
evaluate the feasibility, acceptability, and preliminary effectiveness of integrating the social support features of Fitbit in a health promotion study.
Using the Digital Health Framework, the cases are evaluated across the
domains of access and usability, privacy, data management, and risks and benefits, with the ethical principle of beneficence emphasized. The first case presents
a prospective decision-making process based on an authentic case (the company
has been given a fictitious name). The second case is a reflective analysis of a
digital health study that has since been published.
Case Evaluating a Commercial Product and Vendor Supporting Digital Health Researchers
Digi-Trial (a fictitious name) is a cloud-based digital platform developed for use
in clinical trials. The product is designed to enable data collection from participants in real time using EMA. The platform facilitates data collection using an
app installed on the research participant’s smartphone and other connected
Internet of Things (IoT) devices. For example, researchers can connect a
wireless blood pressure monitoring device, a wireless scale to obtain weight,
or other type of IoT device to collect biometric readings at the same time they
administer survey questionnaires via the smartphone. The platform offers an
efficient way to capture real-time EMA data by sending participants a brief
survey via their smartphone, which they can respond to immediately. In this
sense, the platform helps researchers collect more accurate information in the
moment rather than collecting data retrospectively at set data collection points
and then relying on participants to accurately remember past events, emotions, and perceptions of those experiences. Recall of past events can be highly
inaccurate, biased, and distorted because people cannot remember details
(Shiffman et al., 2008; Stone et al., 1998). Digi-Trial has the ability to collect
additional environmental data including participant location using GPS technology. GPS data can then be combined with the other data collected from the
app to form a more complete understanding of the context.
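Although Digi-Trial is fictitious, the data collection pattern it illustrates (a brief EMA survey pushed on a fixed schedule or in response to a connected-device reading) can be sketched in a few lines. Every survey name, device reading, and threshold in the sketch below is hypothetical and is not drawn from any actual platform.

```python
# Hedged sketch of schedule- and sensor-triggered EMA prompts; survey names,
# device readings, and thresholds are hypothetical, not from any real product.
from dataclasses import dataclass
from datetime import time
from typing import Optional

@dataclass
class EmaPrompt:
    survey_id: str
    reason: str

def scheduled_prompt(now: time, schedule: list) -> Optional[EmaPrompt]:
    """Fire a prompt when the clock reaches a preplanned assessment time."""
    if any(now.hour == t.hour and now.minute == t.minute for t in schedule):
        return EmaPrompt("daily_stress_survey", f"scheduled at {now.isoformat('minutes')}")
    return None

def sensor_triggered_prompt(systolic_bp: int, threshold: int = 140) -> Optional[EmaPrompt]:
    """Fire a context-sensitive prompt when a connected-device reading crosses a threshold."""
    if systolic_bp >= threshold:
        return EmaPrompt("adherence_survey", f"blood pressure reading {systolic_bp} >= {threshold}")
    return None

# Usage: one prompt fired by the clock, one by a wireless blood pressure cuff reading.
print(scheduled_prompt(time(9, 0), schedule=[time(9, 0), time(18, 0)]))
print(sensor_triggered_prompt(systolic_bp=152))
```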
A researcher is interested in using Digi-Trial in an upcoming study to
improve medication adherence to a blood pressure lowering medication. This
product offers a ready-to-use digital platform and app as a cost-effective alternative to developing an app from scratch. The researcher learns that the platform is HIPAA compliant, which is a desirable feature for the planned research
study. As the researcher considers this product, there are potential ethical and
regulatory considerations that will affect how the study is planned and carried
out. The responsibility to explore these ethical and regulatory considerations
lies with the researcher and should be considered early in the selection of
digital technologies.
Access and Usability
The Digi-Trial platform is a relatively new product available for researchers.
On the surface, the product looks easy for the research team to use. The platform allows researchers to easily administer surveys to research participants
with little effort, scheduling delivery of surveys at specific times or in response
to information obtained from connected IoT devices used in the research. For
example, the researcher is interested in using IoT devices including a wireless
blood pressure monitor, a physical activity monitor, and a wireless pillbox to
track adherence. When participants use other technologies, there might be
a potential risk that personal information is shared, such as the participant’s
location acquired through GPS. All of the data are collected and automatically
stored in the researcher’s dashboard, which allows for easy visualization,
tracking, and monitoring of participant data and export of data for further
analysis. The efficiency created with the use of the Digi-Trial platform requires
less human effort while providing consistent application of trial protocols.
However, the researcher should consider to what extent the product is
designed to be accessible for the specific population who will participate in the
planned study. The platform relies on participant use of a personal smartphone, so the researcher should either provide a study-purchased smartphone
or specify ownership of a smartphone as an inclusion criterion. Likewise, whether the platform is designed for Android, for iOS, or for both will influence decision making. If Digi-Trial is designed for both models of smartphone, it will be important to know whether performance is equivalent regardless of model. Another important aspect of usability is whether the app
will use the internet and draw from the data allocation, potentially adding costs
to the participant’s phone plan. If so, detail about additional costs and any plans
to reimburse for those costs should be made clear during the consent process.
Privacy
Privacy is about the personal information collected and the expectations of the
participant that information collected for research purposes will be kept private. Because the data will be collected through a device that is not in the direct
control of the study team, it is important that the researcher carefully evaluate
the vendor’s privacy policy. One might assume that a privacy policy outlines
what a vendor is doing to protect the consumer’s privacy, but it actually discloses how the vendor uses individual-level data to improve the functionality
of the product, which may include sharing individual-level data with third parties unaffiliated with the vendor or study team. Not all vendors address privacy
and data-sharing practices; if they do not, the lack of information should
prompt further inquiry so that the researcher has the information needed to
evaluate the potential risk of a privacy concern. If personal data are collected
and shared outside of the research scope, potential harms may include profiling, discrimination, and other social harms.
Data Management
The researcher recognizes that personal information will be collected as part
of the research study and, therefore, needs to consider how the cloud-based
platform collects this information, how it is stored, and what data are shared
and with whom. Because this is a commercial product, the researcher must
be knowledgeable about the product terms of service (ToS) and privacy agreement to make sure the participant information is protected. The first step is
locating, reading, and interpreting this information and deciding whether
vendor practices increase risk to the participant. One area to pay close attention to when reviewing the ToS is whether the language prohibits the user
from filing a claim if harmed while using the product. If the ToS use exculpatory language limiting the user’s ability to pursue litigation, then there is a
direct conflict with the federal regulations for human subject protections (see
Protection of Human Subjects, 2018, 45 C.F.R. § 46.117). If the vendor agreements are in compliance with federal regulations and participant data will be
maintained so that risk of a breach of confidentiality is low, then the vendor’s
product may be a contender for use in the research.
Risks and Benefits
Risks and benefits associated with the use of digital technologies in health
research are often unknown. Very little research has been conducted to qualify
or quantify risks of harm, which is why researchers, along with other stakeholders such as IRBs, must be cognizant of the digital landscape more broadly.
Only after the risks of harm are evaluated comprehensively with respect to the type of harm (e.g., economic, physical, psychological, legal), likelihood of occurrence, and potential severity and duration, and then weighed against the potential knowledge to be gained, can a decision to move forward be made. This evaluation requires the ability to imagine plausible scenarios of risk
in order to conduct a risk assessment.
What is also challenging is determining what constitutes minimal risk of
harm. The federal regulations define “minimal risk of harm” as follows: “The
probability and magnitude of harm or discomfort anticipated in the research
are not greater in and of themselves than those ordinarily encountered in daily
life or during the performance of routine physical or psychological examinations
or tests” (45 C.F.R. § 46.102(j)). In this digital age, what may have been
considered quite risky 10 years ago (e.g., facial recognition, 24/7 monitoring) is
becoming more routine and something society encounters on a daily basis.
A rule of thumb when determining what may be considered to be no greater
than minimal risk of harm is comparing the possible risk to what the individual
may experience in their daily life. Using daily life as an evaluation criterion may
need to be reconsidered moving forward and should be a point of discussion
when using digital strategies in health research. It should be noted that using
experiences of everyday life as a criterion to assess minimal risk impacts all
research, not just digital health research. This is not to say that technologies
used to support health research are inherently risky, only that researchers
may be unaware of the potential risks of harm and unknown unknowns that
ubiquitous and pervasive computing introduce.
In this case, Digi-Trial uses GPS data to track participant location and thus
collects latitude and longitude information on the participant throughout the
day. Granular and potentially sensitive location data are stored in the cloud
with other participant data and therefore could make the participant easily
identifiable in a data set because location data can reveal home location and
daily patterns of movement, even if other identifiable participant data are
removed. Given that the Digi-Trial platform for data collection and storage is
cloud based, meaning that data are stored off-site and out of the researcher’s
direct control, others unaffiliated with the research may have access to data
collected for research purposes. Although the platform advertises that it is
HIPAA compliant, the collection of GPS location data elevates the potential
risk to a participant. Because digital data are granular and voluminous, how
they are stored, protected, used, and shared needs to be understood by all
involved in evaluating study risk and benefits. In the case of Digi-Trial, the
inability of the research team to control the collection of GPS data and related
threats to participant privacy and data confidentiality made the risk–benefit
calculation unfavorable, and the product was not selected for use in the proposed clinical trial.
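The reidentification concern is easy to see concretely: even a crude summary of a location trace can point to a participant's likely home address. The sketch below uses fabricated coordinates and simply reports the most frequent nighttime position rounded to roughly a 100-meter grid; it illustrates the risk rather than reproducing any analysis performed in this case.

```python
# Illustration of why GPS traces are identifying even without names attached;
# the (hour, latitude, longitude) samples below are fabricated.
from collections import Counter

trace = [
    (1, 32.7158, -117.1612), (2, 32.7157, -117.1611), (3, 32.7158, -117.1612),
    (23, 32.7159, -117.1613), (14, 32.8328, -117.2713), (15, 32.8330, -117.2712),
]

def likely_home(samples):
    """Most frequent nighttime (10 p.m. to 5 a.m.) position, rounded to ~100 m."""
    nighttime = [(round(lat, 3), round(lon, 3))
                 for hour, lat, lon in samples
                 if hour >= 22 or hour <= 5]
    return Counter(nighttime).most_common(1)[0][0] if nighttime else None

# The rounded nighttime mode points directly at a plausible home location.
print(likely_home(trace))  # (32.716, -117.161)
```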
Case Using a Fitbit-Supported Intervention
An intervention study was conducted on the campus of a small university in
the northeastern United States with recruitment targeting university employees
(18 of 20 participants were employed by the supporting institution; Arigo,
2015). The aims of the study were to test the feasibility, acceptability, and preliminary effectiveness of integrating the self-monitoring and social (community)
features of Fitbit technology into a behavioral program to promote physical
activity among women. Previous research demonstrated that women identify
lack of physical activity role models and support as critical barriers to activity
engagement. Thus, this study was designed to harness the processes of social
comparison and social support between participants via both face-to-face
interaction and a multicomponent mHealth tool (Fitbit). Key social features
of interest in this study were the Fitbit leaderboard, which ranked participants’ steps per month from most to least using a visual display (social comparison), and the community message board, which allowed researchers and
participants to interact using text posts (social support).
Upon enrollment, participants were provided with a Fitbit Flex device and
asked to download the associated software to a personal or work-issued desktop
or laptop computer (at the time, the community features of Fitbit were not
accessible via mobile app). Participants were informed that the Fitbit device
was on loan from the supporting institution and that they would be expected to
return all study equipment at the end of the program. During this individual
orientation meeting, participants were guided through the process of setting
up a profile on the Fitbit web platform, which was linked to a study-specific
email address (assigned by the research team). All participants were encouraged
to choose an account name and profile picture or avatar as desired. Because all
participants subsequently met one another in person, there was no assumption
of anonymity, though participants could decline to use their real names or
pictures of themselves on the Fitbit platform if they chose to do so.
Building on feedback from a previous iteration of this intervention (Arigo
et al., 2015), participants met one another in person at the start of the study to
ensure that they felt connected to one another when they interacted through
Fitbit. They met for a skill-building session, which reviewed concepts such as
goal setting, using Fitbit data to inform decisions about physical activity, and
using the Fitbit community as a resource. In addition to interacting through
the Fitbit community, each participant was assigned another participant as
a support partner, and dyads were encouraged to communicate with one
another in any way they chose (e.g., emails, text messages). The investigator
was an experienced behavioral scientist with a background in psychological
interventions for health behavior change and use of Fitbit in such programs.
As an intervention study focused on measuring participant behavior change,
this research required full review by the university IRB and received approval
prior to study implementation.
Participants were added to a private Fitbit community group that shared
messages and physical activity totals via the leaderboard with other program
members only. Participants were encouraged to use the community message
board to interact with one another as desired, with a minimum expectation of
posting about their progress at least once per week, and they could engage
with one another outside of the community function at their convenience.
The investigator facilitated conversations among participants to discuss progress in the Fitbit community by posting weekly prompts. Participants also were
asked to sync their data at least once per day to keep the leaderboard feature
updated. The study lasted 7 weeks (including 1 week of baseline assessment
and 6 weeks of intervention), and all participants engaged in study activities
during the same 7-week period.
At the end of each week of intervention, participants received a link to an
electronic survey that assessed the frequency of posting to and viewing the
message board, viewing the leaderboard, and interacting with their partner or
other program members outside of the Fitbit platform (e.g., via text message).
Program evaluation focused on overall change in steps and active minutes
from baseline to postintervention, as well as time-sensitive relations between
engaging with Fitbit social features and change in activity.
Access and Usability
This study was designed to evaluate the feasibility and acceptability of a digital
health product (Fitbit) as an adjunct to behavioral skills development. Fitbit
technology was selected for its ease of wear and use (relative to research-grade
devices) and its demonstrated validity for capturing ambulatory physical activity.
The Fitbit Flex device that all participants used synced using wireless Internet
(to desktop or laptop computers) or Bluetooth (to mobile devices or tablets).
GPS technology was not available on the device.
To increase short- and long-term engagement with the device, an orientation was provided at study onset. During this initial meeting, participants were
provided with a brief training in how to use the Fitbit, including how to wear
and care for it, how often to charge it, and how to upload their data via desktop or laptop. Importantly, participants were not expected to intuit the process
of using Fitbit’s data monitoring to help them increase activity, as this process
requires several steps that are not obvious without appropriate training. Thus,
the orientation also covered how to use Fitbit data to inform their physical
activity goals and assessment of their own progress, and these concepts were
reviewed at the subsequent face-to-face skills session. For example, participants were encouraged to set daily goals and to check their Fitbit every few
hours to assess their progress toward that day’s goal and plan for ways to meet
their goal by the end of the day. The last section of the orientation focused on
using the social features of Fitbit, including how to find and use these features,
who had access to their messages and ranked activity totals, and how their
engagement data would be used to evaluate the program. Participants were
encouraged to ask questions at any time during the program and to provide
their feedback once the program was over.
Privacy
The investigator had extensive experience with the Fitbit platform prior to the
development of this intervention study. This experience included familiarity
with privacy and security policies and distinctions between web platform and
app functionality. Participants engaged in face-to-face discussion of privacy
and safety considerations with the investigator during the orientation session
and were encouraged to ask questions about the Fitbit technology prior to
documenting informed consent to participate in writing. This discussion outlined the personal data collected by Fitbit, where these data would be stored,
and who would have access to the files, and participants were encouraged to
review Fitbit’s privacy policy and terms of use for further information about
the technology.
Situating this study on a university campus with employee participants
afforded opportunities for these participants to forge lasting connections with
one another to support their ongoing physical activity, and it raised two
interesting questions regarding privacy. The first concerned the institution’s
knowledge of employees’ participation in the study and access to their data.
Participants who wished to download Fitbit sync software to their employee
computers had to receive authorization from the information technology department, thereby notifying their department of their participation. Once the investigative
team recognized this necessity, it was included as part of the informed consent
discussion, and participants were informed that they could use their personal
computers to avoid disclosure of their participation to their employer. To the
best of the investigator’s knowledge, the institution did not have access to
employee participants’ profiles or their community posts even if they used their
work computers to log into Fitbit and engage with the intervention content.
Thus, it was expected that participants’ privacy was fully protected from their
employer with respect to these details about their participation. However, the
possibility that the institution could access this information if it chose to do so
could not be ruled out entirely.
The second question arose about participants’ knowledge of each other’s
physical activity and information shared via the message board, as several participants knew each other as coworkers. Participants were strongly encouraged to consider the community a safe and private space and were asked not
to discuss shared content with anyone who did not participate in the program.
The investigator also made clear that participants should share with each
other only what made them comfortable and that they should report any concerns about privacy during the program. During exit interviews, two participants
described mild discomfort with coworkers knowing their information, although
both acknowledged that the benefits of the program outweighed this concern.
In sum, participant privacy (i.e., how personal information is stored, secured,
and accessed) and potential risks to participant data should receive careful
consideration throughout the course of a digital intervention study.
Data Management
Participants were able to access their ongoing activity data and message board
posts throughout the 7-week period of data collection, and profiles were transferred to their personal email addresses at the end of the program (e.g., for
syncing with personal devices) so that they could continue to use the Fitbit
platform if they chose to do so. The investigator explained the process of
exporting 30-day data files if a participant expressed interest and then sent the
participant’s full 7-week data file if requested. Other than Fitbit employees
with authorization to view user data, only the investigator had access to participants’ profiles and physical activity data files during data collection. At the
end of the program, participants’ data files were deidentified, and trained
research assistants were given access to allow them to conduct data cleaning
and management activities. Files were stored on password-protected computers
in locked rooms to which only research staff had access. No data were stored
on cloud-based servers, and no external data management services were
used for this study.
Assessment of Risks and Benefits
Risks related to this study were physical in that the participants were encouraged to increase activity; however, they were advised to do this slowly and
progressively to avoid injury. Another risk may have been psychological in
that participants may have experienced discomfort sharing information
about their activity with other participants whom they knew from the workplace. These conditions of the program were presented during the informed
consent process, and potential participants were able to decline participation
if they so chose. These risks are not unique to digital health research but are
important nevertheless.
A more serious potential risk for studies using a commercial product
involves whether the product measures what it purports to measure and does
so consistently over time. Fitbit has been used as a data collection tool in
numerous studies on physical activity and sedentary behaviors and is generally considered trustworthy (Feehan et al., 2018). That being said, during
data collection for this study, a question arose regarding Fitbit’s estimation of
physical activity minutes. At that time, best practices for physical activity
promotion involved recommending engagement in aerobic-intensity activity
in bouts of 10 minutes or more, and research typically focused on aerobic-intensity activity that occurred in bouts of this length. In contrast, Fitbit’s
algorithm counted “active minutes” as any activity that reached or exceeded
the threshold for moderate intensity. Two weeks into data collection for this
study, Fitbit announced a change to its algorithm to count active minutes as
only those occurring in bouts of 10 minutes or more. Although this change
improved consistency between studies using Fitbit and other physical activity
research, its timing presented a challenge for the reliability and validity of a
primary research outcome.
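The practical effect of the algorithm change can be seen with a small worked example. Under the original rule, every minute at or above moderate intensity counts; under the revised rule, only minutes that fall in runs of 10 or more consecutive qualifying minutes count. The minute-by-minute intensity flags below are invented for illustration.

```python
# Invented minute-by-minute flags (True = at or above moderate intensity)
# showing how the two "active minutes" definitions diverge for the same day.
from itertools import groupby

minutes = [True] * 8 + [False] * 5 + [True] * 12 + [False] * 10 + [True] * 4

def any_minute_rule(flags):
    """Count every qualifying minute, regardless of bout length."""
    return sum(flags)

def bout_rule(flags, min_bout=10):
    """Count only minutes inside runs of at least min_bout consecutive qualifying minutes."""
    total = 0
    for active, run in groupby(flags):
        length = sum(1 for _ in run)
        if active and length >= min_bout:
            total += length
    return total

print(any_minute_rule(minutes))  # 24 active minutes under the original algorithm
print(bout_rule(minutes))        # 12 active minutes once the 10-minute bout rule applies
```

Because the same wear data yield different totals under the two rules, a mid-study switch directly threatens the comparability of baseline and postintervention measurements.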
Fortunately, the investigator was able to export 30-day summaries after
the change was implemented and verify that the change applied to all data in
this time window. Were this not the case, significant adjustments would have
been necessary (e.g., requesting to extend the time frame for the study), and
it is not clear whether baseline data would have been valid for comparison
against during- or posttreatment data. Issues related to validity and reliability
are critical in evaluating whether study risks are reasonable in relation to
potential benefits of knowledge to be gained. If the data are not trustworthy,
then the result is a waste of time and resources for researchers, participants, and funding agencies and, potentially, a loss of public trust.
DISCUSSION
The ethical principle of beneficence is applied when evaluating study risks
and benefits, which are influenced by many factors and often subjective in
behavioral and social science research. The goal of using a decision-making
framework and checklist is to make application of the ethical principles more
tangible by prompting investigators and IRBs to think through what may
be unfamiliar with respect to the digital health landscape. In this chapter,
we described the digital health research landscape and provided authentic
use cases that introduce ways of objectively thinking about factors that influence risk assessment and mitigation and the overall risk–benefit assessment.
These factors, organized within the Digital Health Framework and the companion DHC-R, are a starting point to support the decision-making process and
are by no means comprehensive, as health technologies are emerging along
with actors involved in the conduct of biomedical and behavioral research.
One certainty, as the digital health landscape grows and evolves, is that
all involved—from the technology makers to researchers to regulators to
participants—must be active in shaping the standards for ethical and responsible digital health research practices. No longer can review be outsourced to a
regulatory compliance body with the assumption that this will be sufficient to
identify and mitigate possible harms. Moreover, as the unregulated research
sector (e.g., not federally funded or covered by the Common Rule) grows,
resources will be necessary to support the safe and responsible conduct of
research. Moving forward, we offer recommendations to advance this growing
field of research in three specific areas: education, crowdsourcing of resources,
and support, including funding, for digital health research infrastructure.
Education
Digital health research necessitates an understanding of the landscape of terms
of service, privacy agreements, available features, and potential risk of harm
associated with relevant tools. Entities that conduct regulated research (e.g.,
academic institutions) often require those involved, including investigators
and IRB members, to engage in continuing education activities designed to
refresh and update their knowledge of responsible research conduct. For
example, many entities are members of the Collaborative Institutional Training
Initiative and require their members to complete online training modules
every few years. Yet these training experiences rarely focus on digital research,
and they would need to be updated frequently to keep pace with rapidly
evolving technologies and related risks. Organizations such as ReCODE Health
and professional societies (e.g., American Psychological Association, Public
Responsibility in Medicine and Research) offer digital health training workshops and webinars; however, few (if any) investigators, IRB members, or
industry professionals are required to participate.
Consequently, existing educational resources are underutilized, and very
few of these resources are updated to reflect the latest technologies and associated risks. It falls to individual researchers, IRB members, and industry
professionals to pursue knowledge on their own, without much guidance to
inform decisions about which information sources should be trusted. To
improve protections for participants and users across sectors, it is critical that
the wide range of stakeholders become actively involved in the process of
developing and disseminating new knowledge on digital health research and
factors that impact the principle of beneficence. For example, funding agencies
that support digital health research could require that grant proposals include
a plan to study risks and potential benefits and that these grants include consultation with a health technology ethicist. Researchers could then leverage
research funding to conduct studies that inform methods for obtaining truly
informed consent and objective risk assessment in addition to addressing the
study’s primary scientific aims. Integrating research on the ethical dimensions
of health technologies can then inform IRB decision making, required trainings for IRB members and investigators, and policies to guide practice.
Outside the regulated research environment, there are several opportunities for commercial entities to engage in ongoing education and contribute
valuable knowledge. These opportunities include direct involvement of behavioral scientists and health technology ethicists in product design and evaluation and the production of white papers and other informational materials that
describe research-relevant aspects of privacy policies and terms of service. Given
that big technology companies and startups may not be bound by the Common
Rule, however, it may be useful to create incentives for such practices—for
example, providing industry professionals with discounted registration for
in-person trainings and webinars on digital health and creating a public reward
structure for engaging in any of the above behaviors (e.g., unique badges or
other markers in online marketplaces). If researchers consider the landscape to
be a “learning ethics system,” they collectively begin to shape best practices.
Crowdsourcing of Resources
One way to realize the potential for a learning ethics system is for researchers
to collectively share their individual experiences in planning and conducting
digital health research. Shared experiences include both the successes associated with study planning and implementation and the known risks and errors
that may emerge when conducting research. Evidence from health care, in
which there has been a concentrated effort to reduce harm and errors and
share best practices, suggests that sharing of mistakes can inform the generation of new ideas for implementation (Han & Pashouwers, 2018). Finding ways
for researchers and other key stakeholders to share experiences is needed.
Barriers to sharing information learned from digital health research include
the lack of a safe, structured, and systematic way to share information in a
readily usable format with key stakeholders (e.g., researchers, IRB members).
In order for sharing to occur, individuals need to feel safe in sharing their
experiences. Fear of criticism or, at the extreme, fear of retribution or legal
sanctions may prevent such sharing. After all, as researchers navigate new
technologies, many risks are unknown.
Collaboration is a core principle underlying scientific advancement (Merton,
1942), yet open sharing remains a concern in many aspects of science such as
data sharing. The benefits to be gained by openly sharing research materials
include greater replication and uptake of findings, extension of work, and
building of collaborations to further advance science, yet open sharing is not
widespread. Morey and colleagues (2016) suggested that there is a misalignment of incentives that limits open practices of sharing in science. Moreover,
open sharing of best practices with regard to ethical aspects is essentially
absent from the open science movement. Perhaps this absence is attributable
to lack of incentives for sharing best practices. Novel approaches to encourage
sharing of best practices should be considered. For example, institutions could
consider rewarding or incentivizing individuals for sharing their experiences
with others or educating fellow researchers and IRB members on these
ethical aspects.
Undoubtedly, this sharing is happening currently at a micro and individual
level. The micro level entails individuals sharing, often informally, through
research networks and peers who are willing to share their experiences.
Micro-level sharing can impact an individual’s research; however, micro-level
instances of sharing do not have broad reach or occur systematically or in a
generalizable way. A system-level approach to sharing would be useful.
However, by increasing awareness of the potential risks and ethical concerns
associated with digital health technologies, there is a risk that IRBs might deny
approval for some studies. In these cases, it is less likely that investigators
will want to share information because it stands in the way of initiating and
completing their studies. These hurdles will need to be overcome to promote
open sharing of lessons learned. The benefit gained from researchers sharing
their collective social capital far outweighs the risks.
A forum that provides a safe and trustworthy place for crowdsourcing best
practices is needed. The CORE Platform (https://thecore-platform.ucsd.edu)
is one current tool that facilitates the sharing of best practices and resources
associated with planning and conducting digital research studies. The CORE
Platform connects individuals engaged in research with one another and supports the sharing of research materials, including IRB-approved protocols and
consent language. However, there is a need to create an incentive structure
and increased opportunities for sharing successes and failures.
CONCLUSION AND CALL FOR SUPPORT FOR DIGITAL HEALTH RESEARCH INFRASTRUCTURE
In this chapter, characteristics of the digital health research sector were presented through two authentic use case scenarios. To better understand the
scope and nuanced aspects of assessing both research risks and benefits,
including those associated with privacy, the Digital Health Framework and
companion DHC-R were used. Applying the checklist, we described both concerns and laudable practices specific to access, usability, privacy, data management, and possible risks and benefits associated with the research. Having
access to decision support tools like the DHC-R is a first step. When combined
with capacity building via education and relevant resources via community
sharing, the likelihood of developing and using safe and responsible digital
health research practices increases. To build this digital health research infrastructure, both human and fiscal support is needed. In closing, we call on
funding agencies, research institutions, and professional societies to support
the safe and responsible conduct of digital health research.
REFERENCES
Arigo, D. (2015). Promoting physical activity among women using wearable technology
and online social connectivity: A feasibility study. Health Psychology and Behavioral
Medicine, 3(1), 391–409. https://doi.org/10.1080/21642850.2015.1118350
Arigo, D., Pagoto, S., Carter-Harris, L., Lillie, S. E., & Nebeker, C. (2018). Using social
media for health research: Methodological and ethical considerations for recruitment
and intervention delivery. Digital Health, 4. https://doi.org/10.1177/2055207618771757
Arigo, D., Schumacher, L. M., Pinkasavage, E., & Butryn, M. L. (2015). Addressing barriers
to physical activity among women: A feasibility study using social networking-enabled
technology. Digital Health, 1. https://doi.org/10.1177/2055207615583564
Bhavnani, S. P., Narula, J., & Sengupta, P. P. (2016). Mobile technology and the digitization of healthcare. European Heart Journal, 37(18), 1428–1438. https://doi.org/
10.1093/eurheartj/ehv770
Charles-Smith, L. E., Reynolds, T. L., Cameron, M. A., Conway, M., Lau, E. H. Y.,
Olsen, J. M., Pavlin, J. A., Shigematsu, M., Streichert, L. C., Suda, K. J., & Corley,
C. D. (2015). Using social media for actionable disease surveillance and outbreak
management: A systematic literature review. PLOS ONE, 10(10), Article e0139701.
https://doi.org/10.1371/journal.pone.0139701
Coppersmith, G., Leary, R., Crutchley, P., & Fine, A. (2018). Natural language processing
of social media as screening for suicide risk. Biomedical Informatics Insights, 10. https://
doi.org/10.1177/1178222618792860
DeSalvo, K. B., & Samuels, J. (2016). Examining oversight of the privacy & security of health
data collected by entities not regulated by HIPAA. Office of the National Coordinator for
Health Information Technology. https://www.healthit.gov/buzz-blog/privacy-and-security-of-ehrs/examining-oversight-privacy-security-health-data-collected-entities-not-regulated-hipaa/
Dunseath, S., Weibel, N., Bloss, C. S., & Nebeker, C. (2018). NIH support of mobile,
imaging, pervasive sensing, social media and location tracking (MISST) research:
Laying the foundation to examine research ethics in the digital age. NPJ Digital Medicine, 1(1), Article 20171. https://doi.org/10.1038/s41746-017-0001-5
Emken, B. A., Li, M., Thatte, G., Lee, S., Annavaram, M., Mitra, U., Narayanan, S., &
Spruijt-Metz, D. (2012). Recognition of physical activities in overweight Hispanic
youth using KNOWME Networks. Journal of Physical Activity & Health, 9(3), 432–441.
https://doi.org/10.1123/jpah.9.3.432
Feehan, L. M., Geldman, J., Sayre, E. C., Park, C., Ezzat, A. M., Yoo, J. Y., Hamilton,
C. B., & Li, L. C. (2018). Accuracy of Fitbit devices: Systematic review and narrative
syntheses of quantitative data. JMIR mHealth and uHealth, 6(8), Article e10527.
https://doi.org/10.2196/10527
Gostin, L. O., Halabi, S. F., & Wilson, K. (2018). Health data and privacy in the digital
era. JAMA, 320(3), 233–234. https://doi.org/10.1001/jama.2018.8374
Grindrod, K., Boersema, J., Waked, K., Smith, V., Yang, J., & Gebotys, C. (2016). Locking
it down: The privacy and security of mobile medication apps. Canadian Pharmacists
Journal, 150(1), 60–66. https://doi.org/10.1177/1715163516680226
Han, J., & Pashouwers, R. (2018). Willingness to share knowledge in healthcare organisations: The role of relational perception. Knowledge Management Research and Practice, 16(1), 42–50. https://doi.org/10.1080/14778238.2017.1405144
Kelly, P., Marshall, S. J., Badland, H., Kerr, J., Oliver, M., Doherty, A. R., & Foster, C.
(2013). An ethical framework for automated, wearable cameras in health behavior
research. American Journal of Preventive Medicine, 44(3), 314–319. https://doi.org/
10.1016/j.amepre.2012.11.006
Leow, A., Stange, J., Zulueta, J., Ajilore, O., Hussain, F., Piscitello, A., Ryan, K.,
Duffecy, J., Langenecker, S., Nelson, P., & McInnis, M. (2019). 247. BiAffect: Passive
monitoring of psychomotor activity in mood disorders using mobile keystroke kinematics. Biological Psychiatry, 85(10), S102–S103. https://doi.org/10.1016/j.biopsych.
2019.03.261
Merton, R. K. (1942). A note on science and democracy. Journal of Legal and Political
Sociology, 1, 115–126.
Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D.,
Lewandowsky, S., Morey, C. C., Newman, D. P., Schönbrodt, F. D., Vanpaemel, W.,
Wagenmakers, E. J., & Zwaan, R. A. (2016). The Peer Reviewers’ Openness Initiative: Incentivizing open research practices through peer review. Royal Society Open
Science, 3(1), Article 150547. https://doi.org/10.1098/rsos.150547
Nahum-Shani, I., Hekler, E. B., & Spruijt-Metz, D. (2015). Building health behavior
models to guide the development of just-in-time adaptive interventions: A pragmatic framework. Health Psychology, 34(Suppl.), 1209–1219. https://doi.org/10.1037/
hea0000306
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the
protection of human subjects of research. https://www.hhs.gov/ohrp/sites/default/files/
the-belmont-report-508c_FINAL.pdf
Nebeker, C. (2020). mHealth research applied to regulated and unregulated behavioral
health sciences. Journal of Law, Medicine & Ethics, 48(1 Suppl.), 49–59. https://doi.org/
10.1177/1073110520917029
Nebeker, C., Bartlett Ellis, R. J., & Torous, J. (2020). Development of a decision-making
checklist tool to support technology selection in digital health research. Translational Behavioral Medicine, 10(4), 1004–1015. https://doi.org/10.1093/tbm/ibz074
Nebeker, C., Dunseath, S., & Linares-Orozco, R. (2020). A retrospective analysis of
NIH-funded digital health research using social media platforms. Digital Health, 6.
https://doi.org/10.1177/2055207619901085
Nebeker, C., Harlow, J., Espinoza Giacinto, R., Orozco-Linares, R., Bloss, C. S., &
Weibel, N. (2017). Ethical and regulatory challenges of research using pervasive
sensing and other emerging technologies: IRB perspectives. AJOB Empirical Bioethics,
8(4), 266–276. https://doi.org/10.1080/23294515.2017.1403980
Ostherr, K., Borodina, S., Bracken, R. C., Lotterman, C., Storer, E., & Williams, B.
(2017). Trust and privacy in the context of user-generated health data. Big Data &
Society, 4(1). https://doi.org/10.1177/2053951717704673
Protection of Human Subjects, 45 C.F.R. Part 46 (2018). https://www.hhs.gov/ohrp/
sites/default/files/ohrp/policy/ohrpregulations.pdf
Resnik, D. B. (2010). Protecting privacy and confidentiality in environmental health
research. Ethics in Biology, Engineering and Medicine, 1(4), 285–291. https://doi.org/
10.1615/EthicsBiologyEngMed.v1.i4.60
Rothstein, M. A., Wilbanks, J. T., Beskow, L. M., Brelsford, K. M., Brothers, K. B.,
Doerr, M., Evans, B. J., Hammack-Aviran, C. M., McGowan, M. L., & Tovino, S. A.
(2020). Unregulated health research using mobile devices: Ethical considerations
and policy recommendations. The Journal of Law, Medicine & Ethics, 48(1 Suppl.),
196–226. https://doi.org/10.1177/1073110520917047
Sharma, A., Harrington, R. A., McClellan, M. B., Turakhia, M. P., Eapen, Z. J., Steinhubl, S.,
Mault, J. R., Majmudar, M. D., Roessig, L., Chandross, K. J., Green, E. M., Patel, B.,
Hamer, A., Olgin, J., Rumsfeld, J. S., Roe, M. T., & Peterson, E. D. (2018). Using
digital health technology to better generate evidence and deliver evidence-based
care. Journal of the American College of Cardiology, 71(23), 2680–2690. https://doi.org/
10.1016/j.jacc.2018.03.523
Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment.
Annual Review of Clinical Psychology, 4, 1–32. https://doi.org/10.1146/annurev.
clinpsy.3.022806.091415
Smyth, J. M., Juth, V., Ma, J., & Sliwinski, M. (2017). A slice of life: Ecologically valid
methods for research on social relationships and health across the life span. Social
and Personality Psychology Compass, 11(10), Article e12356. https://doi.org/10.1111/
spc3.12356
Stone, A. A., Schwartz, J. E., Neale, J. M., Shiffman, S., Marco, C. A., Hickcox, M.,
Paty, J., Porter, L. S., & Cruise, L. J. (1998). A comparison of coping assessed by ecological momentary assessment and retrospective recall. Journal of Personality and
Social Psychology, 74(6), 1670–1680. https://doi.org/10.1037/0022-3514.74.6.1670
U.S. Department of Health and Human Services. (2018). Subpart A of 45 C.F.R. Part 46:
Basic HHS policy for protection of human subjects. https://www.hhs.gov/ohrp/
sites/default/files/revised-common-rule-reg-text-unofficial-2018-requirements.pdf
Westin, A. (2008). How the public views privacy and health research. Institute of Medicine.
4
Informed Consent
Barton W. Palmer
“The voluntary consent of the human subject is absolutely essential”
(International Military Tribunal, 1949, p. 181). This quote is the first
clause in Item 1 of the 1947 Nuremberg Code and reflects the primacy generally
given to informed consent as a central and necessary (albeit not sufficient)
condition for ethical human research. Although subsequent ethics codes have
delineated exceptions to this rule (e.g., American Psychological Association
[APA], 2017; World Medical Assembly, 1964; World Medical Association,
2018), in most situations the informed consent of each participant remains
essential to ethical research. The importance of a meaningful informed consent
process for psychological research is explicitly addressed within the current
Ethical Principles of Psychologists and Code of Conduct (hereinafter referred to as “the
APA Ethics Code”; APA, 2017) under Standard 3.10, Informed Consent:
When psychologists conduct research . . . they obtain the informed consent of the
individual or individuals using language that is reasonably understandable to
that person or persons except when conducting such activities without consent is
mandated by law or governmental regulation or as otherwise provided in this
Ethics Code. (p. 7)
Conflict of interests disclosure: Dr. Palmer was one of the coauthors of the University
of California, San Diego Brief Assessment of Consent Capacity (UBACC), which is
recommended within this chapter as a tool for screening potential participants for
impaired capacity to consent to research. The UBACC is available at no cost, and the
author has no financial interest in its use.
This chapter was authored by an employee of the United States government as part
of official duty and is considered to be in the public domain. Any views expressed
herein do not necessarily represent the views of the United States government, and
the author’s participation in the work is not meant to serve as an official endorsement.
https://doi.org/10.1037/0000258-004
Handbook of Research Ethics in Psychological Science, S. Panicker and B. Stanley (Editors)
In the public domain.
But what does it mean to obtain (and give) informed consent? It is often
noted that “informed consent is a process, not just a form” (Office for Protection From Research Risks, 1993, para. 2), but given the context-specific
nature of consent, the procedures for actualization of a meaningful informed
consent process for research are not always uniformly clear.
This chapter reviews information and considerations needed to pragmatically and effectively provide potential research participants with a meaningful
consent process. By necessity, this chapter serves as an overview because each
of the subtopics has nuances, details, and often disagreements among experts
that cannot be suitably addressed within any single chapter. Nonetheless, this
overview includes a brief description of the immediate history of events leading
to the Belmont Report (National Commission for the Protection of Human
Subjects of Biomedical and Behavioral Research [hereinafter referred to as the
“National Commission”], 1979), which remains a key document in describing
the basic ethical principles underlying ethical human research, as well as a
review of underlying contemporary U.S. federal regulations governing human
subjects protections, including those specific to informed consent. Although
the focus is on the ethical aspects, relevant U.S. regulatory provisions are also considered where applicable. The defining features of
informed consent are then discussed. Because full or even partial disclosure is
sometimes infeasible (e.g., in the context of predicting future potential uses of
data) and/or may be contraindicated by the scientific question under study, issues of consent for data sharing and secondary analyses, as well as intentional deception, are also addressed. In addition, because a key component of valid consent
includes decisional capacity, the defining features of capacity to consent and
their assessment are also described, as well as use of proxy consent for research
with persons incapable of giving consent. The chapter closes with a summary
of key recommendations to foster authentic consent.
HISTORICAL CONSIDERATIONS
The full history of the concept, use, and regulation of informed consent is long
and complex (reviewed in Faden et al., 1986). However, a key milestone in
the current understanding of research ethics, and in the development of
contemporary federal regulations for protection of human subjects, was the
1979 publication of the Belmont Report (described in more detail in Chapter 1, this volume). In 1974, the U.S. Congress passed the National Research
Act. This act was developed and passed in reaction to a series of events and
ethical breaches in medical research and clinical trials over the 20th century,
such as the thalidomide tragedy in the late 1950s, the hepatitis studies at the
Willowbrook State School (1955–1970), a 1963 study in which live cancer
cells were injected into 22 elderly patients, a landmark 1966 paper by Henry
Beecher documenting a larger series of unethical studies that had been published in major medical journals, and the 1972 public revelations regarding
the Public Health Service Tuskegee Syphilis Study (1932–1972; reviewed in
Advisory Committee on Human Radiation Experiments, 1995; Katz, 1987).
Ethical concerns about research were not limited to biomedical studies but
also included concerns in regard to some psychology and other social science
studies, such as Milgram’s obedience studies, Zimbardo’s prison experiment, and other less well-known research projects (reviewed in Kitchener &
Kitchener, 2009).
Of all these studies, the Tuskegee Syphilis Study appears to have been the
“straw that broke the camel’s back,” most immediately prompting the federal
actions that eventually resulted in the Belmont Report. On July 26, 1972, The
New York Times published a front-page report, immediately picked up and
distributed worldwide through the Associated Press, describing the profound moral and ethical failures in the now infamous Tuskegee Syphilis
Study (Heller, 1972). In response, the federal government appointed an advisory panel to review the timeline and nature of the ethical failures in the study
(U.S. Public Health Service, 1973). Jay Katz, one of the advisory committee
members, wrote,
There is ample evidence in the records available to us that the consent to participation was not obtained from the Tuskegee Syphilis Study subjects, but that
instead they were exploited, manipulated, and deceived. They were treated not
as human subjects but as objects of research. . . . They were never consulted
about the research project, its consequences for them, and the alternatives available to them. (U.S. Public Health Service, 1973, p. 14)
The final recommendations of the advisory panel, in part, catalyzed and
informed the National Research Act, a key component of which was establishing the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. In turn, a key mandate for the National
Commission was “to identify the basic ethical principles which should underlie
the conduct of biomedical and behavioral [emphasis added] research involving
human subjects” (National Research Act, Title II, Part A, § 202(a)(1)(A)(i)).
The result of that mandate was the Belmont Report (National Commission,
1979), which remains a core document guiding ethical research with humans.
DEFINING INFORMED CONSENT
The requirement for voluntary informed consent for research participation is
a logical corollary from the principle of respect for persons described in the
Belmont Report (National Commission, 1979; see Chapter 1 of this volume).
However, when conducting research with populations at risk for impaired
capacity, the principles of beneficence and justice described in the Belmont
Report (and discussed in Chapter 1) also have relevance to the conduct or
prohibition of research with persons with compromised decision-making
capacity. The concept of informed consent has had multiple meanings and
nuances (see Faden et al., 1986), but in application to research participation
it is generally thought to include at least three elements: (a) voluntariness
(absence of coercion or undue influence), (b) disclosure or provision of
information that a reasonable person would want to know in order to discuss
and make an informed decision, and (c) capacity to use the information to
make a meaningful choice (Appelbaum et al., 2009; Berg et al., 2009; Meisel
et al., 1977; National Commission, 1979).
Voluntariness
Although outright coercion is hopefully rare, considerations of subtle coercion
or undue influence remain relevant to evaluating informed consent in research
with humans. Appelbaum et al. (2009) reviewed several of the concerns that
have been discussed among research ethicists, including whether monetary
compensation for participation can sometimes become coercive, situations in
which a potential participant’s clinical care provider and the investigator are
the same person or group, clinical trial recruitment of people who lack access
to standard health care, and recruitment of people from cultures or groups
in which individuals may feel that the decisional authority rests with the
community, community leader, or a family member.
In the context of behavioral and social science research, questions of
potential coercion have also been raised in reference to the use of undergraduate “subject pools” (Adair, 2001; Diamond & Reidpath, 1992; Foot & Sanford,
2004; Kimmel, 2007; Sieber, 1999). Such concerns are at least partially mitigated by the provision of alternatives to participation, such as reading and
summarizing journal articles (see APA Ethics Code, Standard 8.04, Client/
Patient, Student, and Subordinate Research Participants, para. (a)). Another
factor that can help foster voluntariness is ensuring that there is a sufficient
diversity of available protocols such that no student feels external pressure to
participate in any one particular study. And although not directly related to
consent issues, one of the justifications frequently cited for the use of subject
pools is the educational value to psychology students of firsthand experience
as a research participant. Given educational value as an intended benefit, it is
important for investigators to include proactive measures that maximize the
educational value to participants, for example, as part of debriefing and
potentially by providing them with the final study results and conclusions
once published.
Disclosure of Information
The need for potential research subjects to have information relevant to their
decision whether or not to participate in a particular study appears self-evident.
However, determining what is relevant information is partially subjective and
context specific. Federal regulations (see Protection of Human Subjects, 2018,
45 C.F.R. § 46.116(b)) and the APA Ethics Code (Standard 8.02, Informed Consent to Research) are generally in accord in terms of listing the types of information that are required, including the purpose of the study, voluntariness and
freedom to withdraw, potential risks, potential benefits (including the possible
absence of personal benefit), provisions for and limits of confidentiality, and
contact information for the investigators and institutional review board (IRB),
as well as providing an opportunity to ask questions and discuss the information provided. However, as noted in the Belmont Report, “a simple listing of
items does not answer the question of what the standard should be for judging
how much and what sort of information should be provided” (National Commission, 1979, Part C, § 1, para. 4).
In regard to the question of how much information to provide, there is a
potential problem of flooding participants with “too much” or superfluous
information. The length of research consent forms (as well as the necessary
literacy level) has been steadily increasing over the past 4 decades (Albala
et al., 2010; Baker & Taub, 1983; Beardsley et al., 2007; Berger et al., 2009;
Loverde et al., 1989). The increase in length and complexity has steadily continued despite the lack of empirical evidence that longer consent forms
improve participants’ comprehension or satisfaction with consent procedures
(Enama et al., 2012; Stunkel et al., 2010). In fact, results from some studies
suggest that longer consent forms reduce comprehension of the information
most essential to the decision about participation (Beardsley et al., 2007;
Epstein & Lasagna, 1969).
The increasing length of consent forms may be at least partially attributable to institutional risk management rather than to the goal of managing risks to participants. In this regard, the authors of one report from
the Institute of Medicine suggested that “consent forms have been hijacked as
‘disclosure documents’ for the risk management purposes of research organizations” (Federman et al., 2003, p. 93). Although elimination of institutionally mandated text within printed consent forms may often be outside the
control of investigators (and perhaps even IRB reviewers), the dual purpose
of consent documents further underlines the ethical imperative for investigators to go beyond the consent form as part of the overall consent process.
As unequivocally stated in the Belmont Report, “Investigators are responsible
for ascertaining that the subject has comprehended the information” (National
Commission, 1979, Part C, § 1, para. 7). The more complex the study and
consent documents, the more it becomes essential that consent involves
an interactive discussion between participant and researcher. Although not
explicitly mandated by regulation, that discussion should ideally continue
beyond the signing of the consent document and be integrated into the relevant procedures at every study visit.
An opportunity to read and review the consent form is a mandated component of the initial consent process. That review can often be facilitated by
ensuring an interactive discussion during presentation of the consent form itself
(Palmer et al., 2008; Taub, 1980; Taub & Baker, 1983). The review includes not
just reading the consent form to potential participants or providing participants
an opportunity to read the consent form themselves, but also actively checking
each individual’s comprehension by asking them to explain their understanding
of the critical information in their own words and giving corrective feedback
with additional comprehension checks when appropriate.
In the longer term, emerging models that leverage advantages conferred by
electronic technologies may find greater regulatory acceptance; for example,
“dynamic consent” allows for participants to tailor and manage their consent
preferences over time (Holm & Ploug, 2017; Kaye et al., 2015). This approach
may better maximize support for the autonomy component of respect for
persons, but it may be presently impractical and may violate the principle of
justice with regard to populations having less technical literacy or lacking full
access to computers and the internet.
Deception
An important subtopic of disclosure of information is the use of deception in
research. Principle C: Integrity of the APA Ethics Code states,
In situations in which deception may be ethically justifiable to maximize benefits and minimize harm, psychologists have a serious obligation to consider the
need for, the possible consequences of, and their responsibility to correct any
resulting mistrust or other harmful effects that arise from the use of such techniques. (APA, 2017, p. 4)
The use of deception in research is more specifically addressed under Standard 8.07, Deception in Research:
(a) Psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible.
(b) Psychologists do not deceive prospective participants about research that is reasonably expected to cause physical pain or severe emotional distress.
(c) Psychologists explain any deception that is an integral feature of the design and conduct of an experiment to participants as early as is feasible, preferably at the conclusion of their participation, but no later than at the conclusion of the data collection, and permit participants to withdraw their data. (See also Standard 8.08, Debriefing.) (p. 11)
Boynton and colleagues (2013) distinguished between two forms of
deception in research: (a) indirect, in which “participants agree to postpone
full disclosure of the true purpose of the research,” versus (b) direct, which
involves “deliberate misinformation provided to participants about some
essential component” (p. 7). The latter appears more problematic from the
perspective of respect for persons.
Although described explicitly only within a section on research exempt
from full IRB review, there is a potential solution to the need for deliberate
deception provided within U.S. federal regulations. Specifically,
If the research involves deceiving the subjects regarding the nature or purposes of
the research, this exemption is not applicable unless the subject authorizes the
deception through a prospective agreement to participate in research in circumstances in which the subject is informed that he or she will be unaware of or misled
regarding the nature or purposes of the research. (45 C.F.R. § 46.104(d)(3)(iii))
This consent to be initially deceived appears to parallel the methods of
consent often used in randomized placebo-controlled clinical trials.
Broad Consent
Both the APA Ethics Code and federal regulations generally require that
potential research participants be informed of the purpose of research. But
federal funding agencies and journal editors increasingly require that investigators share their data via data repositories, and it may be impossible to fully
predict how that data might be used by others in secondary analyses (Institute
of Medicine, 2014; Kush & Goldman, 2014). One approach is to include an
explanation of that lack of certainty within the initial consent process and to
specify at least the broad range of types of research questions to which the data
might be applied. There have been some concerns expressed about the notion
of “broad consent” (Hofmann, 2009), but the recently revised Common Rule
includes provisions for broad consent when anticipating “the storage, maintenance, and secondary research use of identifiable private information or identifiable biospecimens” (45 C.F.R. § 46.116(d)).
Note that broad consent is not synonymous with “blanket consent”
(Wendler, 2013). With broad consent, the investigators are required to disclose the plan to archive and/or share the data and to provide a general
description of the types of research to which the data might be applied. In
contrast, under blanket consent, the participants are asked to allow no restrictions on the future use of their data. Blanket consent is more problematic in
that it could potentially result in unforeseen future uses of data for research
that would be contrary to an individual’s personal moral beliefs or values or
personally objectionable in other ways (Tomlinson, 2013; Wendler, 2013).
Decision-Making Capacity
In addition to voluntariness and disclosure of relevant information, the third
key pillar of valid consent is the capacity to comprehend and use the information to make an authentic reasoned choice. The importance of considering decision-making capacity flows from the protection component of respect
for persons.
In contemporary use, the most commonly recognized definition of capacity
to consent to research involves four subconstructs:
1. understanding—the capacity to comprehend the information being disclosed
relevant to the decision whether or not to participate (or continue participating) in the research protocol.
2. appreciation—a construct that is related to understanding but that additionally requires the ability to apply the information in reference to its significance
for oneself. For instance, a patient lacking insight regarding their diagnostic
status or needs might intellectually understand the information disclosed in
regard to an intervention trial but may fail to appreciate the risks to themselves posed by an unproven intervention or the possibility of receiving
a placebo.
3. reasoning—the ability to reason with information, such as the ability to consider and compare alternatives, as well as to identify possible consequences
of those alternatives. Note that this standard refers to the use of a reasoning
process rather than to the specific outcome of that process.
4. expression of a choice—the ability to come to a decision and communicate a
choice. Note that this component refers to the ability not only to communicate but also to come to a clear decision. If the potential participant displays
high ambivalence and rapid, repeated changes in a decision, the investigator may lack confidence that the individual has made and expressed a
clear choice.
These four dimensions were originally identified through review of legal
standards of varied stringency for competence to consent to (or refuse) treatment, but they are now generally viewed as separate necessary dimensions of decision-making capacity to consent to clinical treatment or research
participation (Appelbaum & Roth, 1982; Roth et al., 1977).
In the context of consent for research, it is essential to keep in mind that
capacity to consent is not an intrapersonal stable trait, but rather a context-specific construct. For example, a person who lacks capacity at a particular time
to consent to a clinical trial with complex procedures and a complex or uncertain risk-to-benefit ratio may be fully capable of consenting to a noncomplex
minimal-risk study (Dunn, Palmer, & Karlawish, 2007; Saks et al., 2006).
Assessment and Determination of Decision-Making Capacity
Although there is substantial heterogeneity in level of cognitive function
among children and adolescents of any given age, persons under age 18 are
deemed legally incapable of consent by fiat in the United States (although for
older children and adolescents, their assent to research, even with legal
consent from a parent or guardian, remains essential). For most adults, however, capacity to consent to research cannot be appropriately determined on
a group level but should be considered within the specific context of the
research protocol at hand.
There have been a number of approaches to assessment or determination
of capacity to consent to research, including assuming presence or absence
based on some characteristic of the participant (e.g., a particular diagnosis),
overall performance on a cognitive screen or neuropsychological test battery,
or use of unstructured interviews. Use of a characteristic such as diagnosis
is generally inappropriate given the substantial heterogeneity present even
within groups known to be at elevated risk for impaired capacity (Dunn,
Palmer, Appelbaum, et al., 2007; Kim, 2006; Kim et al., 2007; Wang et al.,
2018). Use of cognitive screening or neuropsychological measures is also
generally an inappropriate basis for making such determinations, except at
the extreme levels of impairment, as there is no clear optimal cut point that
directly informs capacity to make the decision at hand (Palmer & Harmell,
2016; Whelan et al., 2009). However, neuropsychological assessment of individuals who appear to be decisionally impaired can be helpful in clarifying
the underlying processes impairing decisional capacity and thus in guiding
compensatory strategies (Palmer & Harmell, 2016; Palmer & Savla, 2007).
Use of unstructured interviews to determine decisional capacity is often
contraindicated by the lack of inter-interviewer reliability of such determinations, particularly in reference to people at the boundary of capable–incapable
status, such as persons with mild to moderate dementia (Kim, Appelbaum,
et al., 2011; Marson et al., 1997; Sessums et al., 2011). Interrater reliability of
interview-based determinations can be substantially improved when guided
by the specific criteria for decisional capacity, for example, through use of a
structured-interview-based capacity assessment scale (Marson et al., 2000).
There have been several excellent reviews of instruments for assessing
capacity to consent to research and other domains (Dunn et al., 2006; Gilbert
et al., 2017; Simpson, 2010; Sturman, 2005). One problem plaguing such reviews
is the lack of a clear criterion standard against which to definitively evaluate
validity, positive predictive value, and negative predictive value. All four of
the presently available comprehensive reviews have noted the need for further
research and development of such instruments. Those concerns noted, and
with appropriate caveats with respect to the need to consider the specific
needs and context of assessment, there seems to be general agreement that
the MacArthur Competence Assessment Tool for Clinical Research (MacCAT-CR;
Appelbaum & Grisso, 2001) presently remains the most widely studied instrument of this type.
The MacCAT-CR uses a semistructured format to guide assessment. The
instrument was developed such that the content of disclosures and the precise
responses deemed “correct” are tailored by the investigator to the specific
study to which the participant is being asked to consent. It consists of four
subdomains, including five questions focused on Understanding, three questions focused on Appreciation, four related to Reasoning, and one item focused
on Expression of a Choice. At least in part because of the differential potential
range of scores, the Understanding subscale of the MacCAT-CR tends to have
the best reliability and strongest correlation with cognitive test performance
(Palmer & Savla, 2007). Of note, across diagnostic populations, even among
persons with severe mental illness such as schizophrenia, cognitive test performance tends to be the strongest predictor of performance on the MacCAT-CR,
even relative to severity of psychopathology, although the magnitude of association remains too low to warrant use of cognitive tests alone to determine
decisional capacity (Palmer & Savla, 2007). In the MacCAT-CR manual, the
authors note that they intentionally avoided providing an overall total score by
which to determine presence or absence of consent capacity (Appelbaum &
Grisso, 2001). Appelbaum and Grisso (2001) explained the avoidance of providing a total score in noting that if an individual lacks capacity in any one of
the four subdomains, this may functionally translate into incapacity, regardless
of performance on the other domains. Even within the subscales, no set cut
point is provided, which is appropriate given that the weight given to false
positive versus false negative errors may vary with the risk level of the actual
study, and there are certain items, such as voluntariness, that are essential to
pass regardless of performance on other items.
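
To make concrete the logic just described, the following minimal sketch (written in Python purely for illustration) encodes the decision rule that no composite score is computed and that inadequacy in any single subdomain flags the case for further evaluation. The class name, boolean judgments, and example values are hypothetical placeholders, not part of the MacCAT-CR or its scoring manual.

# --- Illustrative sketch (Python); not the MacCAT-CR scoring procedure ---
from dataclasses import dataclass

@dataclass
class CapacityAssessment:
    """Records subdomain-level judgments from a structured capacity interview."""
    understanding_ok: bool   # disclosed information comprehended
    appreciation_ok: bool    # personal significance of the information recognized
    reasoning_ok: bool       # alternatives and consequences weighed
    choice_ok: bool          # a clear, stable choice expressed

    def domains_of_concern(self):
        """Return the subdomains judged inadequate; an empty list means none."""
        labels = {
            "understanding": self.understanding_ok,
            "appreciation": self.appreciation_ok,
            "reasoning": self.reasoning_ok,
            "expression of a choice": self.choice_ok,
        }
        return [name for name, ok in labels.items() if not ok]

    def flag_for_further_evaluation(self):
        """No total score is computed: inadequacy in ANY one subdomain flags the
        case, mirroring the rationale for omitting an overall cut point."""
        return bool(self.domains_of_concern())

# Hypothetical example: adequate understanding, reasoning, and choice,
# but questionable appreciation.
case = CapacityAssessment(understanding_ok=True, appreciation_ok=False,
                          reasoning_ok=True, choice_ok=True)
print(case.domains_of_concern())           # ['appreciation']
print(case.flag_for_further_evaluation())  # True
# --- end of sketch ---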
One downside to the MacCAT-CR and other comprehensive instruments is
that the time required for administration may add to participant burden.
A brief screening tool may be helpful for identifying those participants
warranting comprehensive assessment. In this regard, Jeste et al. (2007)
developed and validated a 10-item questionnaire called the University of
California, San Diego Brief Assessment of Consent Capacity (UBACC) that
requires little training for use, can be administered in less than 5 minutes, and
has good reliability and validity. The UBACC was rated positively in the two
comprehensive instrument reviews that have been published since its introduction in 2007 (Gilbert et al., 2017; Simpson, 2010). Although the absence
of assessment of expression of a choice was noted in both reviews, that one
component tends to be more salient during the consent process, even in the
absence of formal testing, because investigators are required to ask participants to make a choice in all consent settings.
Paralleling the assertion that consent is a process, not a form, capacity
assessment is also a process and not merely administration of a capacity
instrument. In a research context, even among potential participants deemed
decisionally incapable, regulations and ethical guidelines generally require
investigators to honor a person’s expressed or exhibited dissent from participation. Thus, the potential for harm from an erroneous capacity determination arises chiefly when the participant expresses
a desire to participate in a study. In contrast to assessment of competence to
make treatment decisions, determinations of capacity to consent to research
do not generally involve a court determination of legal competence. Thus,
determinations of capacity to consent to research remain a professional determination, whether by the investigator or an objective third-party evaluator
who is not invested in the success or failure of study enrollment.
Research With Persons With Impaired Decision-Making Capacity
Although the ethical importance of informed consent and assurance of decisional capacity are grounded in the two subcomponents of respect for persons,
considerations of beneficence and justice may ethically compel the conduct of
scientifically and otherwise ethically sound clinical research, even among populations of patients with near-universal impaired capacity (e.g., young children, people approaching the later phases of dementia). The principle of
beneficence includes provisions both to “do no harm” and to “maximize possible benefits and minimize possible harms.” Although they were writing in reference to biomedical ethics rather than research ethics per se, Beauchamp and
Childress (2001) labeled the first component as “nonmaleficence” and noted
that “principles of beneficence potentially demand more than the principle of
nonmaleficence because agents must take positive steps to help others, not
merely refrain from harmful acts” (p. 165). As noted in the Belmont Report,
In the case of scientific research in general, members of the larger society are
obliged to recognize the longer term benefits and risks that may result from
the improvement of knowledge and from the development of novel medical,
psychotherapeutic, and social procedures. (National Commission, 1979, Part B,
§ 2, para. 3)
Also of relevance, justice requires not only equitable distribution of the
burdens of research but also equitable opportunity to benefit from the fruits
of research. In short, because of beneficence and justice considerations, it
could be argued that it would be unethical to prohibit scientifically sound
research on conditions that, by their very nature, hinder capacity to give
meaningful consent (Sevick et al., 2003; Sweet et al., 2014).
Borrowing procedures for deciding about clinical treatment, most research
ethics codes, as well as current federal regulations, recognize the role of a
legally authorized representative (LAR) as a provider of proxy consent in the
absence of consent capacity in the participant. Although the risk-to-benefit
ratio and potential for harm to the individual participant are always of relevance in determining whether a study is ethically sound, the ethical and legal
requirements when using proxy consent tend to be even more restrictive in
reference to studies with greater than minimal risk. In regard to determining
who can be the LAR, for children a parent (or parents) or legal guardian is
generally identified as the LAR. For decisionally impaired adults for whom
there has been no court determination of an LAR, investigators in the United
States are bound by local state laws in terms of identifying who can serve as
an LAR and in determining the order of priority (i.e., which persons have
priority as the LAR). Likewise, investigators outside the United States and
those conducting international research are bound by the relevant laws in the
nation where the research is being conducted.
Optimizing Comprehension
The issues of disclosure of information relevant to the decision and decision-making capacity are intertwined because the method and content of information disclosure can affect the individual’s manifest capacity to comprehend and
use that information to make a voluntary choice. That manifest ability can be
viewed as a social–cognitive construct in that it results from the interaction
between (a) intrapersonal factors, including traitlike factors (e.g., cognitive
functioning), stable but potentially modifiable intrapersonal factors (e.g.,
health literacy, research literacy), and statelike factors (e.g., fatigue, anxiety,
pain) and (b) extrapersonal (environmental) factors, including the inherent
complexity of the specific protocol and the quality of the consent process
(Dunn, Palmer, & Karlawish, 2007). In addition to considering intrapersonal
statelike factors (e.g., waiting for acute fatigue or pain to subside before finalizing the initial informed consent process), the quality of the consent process
is the one component that may be most directly under the investigator’s
immediate control and therefore warrants particular attention.
Many efforts have been made to enhance the consent process, for example,
with modifications in consent forms, pretraining of participants about research
concepts (e.g., randomization, clinical trials, placebo control, differences in
intent between research and clinical care), use of multimedia teaching aids,
and extended interactive consent discussions. The initial evidence for modified consent forms had been inconsistent (Flory & Emanuel, 2004), but
Nishimura and colleagues’ (2013) review revealed some evidence for better
comprehension when consent forms have been modified, so this component
certainly warrants attention (e.g., ensuring appropriate length and reading
levels, use of bullet points or summaries, use of figures or photographs). Multimedia tools have proven useful in the context of patient treatment decision
making; the evidence for research consent remains less consistent, but most of
the available studies have not been guided by a clear conceptual model of
the form of multimedia or the specific context in which a picture may be
more effective than text (Palmer et al., 2012; Taber et al., 2016). Although the
potential utility of multimedia aids warrants further empirical study, at present
the use of extended discussions during consent appears to be the most consistently helpful method of improving participant comprehension (Flory &
Emanuel, 2004; Nishimura et al., 2013). These interactive discussions should
include active probing of comprehension (e.g., asking a participant to explain
their understanding of key information in their own words) with corrective
feedback rather than reliance on closed-ended probes (e.g., “Do you have any
questions?”; Palmer et al., 2008). The latter type of probe can generate a “no”
response for a multitude of reasons other than true comprehension.
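
The interactive, open-ended probing described above can be outlined as a simple loop over the key consent elements: ask for a restatement in the participant's own words, give corrective feedback when needed, and re-check. The sketch below is only a hypothetical outline of that procedure; the element list, adequacy judgment, and attempt limit are placeholders, and in practice the adequacy of a restatement is a matter of interviewer judgment, not an automated check.

# --- Hypothetical outline of a teach-back comprehension check (illustration only) ---
KEY_ELEMENTS = [
    "purpose of the study",
    "voluntariness and freedom to withdraw",
    "potential risks",
    "potential benefits, including the possible absence of personal benefit",
    "provisions for and limits of confidentiality",
]

def ask_open_ended(element):
    """Stand-in for asking, e.g., 'In your own words, what is the purpose of
    this study?' and recording the participant's answer."""
    return input("In your own words, please explain the " + element + ": ")

def answer_is_adequate(answer):
    """Placeholder criterion; real consent discussions rely on the interviewer's
    judgment of the restatement, not a string check."""
    return len(answer.strip()) > 0

def teach_back(max_attempts=3):
    """Probe each key element, giving corrective feedback and re-checking."""
    results = {}
    for element in KEY_ELEMENTS:
        understood = False
        for _ in range(max_attempts):
            if answer_is_adequate(ask_open_ended(element)):
                understood = True
                break
            print("Let me explain the " + element + " again ...")  # corrective feedback
        results[element] = understood
    return results
# --- end of sketch ---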
CONCLUSION AND RECOMMENDATIONS
The primary answer to the question posed at the beginning of this chapter,
regarding ensuring informed consent as a meaningful process and not just a
form, is to focus on having a meaningful interactive discussion with participants or their LAR. Even when brief deception is warranted under circumstances described here, genuine discussions during the initial consent and
debriefing are essential. And even if consent comes from an LAR, investigators should engage the participant in an active assent to the degree possible
(Palmer et al., 2018). In addition, although the current regulatory model is
based on a capable–incapable determination, there are alternative models in
the intermediate range that warrant consideration, such as capacity to appoint
a proxy (Kim, Karlawish, et al., 2011) and supported rather than substituted
decision making (Jeste et al., 2018).
There are some research contexts in which an interactive discussion is not
readily feasible because the written information is in fact central to consent,
such as some types of research surveys or experimental tasks administered
through mail or web-based platforms. Although those forms of research tend to be lower risk, there may be some situations involving sensitive
information or trauma-related material in which more consideration of potential harms is needed. But in most contexts, whether the research is conducted
in person, over the telephone, or via computer or mobile technologies, interactive consent discussions such as those described in this chapter are feasible
and important to making consent more than a printed form.
There are no specific ethical or regulatory guidelines or consensus for the
explicit assessment of capacity to consent to research. Thus, individual IRBs
each have their own rules for when and how such assessment is required.
Which form of assessment is best varies by the nature of the study and population. It may be helpful for investigators to examine any of the excellent
comprehensive reviews of such instruments in selecting the method most
appropriate to their specific context (Dunn et al., 2006; Gilbert et al., 2017;
Simpson, 2010; Sturman, 2005). In addition, although the specific focus is on
older adults, and research consent was not among the types of situations
considered, there is an excellent and freely available handbook and guide by
the American Bar Association Commission on Law and Aging and American
Psychological Association (2008) that covers many topics and methods related
to assessing persons with diminished capacity.
There is no single one-size-fits-all solution or recommendation for meaningful consent. Indeed, many of the most complex and common ethical
dilemmas arise not from bad actors or negligent or intentionally malicious
behavior by researchers, but rather from situations in which two ethical principles are in apparent conflict (DuBois, 2008). The complexities of providing
for informed consent for research and situations in which consent may need
modification, waiver, or even intentional deception can each be viewed as
balancing the tensions between respect for persons, beneficence (including
its prescriptive components), and justice (National Commission, 1979). For
issues of informed consent, decision-making capacity, and use of proxy consent, there is also potential tension between the autonomy and protective
subcomponents of respect for persons. But these two components are not
always in opposition; optimizing the informed consent process for each potential
participant simultaneously fosters both components of respect for persons. The
Belmont Report remains helpful in guiding researchers: if investigators keep in mind the principles that may be in conflict and, when necessary, seek consultation from colleagues or experts, they will be better able to adapt appropriately to specific research situations.
REFERENCES
Adair, J. G. (2001). Ethics of psychological research: New policies; continuing issues;
new concerns. Canadian Psychology, 42(1), 25–37. https://doi.org/10.1037/h0086877
Advisory Committee on Human Radiation Experiments. (1995). The development of
human subject research policy at DHEW. In ACHRE Report Part I (Chap. 3). U.S.
Government Printing Office.
Albala, I., Doyle, M., & Appelbaum, P. S. (2010). The evolution of consent forms for
research: A quarter century of changes. IRB: Ethics & Human Research, 32(3), 7–11.
https://www.jstor.org/stable/25703699
American Bar Association Commission on Law and Aging, & American Psychological
Association. (2008). Assessment of older adults with diminished capacity: A handbook for
psychologists.
American Psychological Association. (2017). Ethical principles of psychologists and code of
conduct (2002, amended effective June 1, 2010, and January 1, 2017). http://www.
apa.org/ethics/code/ethics-code-2017.pdf
Appelbaum, P. S., & Grisso, T. (2001). MacCAT-CR: MacArthur Competence Assessment Tool
for Clinical Research. Professional Resource Press.
Appelbaum, P. S., Lidz, C. W., & Klitzman, R. (2009). Voluntariness of consent to
research: A conceptual model. The Hastings Center Report, 39(1), 30–39. https://doi.org/
10.1353/hcr.0.0103
Appelbaum, P. S., & Roth, L. H. (1982). Competency to consent to research: A psychiatric
overview. Archives of General Psychiatry, 39(8), 951–958. https://doi.org/10.1001/
archpsyc.1982.04290080061009
Baker, M. T., & Taub, H. A. (1983). Readability of informed consent forms for research
in a Veterans Administration medical center. JAMA, 250(19), 2646–2648. https://
doi.org/10.1001/jama.1983.03340190048030
Beardsley, E., Jefford, M., & Mileshkin, L. (2007). Longer consent forms for clinical
trials compromise patient understanding: So why are they lengthening? Journal of
Clinical Oncology, 25(9), e13–e14. https://doi.org/10.1200/JCO.2006.10.3341
Beauchamp, T., & Childress, J. (2001). Principles of biomedical ethics (5th ed.). Oxford
University Press.
Berg, J. W., Appelbaum, P. S., Lidz, C. W., & Parker, L. S. (2009). Informed consent: Legal
theory and clinical practice (2nd ed.). Oxford University Press.
Berger, O., Grønberg, B. H., Sand, K., Kaasa, S., & Loge, J. H. (2009). The length of
consent documents in oncological trials is doubled in twenty years. Annals of
Oncology, 20(2), 379–385. https://doi.org/10.1093/annonc/mdn623
Boynton, M. H., Portnoy, D. B., & Johnson, B. T. (2013). Exploring the ethics and psychological impact of deception in psychological research. IRB: Ethics & Human
Research, 35(2), 7–13. www.jstor.org/stable/41806156
Diamond, M. R., & Reidpath, D. D. (1992). Psychology ethics down under: A survey of
student subject pools in Australia. Ethics & Behavior, 2(2), 101–108. https://doi.org/
10.1207/s15327019eb0202_3
DuBois, J. M. (2008). Ethics in mental health research: Principles, guidance, and cases.
Oxford University Press.
Dunn, L. B., Nowrangi, M., Palmer, B. W., Jeste, D. V., & Saks, E. (2006). Assessing
decisional capacity for clinical research or treatment: A review of instruments.
The American Journal of Psychiatry, 163(8), 1323–1334. https://doi.org/10.1176/
ajp.2006.163.8.1323
Dunn, L. B., Palmer, B. W., Appelbaum, P. S., Saks, E. R., Aarons, G. A., & Jeste, D. V.
(2007). Prevalence and correlates of adequate performance on a measure of abilities
related to decisional capacity: Differences among three standards for the MacCAT-CR in
patients with schizophrenia. Schizophrenia Research, 89(1–3), 110–118. https://doi.org/
10.1016/j.schres.2006.08.005
Dunn, L. B., Palmer, B. W., & Karlawish, J. H. T. (2007). Frontal dysfunction and
capacity to consent to treatment or research: Conceptual considerations and
empirical evidence. In B. L. Miller & J. L. Cummings (Eds.), The human frontal lobes:
Functions and disorders (2nd ed., pp. 335–344). Guilford Press.
Enama, M. E., Hu, Z., Gordon, I., Costner, P., Ledgerwood, J. E., & Grady, C. (2012).
Randomization to standard and concise informed consent forms: Development
of evidence-based consent practices. Contemporary Clinical Trials, 33(5), 895–902.
https://doi.org/10.1016/j.cct.2012.04.005
Epstein, L. C., & Lasagna, L. (1969). Obtaining informed consent: Form or substance.
Archives of Internal Medicine, 123(6), 682–688. https://doi.org/10.1001/archinte.
1969.00300160072011
Faden, R., Beauchamp, T., & King, N. (1986). A history and theory of informed consent.
Oxford University Press.
Federman, D., Hannam, K., & Rodriguez, L. L. (2003). Responsible research: A systems
approach to protecting research participants. The National Academies Press.
Flory, J., & Emanuel, E. (2004). Interventions to improve research participants’ understanding in informed consent for research: A systematic review. JAMA, 292(13),
1593–1601. https://doi.org/10.1001/jama.292.13.1593
Foot, H., & Sanford, A. (2004). The use and abuse of student participants. The Psychologist,
17(5), 256–259.
Gilbert, T., Bosquet, A., Thomas-Antérion, C., Bonnefoy, M., & Le Saux, O. (2017).
Assessing capacity to consent for research in cognitively impaired older patients.
Clinical Interventions in Aging, 12, 1553–1563. https://doi.org/10.2147/CIA.S141905
Heller, J. (1972, July 26). Syphilis victims in U.S. study went untreated for 40 years.
The New York Times. https://www.nytimes.com/1972/07/26/archives/syphilis-victims-in-us-study-went-untreated-for-40-years-syphilis.html
Hofmann, B. (2009). Broadening consent—and diluting ethics? Journal of Medical
Ethics, 35(2), 125–129. https://doi.org/10.1136/jme.2008.024851
Holm, S., & Ploug, T. (2017). Big data and health research—The governance challenges in
a mixed data economy. Journal of Bioethical Inquiry, 14(4), 515–525. https://doi.org/
10.1007/s11673-017-9810-0
Institute of Medicine. (2014). Discussion framework for clinical trial data sharing: Guiding
principles, elements, and activities. The National Academies Press.
International Military Tribunal. (1949). Trials of war criminals before the Nuernberg Military
Tribunals under Control Council Law No. 10 (Vol. 2). U.S. Government Printing Office.
Jeste, D. V., Eglit, G., Palmer, B. W., Martinis, J. G., Blanck, P., & Saks, E. R. (2018). Supported decision making in serious mental illness. Psychiatry, 81(1), 28–40. https://
doi.org/10.1080/00332747.2017.1324697
Jeste, D. V., Palmer, B. W., Appelbaum, P. S., Golshan, S., Glorioso, D., Dunn, L. B.,
Kim, K., Meeks, T., & Kraemer, H. C. (2007). A new brief instrument for assessing
decisional capacity for clinical research. Archives of General Psychiatry, 64(8), 966–974.
https://doi.org/10.1001/archpsyc.64.8.966
Katz, J. (1987). The regulation of human experimentation in the United States—
A personal odyssey. IRB: Ethics & Human Research, 9(1), 1–6. https://doi.org/10.2307/
3563644
Kaye, J., Whitley, E. A., Lund, D., Morrison, M., Teare, H., & Melham, K. (2015).
Dynamic consent: A patient interface for twenty-first century research networks.
European Journal of Human Genetics, 23(2), 141–146. https://doi.org/10.1038/
ejhg.2014.71
Kim, S. Y. (2006). When does decisional impairment become decisional incompetence? Ethical and methodological issues in capacity research in schizophrenia.
Schizophrenia Bulletin, 32(1), 92–97. https://doi.org/10.1093/schbul/sbi062
Kim, S. Y., Appelbaum, P. S., Kim, H. M., Wall, I. F., Bourgeois, J. A., Frankel, B., Hails,
K. C., Rundell, J. R., Seibel, K. M., & Karlawish, J. H. (2011). Variability of judgments
of capacity: Experience of capacity evaluators in a study of research consent capacity.
Psychosomatics, 52(4), 346–353. https://doi.org/10.1016/j.psym.2011.01.012
Kim, S. Y., Appelbaum, P. S., Swan, J., Stroup, T. S., McEvoy, J. P., Goff, D. C., Jeste, D. V.,
Lamberti, J. S., Leibovici, A., & Caine, E. D. (2007). Determining when impairment
constitutes incapacity for informed consent in schizophrenia research. The British
Journal of Psychiatry, 191(1), 38–43. https://doi.org/10.1192/bjp.bp.106.033324
Kim, S. Y., Karlawish, J. H., Kim, H. M., Wall, I. F., Bozoki, A. C., & Appelbaum, P. S.
(2011). Preservation of the capacity to appoint a proxy decision maker: Implications for dementia research. Archives of General Psychiatry, 68(2), 214–220. https://
doi.org/10.1001/archgenpsychiatry.2010.191
Kimmel, A. J. (2007). Ethical issues in behavioral research: Basic and applied perspectives
(2nd ed.). Blackwell Publishing.
Kitchener, K. S., & Kitchener, R. F. (2009). Social science research ethics: Historical
and philosophical issues. In D. M. Mertens & P. E. Ginsberg (Eds.), The handbook
of social research ethics (pp. 5–22). Sage Publications. https://doi.org/10.4135/
9781483348971.n1
Kush, R., & Goldman, M. (2014). Fostering responsible data sharing through standards.
The New England Journal of Medicine, 370(23), 2163–2165. https://doi.org/10.1056/
NEJMp1401444
Loverde, M. E., Prochazka, A. V., & Byyny, R. L. (1989). Research consent forms: Continued unreadability and increasing length. Journal of General Internal Medicine, 4(5),
410–412. https://doi.org/10.1007/BF02599693
Marson, D. C., Earnst, K. S., Jamil, F., Bartolucci, A., & Harrell, L. E. (2000). Consistency
of physicians’ legal standard and personal judgments of competency in patients with
Alzheimer’s disease. Journal of the American Geriatrics Society, 48(8), 911–918. https://
doi.org/10.1111/j.1532-5415.2000.tb06887.x
Marson, D. C., McInturff, B., Hawkins, L., Bartolucci, A., & Harrell, L. E. (1997). Consistency of physician judgments of capacity to consent in mild Alzheimer’s disease.
Journal of the American Geriatrics Society, 45(4), 453–457. https://doi.org/10.1111/
j.1532-5415.1997.tb05170.x
Meisel, A., Roth, L. H., & Lidz, C. W. (1977). Toward a model of the legal doctrine of
informed consent. The American Journal of Psychiatry, 134(3), 285–289. https://
doi.org/10.1176/ajp.134.3.285
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the
protection of human subjects of research. http://www.hhs.gov/ohrp/humansubjects/
guidance/belmont.html
National Research Act of 1974, Pub. L. No. 93-348, 88 Stat. 342 (1974).
Nishimura, A., Carey, J., Erwin, P. J., Tilburt, J. C., Murad, M. H., & McCormick, J. B.
(2013). Improving understanding in the research informed consent process: A systematic review of 54 interventions tested in randomized control trials. BMC Medical
Ethics, 14(1), Article 28. https://doi.org/10.1186/1472-6939-14-28
Office for Protection From Research Risks. (1993). Informed consent tips. https://www.
hhs.gov/ohrp/regulations-and-policy/guidance/informed-consent-tips/index.html
Palmer, B. W., Cassidy, E. L., Dunn, L. B., Spira, A. P., & Sheikh, J. I. (2008). Effective
use of consent forms and interactive questions in the consent process. IRB: Ethics &
Human Research, 30(2), 8–12. http://www.jstor.org/stable/30033264
Palmer, B. W., & Harmell, A. L. (2016). Assessment of healthcare decision-making
capacity. Archives of Clinical Neuropsychology, 31(6), 530–540. https://doi.org/10.1093/
arclin/acw051
Palmer, B. W., Harmell, A. L., Dunn, L. B., Kim, S. Y., Pinto, L. L., Golshan, S., & Jeste,
D. V. (2018). Multimedia aided consent for Alzheimer’s disease research. Clinical
Gerontologist, 41(1), 20–32. https://doi.org/10.1080/07317115.2017.1373177
Palmer, B. W., Lanouette, N. M., & Jeste, D. V. (2012). Effectiveness of multimedia aids
to enhance comprehension of research consent information: A systematic review.
IRB: Ethics & Human Research, 34(6), 1–15. http://www.jstor.org/stable/41756830
Palmer, B. W., & Savla, G. N. (2007). The association of specific neuropsychological deficits
with capacity to consent to research or treatment. Journal of the International Neuro­
psychological Society, 13(6), 1047–1059. https://doi.org/10.1017/S1355617707071299
Protection of Human Subjects, 45 C.F.R. Part 46 (2018). https://www.hhs.gov/ohrp/
sites/default/files/ohrp/policy/ohrpregulations.pdf
Roth, L. H., Meisel, A., & Lidz, C. W. (1977). Tests of competency to consent to treatment. The American Journal of Psychiatry, 134(3), 279–284. https://doi.org/10.1176/
ajp.134.3.279
Saks, E. R., Dunn, L. B., & Palmer, B. W. (2006). Meta-consent in research on decisional capacity: A “Catch-22”? Schizophrenia Bulletin, 32(1), 42–46. https://doi.org/
10.1093/schbul/sbj017
Sessums, L. L., Zembrzuska, H., & Jackson, J. L. (2011). Does this patient have medical decision-making capacity? JAMA, 306(4), 420–427. https://doi.org/10.1001/
jama.2011.1023
Sevick, M. A., McConnell, T., & Muender, M. (2003). Conducting research related to
treatment of Alzheimer’s disease: Ethical issues. Journal of Gerontological Nursing,
29(2), 6–9. https://doi.org/10.3928/0098-9134-20030201-05
Sieber, J. E. (1999). What makes a subject pool (un)ethical? In G. Chastain & R. E.
Landrum (Eds.), Protecting human subjects: Departmental subject pools and institutional
review boards (pp. 43–64). American Psychological Association. https://doi.org/
10.1037/10322-002
Simpson, C. (2010). Decision-making capacity and informed consent to participate in
research by cognitively impaired individuals. Applied Nursing Research, 23(4), 221–226.
https://doi.org/10.1016/j.apnr.2008.09.002
Stunkel, L., Benson, M., McLellan, L., Sinaii, N., Bedarida, G., Emanuel, E., & Grady, C.
(2010). Comprehension and informed consent: Assessing the effect of a short consent
form. IRB: Ethics & Human Research, 32(4), 1–9. www.jstor.org/stable/25703705
Sturman, E. D. (2005). The capacity to consent to treatment and research: A review of
standardized assessment tools. Clinical Psychology Review, 25(7), 954–974. https://
doi.org/10.1016/j.cpr.2005.04.010
Sweet, L., Adamis, D., Meagher, D. J., Davis, D., Currow, D. C., Bush, S. H., Barnes, C.,
Hartwick, M., Agar, M., Simon, J., Breitbart, W., MacDonald, N., & Lawlor, P. G.
(2014). Ethical challenges and solutions regarding delirium studies in palliative
care. Journal of Pain and Symptom Management, 48(2), 259–271. https://doi.org/
10.1016/j.jpainsymman.2013.07.017
Taber, C., Warren, J., & Day, K. (2016). Improving the quality of informed consent in
clinical research with information technology. Studies in Health Technology and
Informatics, 231, 135–142. https://doi.org/10.3233/978-1-61499-712-2-135
Taub, H. A. (1980). Informed consent, memory and age. The Gerontologist, 20(6),
686–690. https://doi.org/10.1093/geront/20.6.686
Taub, H. A., & Baker, M. T. (1983). The effect of repeated testing upon comprehension
of informed consent materials by elderly volunteers. Experimental Aging Research,
9(3), 135–138. https://doi.org/10.1080/03610738308258441
Tomlinson, T. (2013). Respecting donors to biobank research. The Hastings Center Report,
43(1), 41–47. https://doi.org/10.1002/hast.115
U.S. Public Health Service. (1973). Jay Katz addenda on Charge I. In Final report of the
Tuskegee Syphilis Study Ad Hoc Advisory Panel. U.S. Department of Health, Education
and Welfare.
Wang, Y.-Y., Wang, S.-B., Ungvari, G. S., Yu, X., Ng, C. H., & Xiang, Y.-T. (2018). The
assessment of decision-making competence in patients with depression using the
MacArthur Competence Assessment Tools: A systematic review. Perspectives in Psychiatric Care, 54(2), 206–211. https://doi.org/10.1111/ppc.12224
Wendler, D. (2013). Broad versus blanket consent for research with human biological
samples. The Hastings Center Report, 43(5), 3–4. https://doi.org/10.1002/hast.200
Whelan, P. J., Oleszek, J., Macdonald, A., & Gaughran, F. (2009). The utility of the
Mini-Mental State Examination in guiding assessment of capacity to consent to
research. International Psychogeriatrics, 21(2), 338–344. https://doi.org/10.1017/
S1041610208008314
World Medical Assembly. (1964). Declaration of Helsinki: Recommendations guiding
doctors in clinical research. https://www.wma.net/wp-content/uploads/2018/07/
DoH-Jun1964.pdf
World Medical Association. (2018). WMA Declaration of Helsinki—Ethical principles for medical research involving human subjects. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/
5
Addressing Conflicts of Interests in Behavioral Science Research
Gerald P. Koocher and Konjit V. Page
At the most basic level, we conduct research to understand our world and
thereby learn to make better decisions. Biases and reliance on compromised data undermine the very foundation of science. Failure to recognize and
acknowledge conflicting relationships that potentially compromise research
findings or mislead others can prove particularly insidious. Such failures can
flow from personal blind spots or intentional concealment of potential contaminating influences. Actively addressing real or perceived conflicts of interest
(CoIs) therefore sits at the very heart of scientific integrity.
We all make choices when confronted with opportunities for reward and
competing demands on our time, effort, resources, and responsibilities. For
researchers, ethical conflicts occur when contending with many competing
concerns that could potentially influence the scientific integrity of their work.
These concerns include honestly, objectively, and accurately reporting findings, even when doing so might prevent them from profiting financially;
achieving recognition via publication; or gaining some other type of reward.
Earning rewards or recognition tied to research does not pose any intrinsic
CoIs, so long as investigators retain their scientific integrity by following the
rules of science (e.g., methodological soundness, obedience to regulatory standards) and by disclosing any invisible factors that other scientists or members
of the public would want to know in understanding and evaluating the
research. Such hidden factors might include funding or other sponsorship of
the research, beneficial ownership of products or processes studied, or other
factors that might create an appearance of or actual CoIs leading to biased or
potentially compromised results.
One solution to addressing actual or apparent CoIs focuses on transparency.
By providing information that consumers of the research might find relevant,
potential conflicts become known and thereby aid consumers in fairly evaluating the findings. Some such CoIs occur in public view, as when research
supports the goals of a product or industry that has publicly funded the salary
of the investigator. Other CoIs may be hidden from public view, as when
concealing support for publication or withholding some research findings
could provide a financial benefit to researchers or members of their families.
Another type of CoI can occur in the form of a conflict of commitments when
a researcher has competing professional obligations, such as collaborating on
another project, preparing a new grant application, or serving as a peer reviewer.
For example, agreeing to take on an assignment with a firm deadline while
already committed to multiple other projects with similarly close deadlines creates
a conflict of interests invisible to each of the parties to whom an impossible
commitment for timely response has been assured. Still other CoIs can result
from rivalries or other personal motives unrelated to the research at hand.
The need to address CoIs or commitments occurs frequently in the lives of
researchers and is not inherently unethical. The ethical challenge arises in the
ways that researchers recognize (or fail to recognize) the conflict and the
manner in which they address it. Ethical researchers are honest about any
interests they have that may cause potential conflicts and inform others so
that a disinterested entity can independently assess researcher objectivity. In
other cases, investigators may opt out of or avoid engaging in research activities for which they may have real or apparent CoIs.
ETHICAL GUIDELINES AND OTHER STANDARDS
A good general summary of documentary guidance on research CoIs can be
found on the American Psychological Association (APA) website under the
heading of “Conflicts of Interests and Commitments” (APA, 2019). These documents include laws and federal agency standards, as well as a range of professional guidelines that address research with animal and human participants.
All typically address objectivity, integrity, and transparency.
Addressing CoIs in biomedical research, the Institute of Medicine (Lo &
Field, 2009) noted that regulatory policies should focus on prevention as a
strategy for protecting the integrity of professional judgment and assuring the
public trust. This report described how the primary goal of research becomes
subverted by secondary interests, especially when such interests are concealed.
The recommended guidelines focus on transparency, accountability, and fairness by pressing for disclosure in all venues where research is presented (e.g.,
written and oral announcements at professional lectures, author footnotes in
scientific publications).
The National Institutes of Health (NIH) issued revised regulations focused
on financial CoI disclosures in 2011 (see NIH, 2020). These carry particular
weight because they apply to any institution applying for or receiving NIH
research funding as well as to investigators participating in federally funded
research. Certain types of grants aimed at small businesses (e.g., Small Business
Innovation Research) and technology transfer grants are exempted because the
goal of such grants involves jump-starting self-sustaining businesses.
Definitions of prohibited behaviors under the NIH rules apply to institutions and to investigators, their spouses, and their dependent children. The
rules also define a “significant” financial CoI as involving an aggregate value
(i.e., salary, consulting fees, stock options, or other ownership interests)
received over 12 months exceeding $5,000. The rules do not bar receiving
more valuable returns but focus only on reporting. We regard the NIH criteria
for reporting as very permissive because of both the limited range of relationships deemed to warrant scrutiny and the dollar amounts
included as thresholds. Many other types of relationships and far less cash
value might well lead to investigator biases that compromise research.
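As a concrete illustration of the reporting rule just described, the sketch below sums a hypothetical investigator's payments over a trailing 12-month window and flags whether the $5,000 disclosure threshold is crossed. Only the 12-month window and the $5,000 figure come from the rule as summarized above; the payment records, dates, and function names are invented for illustration.

```python
# Hypothetical sketch of the NIH "significant financial interest" check described
# above: aggregate value received over 12 months exceeding $5,000 triggers reporting.
# All payment records below are illustrative, not drawn from any real case or system.
from datetime import date, timedelta

NIH_THRESHOLD_USD = 5_000  # reporting threshold cited in the chapter

payments = [  # (date received, amount in USD) for an investigator plus spouse/dependents
    (date(2020, 3, 15), 2_500),   # consulting fee
    (date(2020, 9, 1), 1_800),    # honorarium
    (date(2021, 1, 10), 1_200),   # estimated value of stock options
]

def requires_disclosure(payments, as_of, threshold=NIH_THRESHOLD_USD):
    """Return True if the aggregate value in the trailing 12 months exceeds the threshold."""
    window_start = as_of - timedelta(days=365)
    total = sum(amount for received, amount in payments
                if window_start <= received <= as_of)
    return total > threshold

print(requires_disclosure(payments, as_of=date(2021, 2, 1)))  # True: 2,500 + 1,800 + 1,200 = 5,500 > 5,000
```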
The APA’s (2017) Ethical Principles of Psychologists and Code of Conduct
(hereinafter referred to as the “APA Ethics Code”) addresses CoIs generally
in Standard 3.06, Conflict of Interest:
Psychologists refrain from taking on a professional role when personal, scientific,
professional, legal, financial, or other interests or relationships could reasonably be
expected to (1) impair their objectivity, competence, or effectiveness in performing
their functions as psychologists or (2) expose the person or organization with
whom the professional relationship exists to harm or exploitation.
Many other sections of the APA Ethics Code touch on potential ethical issues
that could radiate from the basic CoI statement, including Section 3.05, Multiple
Relationships, and Section 8, Research and Publication. The APA Ethics Code
also bars discrimination in Section 3.01, Unfair Discrimination, based on a
number of personal attributes including age, gender, gender identity, race, ethnicity, culture, national origin, religion, sexual orientation, disability, and socioeconomic status. Concealing CoIs may result in unfair discriminatory effects,
particularly with respect to economically disadvantaged or underrepresented
groups. Although these other sections might seem to offer a broader scope of
principled guidance and enforcement than other guidelines, the APA Ethics
Code’s provisions are relatively toothless from an enforcement perspective.
Many behavioral science researchers do not have professional licenses, and
most state licensing boards focus on harms caused by services offered to consumers, rather than research. In addition, a significant portion of active researchers in the behavioral and social sciences do not belong to APA, placing them
outside the enforcement penumbra of the APA Ethics Code. APA has also
reduced its level of attention to ethics enforcement significantly in recent years.
In 2018 the APA Ethics Committee Rules and Procedures underwent a revision
with respect to what the committee would investigate. Under the new policy,
The Committee will not proceed on a complaint if there is another body with
jurisdiction to take action with respect to the member’s license or registration
and generally will not investigate or adjudicate complaints that are the subject
of pending litigation or administrative proceedings. (Ethics Committee of the
American Psychological Association, 2018, section B.4.2)
As a result of this change, even significant concerns regarding CoIs in research
contexts would likely fall to “administrative proceedings” at researchers’ institutions or other agencies. Institutions may have little incentive or even strong
disincentives to formally investigate all but the most egregious CoI offenses
(Koocher & Keith-Spiegel, 2010, 2016). For example, by formally investigating
an internal CoI complaint, an institution might reveal lax policies that could
trigger a larger investigation by regulatory agencies. Adopting informal resolutions or overlooking some CoIs might seem a safer or more expeditious
approach, although such practices simply compound problems tied to the lack
of transparency.
Another type of conflict has manifested itself through professional practice
guidelines. The stated intent of such documents involves reaching a professional consensus on evidence-based treatments and best practices to guide
clinical practice. In the face of increasing concern that CoIs may affect the
development of clinical practice guidelines, Cosgrove and her colleagues (2013)
evaluated the American Psychiatric Association’s (2010) Practice Guideline for
the Treatment of Patients With Major Depressive Disorder. They sought to find any
potential financial or other CoIs and examine their possible effects on guideline development. They selected the depression guideline both because of its
influence on clinical practice and because it recommends pharmacotherapy for
all levels of depression, hence engaging the interests of the pharmaceutical
industry. The investigators sought to identify the number and type of financial
CoIs among members of the guideline development group as well as for an
independent panel charged with mitigating any biases related to these conflicts.
Fully 100% of the guideline development panel disclosed financial ties to
industry, with an average of 20.5 relationships (range: 9–33). A majority of the
committee disclosed participation as compensated members of pharmaceutical
companies’ speakers bureaus. Members of the independent panel assigned to
review the guidelines for bias also had undeclared financial relationships.
Cosgrove and her coauthors (2013) also examined the quality of references
used to support the panels’ recommendations, as well as the congruence
between research results and the recommendations. They reported that fewer
than half (44.4%) of the studies cited as supporting the guideline recommendations qualified as high quality. More than a third (34.2%) of the studies cited
did not include outpatient participants with major depressive disorder, and
17.2% did not assess clinically relevant results. A significant percentage
(19.7%) of the references cited did not comport with the recommendations.
Similar findings of unreported or underreported CoIs have also been noted
in international practice guideline contexts (Bindslev et al., 2013). Efforts to
explore intellectual or scholarly CoIs revealed that 9.1% of all cited research and
13.0% of references cited in support of the recommendations were coauthored
by six of the guideline developers.
These findings should not come as a surprise, as published experts were
sought out to serve on panels requiring their expertise. However, the high
prevalence of financial CoIs and the variable quality of the evidence cited
raise questions about the independence and clinical validity of the guideline recommendations. Despite increasing awareness and many attempts to
call attention to the problem, the public cannot rely solely on governmental
authority or professional associations to effectively eliminate the hidden
influence of CoIs in research.
PREVENTIVE STEPS
Prevention of troubling CoIs requires an understanding of the motives that
might underlie dishonesty. Behavioral scientists have had a long-standing
interest in the study of dishonest behavior as a human character flaw. In the
1920s, Hartshorne and May (1928–1930) began a series of studies on human
character in which they tempted children to engage in dishonest behavior.
They began with the commonsense assumption that virtuous behavior
presupposed possessing certain attitudes or knowledge (e.g., children who
belonged to the Boy Scouts or participated in moral or religious instruction
should regularly have demonstrated greater honesty than those who did not).
Their most significant finding showed that they could not categorize children
as virtuous or not, but rather that “moral behavior” appeared highly situational
or context specific. A substantial body of subsequent research (e.g., Bazerman
& Tenbrunsel, 2011; Burton, 1963; Koocher & Keith-Spiegel, 2010; Lo &
Field, 2009) has suggested that a more fruitful approach to the study of honesty
requires consideration of cognitive factors including the beliefs and motivation of the person in a specific context.
Distilled to its simplest elements, three factors contribute to people’s willingness to engage in dishonest behavior (Koocher & Keith-Spiegel, 2010).
First, the person must have a degree of moral flexibility or latitude that allows
them to set aside or ignore socially or professionally promoted ethical principles and moral conviction under some circumstances—for example,
I know that researchers are generally supposed to follow all the rules, but some of
those rules are simply advisory and can be overlooked in my study. It’s OK to
drop the few significant outliers from my data analyses and not mention it in the
write-up.
Second, the person must assess the situation as one in which the preferred
outcome or payoff reaches a level outweighing any tendency to adhere to
standard ethical principles. For example, in some cases people may engage in
a kind of cognitive calculus that permits them to contort reasoning in such a
way that they believe their choice justified:
If I get the right results, the payoff will be enormous. Plus, I’m sure I already
know how the results are going to look once all the data are in. The pharmaceutical company will support more research funding and offer me a big consulting
contract or stock options.
Finally, the person must believe that the likelihood of detection and being
held accountable for violation of ethical standards is low, thus assuring they
will proceed undetected and unchallenged—for example,
All the key people signed nondisclosure agreements, so no one will know if I
withhold information on the stock options I was offered for a statistically significant outcome.
In some ways, willingness to engage in dishonest behaviors parallels bystander
effects in social psychological research. Bystanders will intervene if they
(a) notice the situation in question, (b) interpret it as one requiring intervention, and (c) decide that they have a responsibility to act (Staub, 2015).
Meta-analytic research has demonstrated that bystanders often vary in the
key factors of noticing, interpreting, and deciding to take responsibility to act
based on context, such as perceived risk or danger, using an arousal–cost–
reward model (Fischer et al., 2011). These patterns may explain why some
researchers do not seem to recognize CoI situations or seek to conceal them.
Colleagues who notice problems and are willing to speak up can play a role in
reducing CoIs’ adverse effects, but many do not choose to become involved
(Koocher & Keith-Spiegel, 2016). Common reasons reported for not wanting
to raise questions after noticing CoI problems include uncertainty about the
facts, professional or personal ties to the putative perpetrator, and fears that
the host or employing institution would not support the person making a
complaint (Koocher & Keith-Spiegel, 2016).
Enacting and enforcing clear formal CoI disclosure policies can go far to set
a tone that will reduce the likelihood of promulgating unrecognized potentially biased findings. For example, mandating a public disclosure statement
of financial and other material support for speakers or their underlying
research at the start of any public presentation aids in transparency. Similarly,
disclosures at the beginning of any published research reports have a similar
effect (Gorman, 2019). Such requirements also tend to avoid concealment by
omission, as when an author reasons that there was no need to disclose potential conflicts because no one asked.
CASE EXAMPLES
Harvard Mood Disorders Study
On July 2, 2011, the Harvard Crimson reported that three nationally known
child psychiatrists—Joseph Biederman, Timothy Wilens, and Thomas Spencer—
had violated the CoI policies of Harvard Medical School and their employer, the
Massachusetts General Hospital (Yu, 2011). They had done so by failing to report
fully on the income they earned from pharmaceutical companies. For example,
between 2000 and 2007, Biederman reportedly earned at least $1.6 million but
reported only $200,000 to university and hospital authorities (Harris, 2008).
Details were covered extensively in the national press (e.g., The New York Times,
n.d.). Two of the professors, Biederman and Wilens, were leading investigators
and advocates for the diagnosis and treatment of pediatric bipolar disorder
(Harris, 2008). The third, Spencer, focused his research largely on attention-deficit/hyperactivity disorder.
Once thought rare in preadolescents, the controversial condition called
“pediatric bipolar disorder” became increasingly diagnosed in children, including
many preschoolers. Over a 10-year period beginning in the mid 1990s, the use
of the diagnosis increased 40-fold (Parens & Johnston, 2010). With support
from pharmaceutical companies, Biederman and Wilens gave talks and wrote
papers on the off-label use of mood stabilizers and antipsychotic drugs to treat
this condition (see U.S. Food and Drug Administration, 2018). This means that
they advocated for the use of pharmaceutical drugs in an unapproved age
group while earning substantial income from the manufacturers of the drugs.
Such conflicts of interests have been cited as a possible factor in the rise of
pediatric bipolar disorder diagnoses (Levin & Parry, 2011). Parents may find it
less stigmatizing to have a child diagnosed with an inherited biological or biochemical disorder as opposed to perhaps needing guidance, child counseling, or
even less appealing, a recommendation for personal psychotherapy.
In this case, the university and hospital took several actions. The psychiatrists were required to write a letter of apology to the rest of the faculty. They were prohibited from participating in income-producing activities for pharmaceutical companies for 1 year and were required to obtain university and hospital permission to participate in income-producing activities for 2 years after the year of exclusion.
University of Pennsylvania Genetic Trials
Jesse Gelsinger, who had a rare liver disease, ornithine transcarbamylase deficiency (OTCD), died while participating in a gene therapy trial at the University
of Pennsylvania. OTCD is an X-linked genetic disorder that prevents the breakdown and excretion of ammonia, allowing it to accumulate to toxic levels and
compromising the central nervous system. One of the lead researchers, James
Wilson, held shares in a biotech company, Genovo, that stood to gain from the
research’s outcome. During investigations that followed Gelsinger’s death,
internal university documents implicitly valued Wilson’s stake in Genovo at
approximately $28.5 million to $33 million (Wilson, 2010).
Gelsinger’s death sparked two separate lawsuits, one by the family and one
by the federal government. Both suits were settled, with no public apologies
or acknowledgment of wrongdoing in either case (Wilson, 2010). In a compelling narrative and careful legal analysis, law professor Robin Wilson argued that financial interests should trigger significant, ongoing review of
any related clinical trials. She believed that if James Wilson’s outsized financial stake had triggered mandatory monitoring, people inside the university
health care system would have stumbled on a string of questionable decisions
(including significant departures from the research protocol) long before
those mistakes culminated in Gelsinger’s death.
What Can We Learn From These Cases?
Clearly, both researchers and their universities differ significantly in their
recognition, disclosure, and actions with respect to CoIs. A recent report by
The Chronicle of Higher Education and ProPublica (Waldman & Armstrong,
2019) found that many public universities actually refuse to reveal faculty
members’ CoIs.
As a prototypical example, Waldman and Armstrong (2019) cited Sharon
Donovan and Richard Mattes, both full professors of nutrition science at
major universities who served on a federal advisory committee that drafted
the nation’s dietary guidelines. Both had extensive ties to the food industry,
but that connection was not easy to discover. Donovan, a faculty member at
the University of Illinois flagship campus, reported receiving gifts, honoraria,
or consulting income from companies selling infant formula (e.g., Wyeth
Nutrition, Mead Johnson) on forms that the state made available to the public.
However, similar reports that Mattes filed for administrators at Purdue University in neighboring Indiana were considered confidential “personnel records.”
Only by searching the footnotes of journal articles could Waldman and
Armstrong learn that he had received payments from nine companies and
trade groups (e.g., Procter & Gamble, PepsiCo, Alliance for Potato Research
and Education). The authors concluded that the public often knows very little
about faculty members’ outside activities that could influence their teaching,
research, or public policy views (Waldman & Armstrong, 2019).
It seems highly desirable that well-regarded experts with sound research
and scholarly knowledge should be called upon as consultants when framing
policy, developing new products, or recommending professional-level care to
others. It also seems most appropriate to compensate such experts for their
time and effort in performing such activities. At the same time, expert opinion
should not bend as private quid pro quo. By analogy, when testifying as an
expert witness, the hourly rate charged by the expert must be disclosed or will
certainly be asked on cross-examination. If asked, “How much are you being
paid for your opinion?” the expert should be able to honestly affirm, “I am
being paid for my time and my work, but my opinion is not for sale.”
CONCLUSION
It should be reiterated that there is no inherent ethical conflict in having
multiple professional and personal relationships, so long as objectivity, competence, and effectiveness in performing one’s professional functions remain
unencumbered and no exploitation takes place. That said, it seems clear that
many skilled professionals can have significant blind spots with respect to
their own real or apparent biases. Similarly, many institutions and businesses
can have incentives to overlook such biases. The solution rests on transparency and the willingness of individuals to recuse themselves from decision
making when and where unfair bias may exist or may appear relevant to
other affected parties.
REFERENCES
American Psychological Association. (2017). Ethical principles of psychologists and code of
conduct (2002, amended effective June 1, 2010, and January 1, 2017). http://www.
apa.org/ethics/code/ethics-code-2017.pdf
American Psychological Association. (2019). Conflicts of interests and commitments.
https://www.apa.org/research/responsible/conflicts/
American Psychiatric Association. (2010). Practice guideline for the treatment of patients
with major depressive disorder. https://psychiatryonline.org/pb/assets/raw/sitewide/
practice_guidelines/guidelines/mdd.pdf
Bazerman, M. H., & Tenbrunsel, A. G. (2011). Blind spots: Why we fail to do what’s
right and what to do about it. Princeton University Press. https://doi.org/10.1515/
9781400837991
Bindslev, J. B. B., Schroll, J., Gøtzsche, P. C., & Lundh, A. (2013). Underreporting of
conflicts of interest in clinical practice guidelines: Cross sectional study. BMC Medical
Ethics, 14, Article 19. https://doi.org/10.1186/1472-6939-14-19
Burton, R. V. (1963). Generality of honesty reconsidered. Psychological Review, 70(6),
481–499. https://doi.org/10.1037/h0047594
Cosgrove, L., Bursztajn, H. J., Erlich, D. R., Wheeler, E. E., & Shaughnessy, A. F.
(2013). Conflicts of interest and the quality of recommendations in clinical guidelines. Journal of Evaluation in Clinical Practice, 19(4), 674–681. https://doi.org/10.1111/
jep.12016
Ethics Committee of the American Psychological Association. (2018). Rules and procedures. https://www.apa.org/ethics/committee-rules-procedures-2018.pdf
Fischer, P., Krueger, J. I., Greitemeyer, T., Vogrincic, C., Kastenmüller, A., Frey, D.,
Heene, M., Wicher, M., & Kainbacher, M. (2011). The bystander-effect: A meta-analytic review on bystander intervention in dangerous and non-dangerous emergencies. Psychological Bulletin, 137(4), 517–537. https://doi.org/10.1037/a0023304
Gorman, D. M. (2019). Use of publication procedures to improve research integrity by
addiction journals. Addiction, 114(8), 1478–1486. https://doi.org/10.1111/add.14604
Harris, G. (2008, October 4). Top psychiatrist didn’t report drug makers’ pay. The New
York Times. https://www.nytimes.com/2008/10/04/health/policy/04drug.html
Hartshorne, H., & May, M. A. (1928–1930). Studies in the nature of character. Macmillan.
Koocher, G. P., & Keith-Spiegel, P. (2010). Peers nip misconduct in the bud. Nature,
466(7305), 438–440. https://doi.org/10.1038/466438a
Koocher, G. P., & Keith-Spiegel, P. C. (2016). Ethics in psychology and the mental health
professions: Standards and cases (4th ed.). Oxford University Press.
Levin, E. C., & Parry, P. I. (2011). Conflict of interest as a possible factor in the rise of
pediatric bipolar disorder. Adolescent Psychiatry, 1(1), 61–66. https://doi.org/10.2174/
2210676611101010061
Lo, B., & Field, M. J. (Eds.). (2009). Conflict of interest in medical research, education, and
practice. Institute of Medicine, Board on Health Sciences Policy, Committee on
Conflict of Interest in Medical Research, Education, and Practice.
National Institutes of Health. (2020). Required submission of financial conflict of interest policy
into the eRA Commons Institution Profile (IPF) module (Notice No. NOT-OD-21-002).
https://grants.nih.gov/grants/guide/notice-files/NOT-OD-21-002.html
The New York Times. (n.d.). Times topics. http://topics.nytimes.com/topics/reference/
timestopics/people/b/joseph_biederman/index.html
Parens, E., & Johnston, J. (2010). Controversies concerning the diagnosis and treatment of bipolar disorder in children. Child and Adolescent Psychiatry and Mental Health,
4, Article 9. https://doi.org/10.1186/1753-2000-4-9
Staub, E. (2015). The roots of goodness and resistance to evil: Inclusive caring, moral
courage, altruism born of suffering, active bystandership, and heroism. Oxford University
Press.
U.S. Food and Drug Administration. (2018). Understanding unapproved use of approved
drugs “off label.” https://www.fda.gov/patients/learn-about-expanded-access-and-other-treatment-options/understanding-unapproved-use-approved-drugs-label
Waldman, A., & Armstrong, D. (2019, December 6). Many public universities refuse to
reveal professors’ conflicts of interest. Chronicle of Higher Education. https://www.
chronicle.com/article/Many-Public-Universities/247667
Wilson, R. F. (2010). The death of Jesse Gelsinger: New evidence of the influence of
money and prestige in human research. American Journal of Law & Medicine, 36(2–3),
295–325. https://doi.org/10.1177/009885881003600202
Yu, X. (2011, July 2). Three professors face sanctions following Harvard Medical School
inquiry. Harvard Crimson. http://www.thecrimson.com/article/2011/7/2/school-medical-harvard-investigation
6
Data Sharing
Rick O. Gilmore, Melody Xu, and Karen E. Adolph
Psychologists embrace the ethical imperative of protecting research participants from harm. We argue that sharing data should also be considered an
ethical imperative. Despite potential risks to participants’ privacy and data
confidentiality, sharing data confers benefits to participants and to the community at large by promoting scientific transparency, bolstering reproducibility, and fostering more efficient use of resources. Most of the risks to
participants can be mitigated and the benefits of sharing realized through
well-established but not yet widespread practices and tools. This chapter
serves as a how-to manual for addressing ethical challenges in sharing human
data in psychological research in ways that simultaneously protect participants and advance discovery.
THE ETHICAL IMPERATIVE TO SHARE HUMAN RESEARCH DATA
In a typical psychology study, researchers collect some data, process and analyze it, and then publish a report of their findings. Although other researchers
can read the publication, typically they have no access to the original data,
the context in which it was collected, or the various incarnations of the data
during processing and analyses (Adolph, 2020). Descriptions in the Method and Results sections of journal articles are necessarily incomplete—at best, crude road maps of data collection and data processing. No matter how detailed, graphs and statistics cannot portray the richness of the original data; they can only summarize interpretations of the data. Thus, progress in psychological science relies on a considerable amount of blind trust (Heidorn, 2008).
Author note: Work on this chapter was supported by grants from the James S. McDonnell Foundation (220020558), the Defense Advanced Research Projects Agency (HR001119S0005-MCS-FP-035), and the Eunice Kennedy Shriver National Institute of Child Health and Human Development (R01-HD-094830).
Moreover, the data that comprise all the steps from data collection to
published report remain private. Although still potentially useful, the data go
unused and the analysis scripts disappear into the lab ether. Potential resources
are needlessly wasted.
Data Sharing Increases Transparency and Reproducibility
Data sharing supports transparency and reproducibility, two tenets of research
that distinguish science from religion, opinion, and fake news. Indeed, data
sharing is so critical for transparency and reproducibility that it is an ethical
imperative for scientific progress (Gennetian et al., 2020). It is an antidote to
blind trust.
However, Psychological Science, the flagship journal of the Association for
Psychological Science (2020), and top-tier journals of the American Psychological Association (2020; e.g., Journal of Experimental Psychology, Journal
of Personality and Social Psychology, Developmental Psychology) recommend—but
do not require—data sharing. Science, a premier outlet for general science,
requires researchers to share only the final processed data, not the original
data (American Association for the Advancement of Science, 2020). Unfortunately, without the original data, readers cannot know exactly what happened
during data collection—who said and did what to whom, and how and in
what context. And readers cannot know exactly what transpired over the
course of data processing—how data were scored, filtered, or smoothed;
which data points were removed and for what reasons; and so on. In effect,
readers must trust scientific reports without sufficient means to verify them.
Moreover, without full transparency over the entire research workflow,
researchers cannot identify the culprit for failures to replicate. Although outright
fraud is presumably rare (Bhattacharjee, 2013; Couzin-Frankel, 2014), lack of
reproducibility may result from small effects, as many doubters fear (Nosek
et al., 2015). Or the problem may stem solely from lack of transparency: Truly
robust results fail to be replicated because new researchers do not know exactly
what the previous researchers did (Adolph, 2020; Gilmore & Adolph, 2017;
Suls, 2013). Total transparency may seem a far-off, utopian ideal (Adolph et al.,
2012; Nosek et al., 2012). But scientists have an ethical obligation to document
procedures and findings in ways that others can readily reproduce. To maximize reproducibility, data processing, visualization, and analyses should be
automated with scripts, rather than done manually with button clicks in a
computer program; the scripts should be annotated and documented and
shared along with the data. Preregistration is a positive step (Nosek et al., 2019;
Nosek & Lindsay, 2018), but it is no panacea. Preregistrations, like published
reports, contain insufficient detail to allow exact replication of procedures and
analyses, and researchers often stray from preregistered analysis plans (Claesen
et al., 2019).
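As one concrete illustration of the scripting recommendation above, the sketch below shows a minimal, fully scripted cleaning step in which every exclusion rule is written down and rerunnable rather than applied by hand. The file names, column names, and thresholds are hypothetical; the point is the practice, not the particulars.

```python
# A minimal sketch of a scripted (rather than point-and-click) processing step,
# in the spirit of the recommendation above. All file and column names are
# invented for illustration.
import pandas as pd

RAW_FILE = "data/raw/reaction_times.csv"              # hypothetical raw export
CLEAN_FILE = "data/processed/reaction_times_clean.csv"

def clean(df):
    """Document every exclusion so others can reproduce (or question) it."""
    df = df.dropna(subset=["rt_ms"])                  # drop trials with missing responses
    df = df[df["rt_ms"].between(200, 2000)]           # exclusion rule stated explicitly
    return df

raw = pd.read_csv(RAW_FILE)
clean_data = clean(raw)
clean_data.to_csv(CLEAN_FILE, index=False)            # processed data saved alongside the script
print(clean_data.groupby("condition")["rt_ms"].describe())
```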
Video is a better solution for comprehensive documentation of procedures.
Video clips of a typical (or exemplary) data collection with real (or simulated)
participants, from beginning to end, can document the procedures, displays,
and surrounding context with more richness and fidelity than are possible
with text-based descriptions or static images (Adolph, 2020; Adolph et al.,
2017; Gilmore & Adolph, 2017; Suls, 2013). For behavioral data (on anyone
doing anything), video recordings provide a detailed, dynamic record of what
happened, such as what the lab room looked like, how procedures were
explained, or the timing of experimental displays. For physiological, imaging,
and biological data, video documentation records potentially crucial aspects
of data collection, such as how researchers explained the technology, got
participants to lie still in the MRI machine, put recording devices on participants, or collected blood or spit samples. We argue that video documentation
of procedures should be standard in psychological science—whether or not
video is a primary source of raw data (Suls, 2013). Many outlets are available
for sharing procedural videos. The Journal of Visualized Experiments (2020)
provides voiced-over video documentation of procedures, but publishing is
costly. Alternatively, procedural videos can be shared alongside publications
as supplemental materials in many journals for free or the cost of the article.
And regardless of the existence of publications, extensive video documentation of procedures and displays can be shared for no cost at Databrary.org
and on the Open Science Framework (osf.io).
In addition to documenting procedures, video excerpts provide compelling
documentation of research findings in behavioral studies. Whereas a picture
is worth a thousand words, a video is worth a thousand pictures (Adolph,
2020). Of course, videos can be seen differently by different eyes, so video
documentation will not diminish the vigor of scientific debate, but it will shift
the focus (Gilmore, 2016; Gilmore et al., 2018). Moreover, the perceived costs
of video sharing to researchers (e.g., time to collect, curate, and store data)
and risks to participants (the need to protect personally identifiable information from unauthorized disclosure or misuse) can be mitigated by building
on previously proven technologies and established policy frameworks, which
are discussed later in this chapter.
Data Reuse Leverages Resources
Psychological research is expensive. Federal agencies spend billions of taxpayer
dollars annually on behavioral research. Thus, researchers have an ethical
obligation to use grant-funded support to maximize benefits to research
participants and to the public (Brakewood & Poldrack, 2013). Nevertheless,
valuable data and the resources expended for data collection are routinely
wasted. Data “shared” on researchers’ lab servers, websites, and institutional
repositories may be unfindable by others. Researchers at different sites collect
similar data supported by different grants because psychology laboratories are
siloed. Researchers must reinvent procedures for themselves because they
do not know the details of similar procedures used in prior work.
The current situation is fraught, but data sharing can help. Data sharing
speeds scientific progress because researchers can revisit and reuse existing data
to ask questions outside the bounds of the original study—often questions
never imagined by the original researchers. As Heidorn (2008) put it, “The true
worth of the data is not determined by the cost for gathering it but in the
savings incurred by someone else not needing to gather it again” (p. 290).
ETHICAL CONSIDERATIONS IN SHARING DATA
If data sharing is an ethical practice and a boon to discovery, why do so few
psychological researchers share? In most areas of psychological research, data
sharing is not a scientific necessity; it is a choice. It is not mandated by funders,
journals, or societies, and it is not rewarded or supported by academic institutions. Moreover, researchers must overcome the inertia of the status quo to
acquire the know-how and invest the time and effort to plan for sharing and
to curate shared data to make them findable and reusable. And, of course, the
ethical challenges in sharing human research data contribute to researchers’
reluctance to share.
Sharing by Necessity or by Choice
In some fields, researchers cannot acquire the necessary data on their own.
Data collection may be infeasible, impractical, or prohibitively expensive for a
single research team. Particle physics, for example, is so expensive (e.g., billions
of dollars to fund the Large Hadron Collider) and physically large (the collider has
a circumference of 16.6 miles) that the science requires large-scale, international data sharing (European Council for Nuclear Research, 2020). In other
fields, only a shared corpus of data is sufficiently large to match the scope
of the subject matter. Meteorology (weather science) and seismology (earthquake science) require international data sharing because the scope of the
science eclipses the capacity of single data collection sites. So, historically, data
sharing has an inverse relation with independent research. That is, the greater
the difficulty of conducting research alone or in small groups, the more likely
it is for that research community to share data.
What about psychological science? Sharing is necessary to collect sufficiently large and diverse samples for large-scale studies of social outcomes,
health and disease, human genetics, language, neural imaging, and so on
(Frank, 2020; Hood & Rowen, 2013; Inter-University Consortium for Political
and Social Research, 2020; MacWhinney, 2020; Manolio et al., 2008; Poldrack
& Gorgolewski, 2014). But most conventional psychological research does
not require shared data. This is especially true for studies that rely on samples
of convenience—undergraduates in introductory courses, people living near
the research lab, and, in the case of online data collection, people with access
to the Internet and time to spare. Thus, typical research practices impose no
pressure to share. For most psychology researchers, data sharing is an ethical
choice, not a necessity.
Ethical Considerations in Sharing Human Data
Ethics boards rely on widely recognized guidelines such as the Belmont Report
and the Declaration of Helsinki to ensure participants’ health and well-being
and to protect their privacy and the confidentiality of their data (National
Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979; World Medical Association, 2013). However, less widely
recognized considerations apply to every stage of data sharing, including the
decision to collect data, data collection, data curation, data contribution, and
data reuse.
Even the decision to collect new data involves ethical issues. Are new participants warranted? Although some subdisciplines in psychology rely heavily
on secondary data analyses, most do not. However, like primary data collection, data reuse entails meaningful scholarship. Moreover, data reuse may be
cheaper and more efficient, incur less risk, and provide more benefits to
participants (e.g., by minimizing their time and effort) than primary data
collection. Existing data can be culled from multiple sources and used in
combination with new data to minimize participant effort or to expand a
sample. Indeed, the ethics of collecting small, underpowered samples are
questionable (Fleming & Reynolds, 2008), and data reuse can help address
the limitations of small samples and samples of convenience. Furthermore,
the bulk of psychological research focuses on only a tiny proportion of the
world’s population (Henrich et al., 2010), an ethical violation of a different
sort. New data collection targeting underrepresented groups and widespread
data sharing may help mitigate the consequences of building a human science
on a narrow slice of humanity (Gilmore et al., 2020).
Perhaps the most obvious ethical challenge during data collection concerns
participants’ permission to share their identifiable data (Meyer, 2018). Permission to share, which can occur at any time, is distinct from consent to
participate, which must occur before participation. Thus, researchers (and
ethics boards) can choose whether to separate or combine these two requests.
However, if researchers do not lay the foundation for sharing (e.g., obtain
appropriate approval from their ethics board) and request participants’ permission in real time, later sharing is often infeasible. Whereas consent to
participate involves telling participants about the procedures and general
aims of the study, permission to share is open ended. Shared data can be
reused to address aims far beyond the scope of the original study, and new
technologies (e.g., machine learning algorithms) expand the universe of
potential reuses. So in seeking permission to share data, researchers should
not (and likely cannot) specify how shared data will be used in the future. But
researchers can and should tell participants what data will be shared, who
will have access, where data will be stored, and for how long. And researchers
should inform participants about the potential benefits and risks of sharing
their data.
A less obvious ethical challenge concerns the instruments and tools used to
collect data. Shared data should be interoperable, in formats readable by open
source or widely available tools (Wilkinson et al., 2016). Thus, use of proprietary instruments (e.g., self-report instruments copyrighted by publishers) can
preclude sharing the original, unprocessed data, and proprietary data formats
can require researchers to purchase the tools to enable data reuse (e.g., imaging
and physiological data).
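A hedged sketch of one way to address the interoperability concern just described: exporting data from a proprietary statistical format into plain CSV, together with the variable labels that would otherwise be lost. The example assumes the open-source pyreadstat package and an SPSS file with a hypothetical name; any comparable open reader would serve the same purpose.

```python
# Sketch only: converting a proprietary SPSS file to an open, tool-neutral format
# so reusers do not need to buy the original software. The file names are hypothetical.
import pyreadstat

df, meta = pyreadstat.read_sav("study1_responses.sav")   # proprietary SPSS format
df.to_csv("study1_responses.csv", index=False)           # plain CSV for reuse

# Preserve the variable labels, which the CSV alone would not carry.
with open("study1_codebook.txt", "w") as f:
    for var, label in meta.column_names_to_labels.items():
        f.write(f"{var}: {label}\n")
```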
Data collection rarely incorporates curation—quality assurance, cleaning,
organization, and documentation. However, researchers should curate their data
during data collection and processing, rather than as an agonizing follow-up
exercise after their findings are published. Ethical data sharing requires
data to be findable (e.g., by tagging it with appropriate search terms and key
words) and reusable by including sufficient documentation about the data
and its provenance (Wilkinson et al., 2016). Thus, data sets require thoughtful
curation. Sharing a spreadsheet with obscure column headers and limited
documentation may meet the letter of a journal’s or funder’s requirements
but not the spirit. Curation is less onerous if researchers collect, process, and
analyze their data with an eye toward sharing.
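As a small illustration of curating "with an eye toward sharing," the sketch below writes a machine-readable codebook alongside a data file so that column names are documented and the data set can be found by keyword. The data set name, keywords, and variables are all hypothetical.

```python
# Illustrative sketch: a small machine-readable codebook shared alongside a data
# file, so columns are not "obscure headers" and the data set is findable.
# All names and values are hypothetical.
import json

codebook = {
    "dataset": "infant_locomotion_study1",
    "keywords": ["infant", "walking", "motor development"],   # aid findability
    "collection_dates": "2019-06 to 2020-02",
    "variables": {
        "subj_id": "Anonymous participant code assigned at enrollment",
        "age_mo": "Age at the test session, in months",
        "steps_per_min": "Mean walking steps per minute across the session",
    },
}

with open("infant_locomotion_study1_codebook.json", "w") as f:
    json.dump(codebook, f, indent=2)
```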
The most critical part of data curation is to respect participants’ permission
to share. Participants may share personally identifiable data, including their
name, genome, sexual activity, crimes, and romantic relationships. But many
participants, researchers, and ethics boards prefer to share “deidentified” data
stripped of personal identifiers (Joel et al., 2018). Regardless, private or sensitive information must be adequately protected (Office for Human Research
Protections, 2017). At first blush, the distinction between public (e.g., birth
records) and private or sensitive information (e.g., medical records) may seem
obvious. However, the line is not clear cut. Publicly shared information in
the United States (e.g., party enrollment of registered voters, religious affiliations) may constitute private, highly sensitive information in other countries.
Similarly, the distinction between sensitive and nonsensitive information
changes across generations. The current prevalence of social media platforms,
for example, encourages people to publicly share private snippets of their
lives in words, photographs, and videos. As evidenced by Facebook’s exchange
of contact list data with companies like Yahoo and Amazon, people’s perception of private information and what is actually private undergo rapid and
dramatic change. As an article in The Atlantic put it, “Facebook didn’t sell your
data; it gave it away” (Madrigal, 2018).
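Returning to the deidentification point above, the sketch below strips direct identifiers from a tabular file before sharing. The file and column names are hypothetical, and, as the shifting public/private boundaries just discussed suggest, removing direct identifiers alone does not guarantee that participants cannot be reidentified from the remaining fields.

```python
# Minimal sketch of removing direct identifiers before sharing a "deidentified"
# tabular file, as many researchers and ethics boards prefer. Column names are
# hypothetical; indirect identifiers may still permit reidentification.
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "email", "date_of_birth", "home_address"]

df = pd.read_csv("survey_responses_with_identifiers.csv")   # hypothetical private file
deidentified = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
deidentified.to_csv("survey_responses_shared.csv", index=False)
```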
Ethical sharing requires data to be accessible (Wilkinson et al., 2016).
Researchers and their ethics boards must decide when to share data, who
should have access, and how access should be controlled. Embargoes until a
paper goes to press, for a fixed period after data collection ends, or at the end
of a grant period are justifiable. But indefinite, promissory time periods often
stretch into “never” (Ascoli, 2006; Meyer, 2018). When granting access, some
repositories, journals, and researcher websites delegate control to the data
contributor. That is, potential users must contact the contributor to request
access. This is an ethically dubious practice if restricting access is self-serving
(e.g., blocks access to scholarly rivals), and “outing” oneself to or engaging
in negotiations with the data contributor may discourage certain groups of
researchers (e.g., young or new investigators, underrepresented minorities,
potential detractors) from requesting access. A similarly dubious practice is to
allow researchers access only in exchange for coauthorship on resulting papers.
Open access (without data contributors vetting reusers’ identities and purposes
or mandating coauthorship) is the most ethical stance: It maximizes the benefits of research participation and ensures that access is equitable and unbiased.
However, data contributors should get scholarly credit for their work. Most
recognized repositories generate standard citation formats and persistent identifiers (e.g., digital object identifiers [DOIs]) for data sets, making citation easy.
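For illustration, one common way to turn a data set's DOI into a formatted citation is DOI content negotiation at doi.org, which Crossref and DataCite support. The sketch below uses a placeholder DOI and the Python requests package; it is not tied to any particular repository's workflow.

```python
# Hedged sketch: requesting a formatted citation for a data set via DOI content
# negotiation at doi.org. The DOI below is a placeholder, not a real identifier.
import requests

doi = "10.1234/example-dataset"  # placeholder DOI
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "text/x-bibliography; style=apa"},  # ask for an APA-style citation
    timeout=10,
)
print(resp.text if resp.ok else f"Could not resolve DOI (status {resp.status_code})")
```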
The final step in the data-sharing life cycle is reuse, and data reusers also
have ethical obligations and deserve ethical protections. In addition to citing
the data contributor, data reusers should address data errors in ways that
minimize harm to the contributor while advancing the science (Gilmore et al.,
2020). If researchers add to the data set, modify it, or integrate across shared
data sets, they should share the new data set, too. Researchers should be free
to reuse data as they see fit, including in ways that are critical of the original
data set or data contributor.
CASE STUDY: DATABRARY
Our perspectives on the ethics of data sharing come from experience creating
and supporting the Databrary video data library. Databrary (databrary.org)
provides a case study to illustrate how ethical challenges in sharing personally
identifiable human data can be addressed and how Databrary handles situations that involve potential harm to research participants, contributors, and
data reusers.
About Databrary
Databrary is the world’s only data repository specialized for storing and sharing
research video. As of early 2021, the library held 72,000+ hours of shared
video, and the Databrary network included 1,265 authorized researchers and
587 of their affiliates from 622 institutions around the globe. Researchers’
use of Databrary is free, including technical support and use of the Datavyu
video coding tool (datavyu.org). For historical reasons, most videos depict
children, but Databrary is designed to share research videos of human and
nonhuman animal behavior at any age.
Video is a uniquely powerful medium for documenting procedures and
findings and for training and educational purposes. Research video is also an
inexpensive, widely available, and uniquely powerful source of raw material
for reuse. Video captures the richness of behavior and the subtle details of the
surrounding context, and it allows multiple observers to revisit the same
events with fresh eyes to ask new questions. In addition, compared with data
in proprietary formats and processed data, raw research video requires relatively little information about the data provenance. Nonetheless, the virtues of video also pose tough ethical challenges. In particular, video contains
personally identifiable information (e.g., people’s faces, voices, names; the
interiors of their homes or classrooms), and it cannot be easily anonymized
without losing much of its potential for reuse.
Databrary’s ethical framework protects participants’ privacy and the confidentiality of their shared data. It also protects the rights of data contributors
and data reusers while ensuring the integrity of the data. The policy framework rests on the Databrary access agreement (Databrary, 2020a), which is a
legal contract between researchers, their institutions, and Databrary’s host
institution, New York University (NYU). To become “authorized investigators”
with access to Databrary, researchers must promise to respect participants’
wishes about sharing, to treat other researchers’ data with the same high standards as they treat their own, and to be responsible for the treatment of data
by people they supervise (“affiliates” such as students, staff, and collaborators).
Institutions are responsible for their researchers’ conduct on Databrary, so
most institutions limit authorization to faculty or full-time independent
researchers. In contrast to most data use agreements, the Databrary access
agreement gives researchers open access to all shared data in the library, and it
allows them to contribute data themselves with the appropriate permissions.
Fundamentally, the Databrary policy framework for sharing and protecting
data rests on two pillars: enabling open access among a restricted community
of researchers and obtaining explicit permission to share from participants.
Open Access to Shared Data Among a Restricted Community of Researchers
Databrary limits access to all nonpublic data to its community of authorized
researchers. Among this community, access is open to all shared data, so
would-be users need not contact contributors to request access to their data
or specify to contributors how they will use the data. Reciprocally, contributors do not know who accessed their data or for what purposes, at least until
a publication citing the data set appears.
As shown in the top left of Figure 6.1, when researchers apply for authorization, Databrary staff work with the institution to approve the access agreement. Some institutions require an ethics board to review the agreement;
most handle authorization like a formal contract or data use agreement. After
institutions authorize them, researchers can, in turn, give their students, staff,
and trainees access to their own data or to all shared data on Databrary.
Requiring a formal contract that holds institutions responsible for their authorized investigators, and authorized investigators responsible for their trainees
and staff, limits accessibility to a known community of researchers, bolsters accountability, and provides protections for shared data.

FIGURE 6.1. Scenarios for Sharing and Reusing Data and Materials Stored on Databrary
[Figure not reproduced here. It depicts a flowchart in which a researcher registers and requests authorization, the institution authorizes the researcher, and the researcher can then authorize affiliates (students, lab staff, and postdocs). Four scenarios branch from that point: Scenario 1, data sharing (the ethics board approves uploading and sharing, participants consent to share their data, and the researcher or an affiliate with permission can upload data and share displays, materials, and deidentified data, or identifiable data with appropriate consent); Scenario 2, educational, training, or nonresearch purposes (the researcher or an affiliate with permission can browse and download shared data for those purposes); Scenario 3, secondary analyses or research reuse (the ethics board approves data reuse, and the researcher or affiliate can download and reuse shared data and must cite them); and Scenario 4, public browsing and citizen scientist reuse (anyone can browse, download, and reuse public data).]
Note. The figure summarizes the main steps involved in four different data sharing and use scenarios involving Databrary.
Data Security
NYU (2020) has classified data stored on Databrary as moderate risk and
evaluates and maintains security measures accordingly. To ensure data
safety, storage and backup are geographically distributed. Moreover, Databrary is a member of the Data Preservation Alliance for the Social Sciences
(https://www.data-pass.org/), a group of social science data repositories that
share data and metadata across groups to reduce the likelihood of data loss.
NYU committed to storage and preservation in perpetuity.
What about unauthorized access? Each account is password protected,
and Databrary requires researchers to follow the security practices recommended by their institutions, such as downloading data only to protected
computers on campus or to encrypted hard drives on laptops. Data breaches
or unauthorized access in the labs of researchers who use shared data are the
responsibility of those researchers and their institutions. Like other authorized investigators, Databrary staff have no access to unshared private data
uploaded by researchers.
Change in Researchers’ Status
What happens if authorized investigators change institutions, retire, or die?
Or if a student leaves the lab or gets a faculty position? Researchers and institutions are responsible for maintaining their own status and that of their
affiliates. They cannot access other people’s shared data without appropriate
institutional authorization. Institutions decide whether to allow former faculty
to retain control over data they contributed or whether to appoint a data
steward. Databrary will continue to share contributed data unless directed to
do otherwise by the original institution to ensure that changes in the status of
data contributors do not disrupt its availability to others.
Explicit Permission to Share
The second pillar of the Databrary policy framework involves securing explicit
permission to share data. As illustrated in Scenario 1, shown in Figure 6.1,
data contributors must secure permission from their ethics board and research
participants. The ethics board must agree that data can be stored on Databrary
and shared according to participants’ wishes, and participants must specify
their preferred level of sharing. If participants say no to sharing beyond the
original research team, the data default to “private.” Researchers must manually change the permissions setting if participants agree to share all (or portions) of their data with other Databrary researchers (“authorized users”), to additionally allow video excerpts to be shown (“learning audiences”), or to share their data with anyone in any context (“public”). Researchers can decide at any
point to restrict sharing (e.g., if a participant inadvertently revealed sensitive
information or displayed a potentially embarrassing behavior), but they cannot
share at broader levels than participants specified. For minors (< 18 years of
age), parents must give permission for their children’s data to be shared, and
children capable of assent must do so in addition. Databrary offers template
permission forms for participants and video tutorials and scripts to illustrate
how researchers can request permission from participants (Databrary, 2020b).
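Conceptually, these sharing levels form an ordered access model: researchers can always restrict access below the level a participant authorized but can never broaden it. The sketch below illustrates only that non-escalation rule, under the simplifying assumption that the levels are strictly nested; the enum values and function are hypothetical and do not represent Databrary's actual software.

```python
from enum import IntEnum


class SharingLevel(IntEnum):
    """Ordered sharing levels; higher values mean broader access."""
    PRIVATE = 0             # original research team only (the default)
    AUTHORIZED_USERS = 1    # other authorized Databrary researchers
    LEARNING_AUDIENCES = 2  # excerpts may be shown for teaching or presentations
    PUBLIC = 3              # anyone, in any context


def effective_level(participant_consent: SharingLevel,
                    researcher_setting: SharingLevel) -> SharingLevel:
    """Researchers may restrict sharing below the consented level,
    but can never share more broadly than the participant allowed."""
    return min(participant_consent, researcher_setting)


# Example: a participant consented to "learning audiences," but the researcher
# later restricts the record (e.g., it captured sensitive information).
level = effective_level(SharingLevel.LEARNING_AUDIENCES, SharingLevel.PRIVATE)
print(level.name)  # PRIVATE
```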
In most cases, permission to share can be separated from consent to participate in the study. Thus, researchers can request permission to share after the
study is completed and participants have a clearer understanding of the
protocol than before they began the study.
All files—including private ones—are stored together to ensure the integrity of the data set. Moreover, the entire data set remains accessible only to the
original research team until they are ready to share it. Thus, researchers can
treat Databrary as their lab data server and remote storage for both private and
shared data. To ensure that contributors are cited and credited for their work,
the system generates a standard citation for shared data sets that includes a
persistent identifier (DOI).
Participants change their mind about sharing, or about their specified level
of sharing. Under the European Union General Data Protection Regulation
(2016), research participants may ask to have their data removed. However,
only the original data contributor has information that links participants’
identities to their data on Databrary. Accordingly, research participants who
wish to retract permission to share data must contact the original researcher.
Even if the original researcher can determine which specific files to delete or
make private, Databrary makes no promise that data previously shared can be
retrieved from other researchers who accessed it prior to the participant’s
change of heart. These limits on the “right to be forgotten” must be conveyed
to participants at the outset.
Minors come of age. In the United States, only adults can consent to participate in research or give permission for a minor to participate. The same principle extends to permission to share data. Institutions and ethics boards must
decide whether to require that consent or permission be obtained again for
minors who come of age in order to share their data on Databrary. U.S. federal
agencies do not require recontact. Regardless, as with adult participants, information linking minors’ identity to their data on Databrary is held (or not) by
the original data contributor, and Databrary cannot promise to retrieve previously shared data.
Using Databrary
By restricting access to authorized researchers and sharing data with explicit
permission, Databrary demonstrates that personally identifiable video data can
be openly shared in ways that protect participants. As illustrated in Figure 6.1,
authorized investigators and their affiliates can browse and download shared
data and reuse it for various purposes. Only files marked “authorized users,”
“learning audiences,” or “public” are accessible outside of the original research
team. The Databrary access agreement provides assurance that files will be
downloaded to secure platforms (e.g., computers inside a locked laboratory)
and viewed only by people to whom authorized researchers grant access.
Many use cases for video do not involve research; see Scenario 2 in Figure 6.1. Researchers may want to view procedure videos to train their staff,
access coding manuals and spreadsheets to verify a coding scheme, download
videos to decide whether a data set is appropriate for their reuse case, or
simply browse shared data to learn what their colleagues are doing or to look
for inspiration. Alternatively, researchers may want to show an existing video
clip or to create their own excerpt from another researcher’s raw data to illustrate a point in a classroom lecture or presentation (i.e., they wish to show
the clip to broader audiences for educational purposes). Videos shared on
Databrary for use in teaching and presentations are designated “learning
audiences,” meaning that participants gave permission for these materials to
be shown by authorized investigators to audiences outside of a research laboratory. Such nonresearch use cases typically do not require approval from an
institutional review board.
However, as illustrated in Scenario 3 of Figure 6.1, researchers may want
to reuse shared data to address a specific research question (secondary data
analysis) or to demonstrate feasibility for a grant proposal. In this case, they
must obtain ethics board approval. The range of potential questions is limited
only by the researcher’s imagination. Ideally, researchers who create new
findings from shared data will share those data back with the research community. The only requirement is to cite the Databrary resources used for a
project using the recommended citation format.
Finally, as illustrated in Scenario 4 of Figure 6.1, like many websites, Databrary provides access to selected data and materials to any member of the
public with a web browser. Publicly accessible data are reviewed by the data
contributor to ensure that materials are not sensitive, do not violate copyright,
and pose minimal risk to participants. Public (nonauthorized) users cannot
upload data.
Data Errors Discovered During Reuse
Some researchers may worry that sharing makes it more likely that a critic or
rival will find errors in the work. Indeed, open sharing should accelerate the
detection of mistakes along with the accumulation of robust findings. This is
one way that openness improves science. Open data sharing in Databrary
allows contributors to fix a mistake as soon as it is discovered, and the corrected
version becomes immediately available to the research community. The possibility of error detection and correction should give the community greater
confidence in the validity of findings based on shared data.
CONCLUSION
Sharing research data involves ethical mandates to protect research participants from harm. But the ethics of data sharing extends beyond participant
well-being. Researchers also have ethical obligations to the scientific community and to the public at large. The ethical principles of beneficence and justice
compel researchers both to protect participants and to maximize benefits to
them and to others (Brakewood & Poldrack, 2013; National Commission for
the Protection of Human Subjects of Biomedical and Behavioral Research,
1979). Thus, the most defensible ethical stance for researchers is to share data
as broadly as participants allow, but to protect the data with appropriate
restrictions on who can gain access. Open sharing of sensitive and identifiable
psychological data that balances these ethical imperatives can be achieved.
Databrary and similar restricted-access data repositories show how. Our aim
should be to accelerate discovery by shedding light on the dark data of psychological science and practice that typically go unshared (Heidorn, 2008).
REFERENCES
Adolph, K. E. (2020). Oh, behave! Presidential address, XXth International Conference
on Infant Studies, New Orleans, LA, May 2016. Infancy, 25(4), 374–392. https://
doi.org/10.1111/infa.12336
Adolph, K. E., Gilmore, R. O., Freeman, C., Sanderson, P., & Millman, D. (2012).
Toward open behavioral science. Psychological Inquiry, 23(3), 244–247. https://
doi.org/10.1080/1047840X.2012.705133
Adolph, K. E., Gilmore, R. O., & Kennedy, J. L. (2017, October). Science brief: Video
data and documentation will improve psychological science. Psychological Science
Agenda. https://www.apa.org/science/about/psa/2017/10/video-data
American Association for the Advancement of Science. (2020). Science journals:
Editorial policies. https://www.sciencemag.org/authors/science-journals-editorial-policies#unpublished-data-and-personal-communications
American Psychological Association. (2020). Journal of Personality and Social Psychology:
General submission guidelines: Open science. https://www.apa.org/pubs/journals/psp?tab=4
Ascoli, G. A. (2006). The ups and downs of neuroscience shares. Neuroinformatics, 4(3),
213–216. https://doi.org/10.1385/NI:4:3:213
Association for Psychological Science. (2020). 2020 submission guidelines: Open practices
statement. https://www.psychologicalscience.org/publications/psychological_science/
ps-submissions#OPS
Bhattacharjee, Y. (2013, April 26). The mind of a con man. The New York Times. https://
www.nytimes.com/2013/04/28/magazine/diederik-stapels-audacious-academic-fraud.html
Brakewood, B., & Poldrack, R. A. (2013). The ethics of secondary data analysis: Considering the application of Belmont principles to the sharing of neuroimaging data.
NeuroImage, 82, 671–676. https://doi.org/10.1016/j.neuroimage.2013.02.040
Claesen, A., Gomes, S., Tuerlinckx, F., Vanpaemel, W., & Leuven, K. U. (2019, May 9).
Preregistration: Comparing dream to reality. https://psyarxiv.com/d8wex/
Couzin-Frankel, J. (2014, May 30). Harvard misconduct investigation of psychologist released. Science News. https://www.sciencemag.org/news/2014/05/harvard-misconduct-investigation-psychologist-released
Databrary. (2020a). About Databrary: Authorized access. https://databrary.org/about/
agreement.html
Databrary. (2020b). Databrary support: Ethics-related support. https://databrary.org/support/
irb.html
European Council for Nuclear Research (CERN). (2020). The Large Hadron Collider.
https://home.cern/science/accelerators/large-hadron-collider
European Union General Data Protection Regulation. (2016). Regulation (EU) 2016/679
of the European Parliament and of the Council of 27 April 2016.
Fleming, D. A., & Reynolds, D. (2008). Ethical human-research protections: Not universal
and not uniform. The American Journal of Bioethics, 8(11), 21–22. https://doi.org/
10.1080/15265160802516864
Frank, M. C. (2020). Frequently asked questions: Why should I contribute to Wordbank?
http://wordbank.stanford.edu/faq#why-should-i-contribute-to-wordbank
Gennetian, L. A., Tamis-LeMonda, C. S., & Frank, M. C. (2020). Advancing transparency and openness in child development research: Opportunities. Child Development
Perspectives, 14(1), 3–8. https://doi.org/10.1111/cdep.12356
Gilmore, R. O. (2016). From big data to deep insight in developmental science. Wiley Interdisciplinary Reviews: Cognitive Science, 7(2), 112–126. https://doi.org/10.1002/wcs.1379
Gilmore, R. O., & Adolph, K. E. (2017). Video can make behavioural science more
reproducible. Nature Human Behaviour, 1(7), Article 0128. https://doi.org/10.1038/
s41562-017-0128
Gilmore, R. O., Cole, P. M., Verma, S., van Aken, M. A. G., & Worthman, C. M. (2020).
Advancing scientific integrity, transparency, and openness in child development
research: Challenges and possible solutions. Child Development Perspectives, 14(1),
9–14. https://doi.org/10.1111/cdep.12360
Gilmore, R. O., Kennedy, J. L., & Adolph, K. E. (2018). Practical solutions for sharing
data and materials from psychological research. Advances in Methods and Practices in
Psychological Science, 1(1), 121–130. https://doi.org/10.1177/2515245917746500
Heidorn, P. B. (2008). Shedding light on the dark data in the long tail of science. Library
Trends, 57(2), 280–299. https://doi.org/10.1353/lib.0.0036
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the
world? Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/
S0140525X0999152X
Hood, L., & Rowen, L. (2013). The Human Genome Project: Big science transforms
biology and medicine. Genome Medicine, 5(9), Article 79. https://doi.org/10.1186/
gm483
Inter-University Consortium for Political and Social Research. (2020). Find data.
https://www.icpsr.umich.edu/web/pages/ICPSR/index.html
Joel, S., Eastwick, P. W., & Finkel, E. J. (2018). Open sharing of data on close relationships and other sensitive social psychological topics: Challenges, tools, and future
directions. Advances in Methods and Practices in Psychological Science, 1(1), 86–94. https://
doi.org/10.1177/2515245917744281
Journal of Visualized Experiments. (2020). JoVE’s objectives and criteria for publication.
https://www.jove.com/publish/editorial-policies/#4
MacWhinney, B. (2020). TalkBank: Principles of sharing. https://talkbank.org/share/
principles.html
Madrigal, A. C. (2018, December 19). Facebook didn’t sell your data; it gave it away.
The Atlantic. https://www.theatlantic.com/technology/archive/2018/12/facebooks-failures-and-also-its-problems-leaking-data/578599/
Manolio, T. A., Brooks, L. D., & Collins, F. S. (2008). A HapMap harvest of insights into
the genetics of common disease. The Journal of Clinical Investigation, 118(5), 1590–1605.
https://doi.org/10.1172/JCI34772
Meyer, M. N. (2018). Practical tips for ethical data sharing. Advances in Methods and Practices
in Psychological Science, 1(1), 131–144. https://doi.org/10.1177/2515245917747656
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. https://www.hhs.gov/ohrp/regulations-and-policy/
belmont-report/read-the-belmont-report/index.html
New York University. (2020). Electronic data and system risk classification policy. https://www.
nyu.edu/about/policies-guidelines-compliance/policies-and-guidelines/electronic-data-and-system-risk-classification.html
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S.,
Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E.,
Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., . . .
Yarkoni, T. (2015). Scientific standards: Promoting an open research culture. Science,
348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
Nosek, B. A., Beck, E. D., Campbell, L., Flake, J. K., Hardwicke, T. E., Mellor, D. T., van
’t Veer, A. E., & Vazire, S. (2019). Preregistration is hard, and worthwhile. Trends in
Cognitive Sciences, 23(10), 815–818. https://doi.org/10.1016/j.tics.2019.07.009
Nosek, B. A., & Lindsay, D. S. (2018, February 28). Preregistration becoming the norm in
psychological science. APS Observer. https://www.psychologicalscience.org/observer/
preregistration-becoming-the-norm-in-psychological-science
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological
Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058
Office for Human Research Protections. (2017). Federal policy for the protection of
human subjects (“Common Rule”). https://www.hhs.gov/ohrp/regulations-and-policy/
regulations/common-rule/index.html
Poldrack, R. A., & Gorgolewski, K. J. (2014). Making big data open: Data sharing in neuroimaging. Nature Neuroscience, 17(11), 1510–1517. https://doi.org/10.1038/nn.3818
Suls, J. (2013). Using “cinéma vérité” (truthful cinema) to facilitate replication and
accountability in psychological research. Frontiers in Psychology, 4, Article 872.
https://doi.org/10.3389/fpsyg.2013.00872
Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A.,
Blomberg, N., Boiten, J. W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J.,
Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T.,
Finkers, R., . . . Mons, B. (2016). The FAIR Guiding Principles for scientific data
management and stewardship. Scientific Data, 3(1), Article 160018. https://doi.org/
10.1038/sdata.2016.18
World Medical Association. (2013). World Medical Association Declaration of Helsinki:
Ethical principles for medical research involving human subjects. JAMA, 310(20),
2191–2194. https://doi.org/10.1001/jama.2013.281053
7
Understanding Research Misconduct
Alison L. Antes and James M. DuBois

Many researchers likely imagine that they will not encounter research
misconduct in their careers. After all, research misconduct is a rare,
intentional act of cheating and deception, or so the logic goes. In fact, this
view of research misconduct is often an oversimplification. Researchers need
to recognize that research misconduct cannot be entirely explained as an act
committed only by a select few “bad apples.” Instead, it is a more complicated
phenomenon. In this chapter, research misconduct is described, and its harmful
effects on science are explained. Then, the reasons research misconduct happens
and preventive measures are discussed. Finally, guidance is provided for
researchers who believe they may have witnessed misconduct.
WHAT IS RESEARCH MISCONDUCT?
The U.S. federal definition of research misconduct is “fabrication, falsification, or
plagiarism in proposing, performing, or reviewing research, or in reporting
research results” (Public Health Service Policies on Research Misconduct, 2005,
§ 93.103). Fabrication is defined as “making up data or results and recording or
reporting them” (§ 93.103(a)); falsification is “manipulating research materials,
equipment, or processes, or changing or omitting data or results such that the
research is not accurately represented in the research record” (§ 93.103(b));
and plagiarism is “appropriation of another person’s ideas, processes, results, or
words without giving appropriate credit” (§ 93.103(c)). Research misconduct
“does not include honest error or differences in opinion” (§ 93.103(d)). Further,
federal policy indicates that misconduct must be “committed intentionally,
knowingly, or recklessly” (§ 93.104(b)).
Fabrication and Falsification of Data or Results
Fabrication and falsification entail manipulations or mistruths about research
data and results. In short, fabrication is making up data. A researcher engages
in fabrication when they create data or observations that did not come from
an actual experiment or data collection effort. Fabrication also includes constructing a false figure, table, or image and presenting it as valid. Falsification,
in brief, is changing data or results to mislead. Falsification includes omitting
data to make results appear to support the researcher’s claims or hypotheses.
Manipulating research instruments, materials, or processes to modify research
results also constitutes falsification. Additionally, manipulating images or figures to make them more supportive of the researcher’s claims is falsification.
It is worth noting that data practices that fall short of falsification, sometimes
referred to as “questionable” or “detrimental” research practices, are also of
great concern to the scientific community (National Academies of Sciences,
Engineering and Medicine, 2017). Whether such behaviors constitute research
misconduct depends on the details of the case (Shamoo & Resnik, 2015). Given
researchers’ desire to present data in the most favorable light and the many
choices they make about the analysis, interpretation, and reporting of their
data, they must take active steps to avoid the innumerable opportunities to
self-deceive, rationalize, and ultimately mislead (Nuzzo, 2015).
Plagiarism of Words or Ideas
Plagiarism is taking the words or ideas of another author and using them as
one’s own. Plagiarism can take several forms. The most extreme form is to
take someone else’s entire work and submit the work under one’s own name.
A related form of plagiarism is using portions of another work word for word
in one’s own work without properly attributing the words to their author, for
example, through citation and the use of quotation marks or blocked quoting.
Importantly, even when an author includes a citation to the original source,
omitting the quotation marks or blocked quoting still constitutes plagiarism
by most standards.
Plagiarism can also include taking the general structure of the original
author’s work and making minor changes, for example, by using synonyms
for some of the words. If the structure and ideas in the writing remain
unchanged, the work is not original to the authors who are claiming it as their
own. So-called self-plagiarism occurs when authors reuse their past words in
a new work without referring to the original work. This form of plagiarism can
be controversial, particularly for some sections of manuscripts, such as the
Method (Moskovitz, 2019). A researcher might use similar methods across
multiple studies, and methods tend to include technical terms that can be
difficult to paraphrase. Yet it is considered best to rewrite and paraphrase one’s
own words when preparing a new manuscript.
Finally, improper and careless citation practices occur when citations are
inaccurate or missing or when in-text citations are excluded. These practices
can constitute plagiarism, even if they are unintentional, when they result in
passages of someone else’s work being presented as one’s own.
Harms of Research Misconduct
Falsification, fabrication, and plagiarism are often considered the most serious
violations of scientific standards of conduct because of the direct harm they
pose to science. Misrepresenting research results or the authors of research
puts false information into the scientific record. Science relies on accuracy and
honesty, and falsification, fabrication, and plagiarism are clear violations of
these values. Further, plagiarism takes credit from someone else’s work and
therefore is a form of stealing.
Research misconduct wastes the limited resources available for research.
When incorrect results are published, other researchers may pursue the false
lead. Additionally, there is the potential for serious harm to the public when
fraudulent research findings are used to inform medical or public practices
and policies. Finally, misconduct harms science and society by creating mistrust of researchers. Distrust can decrease public support for research in terms
of both financial support and engagement with research as participants or as
community partners.
Prevalence of Research Misconduct
It is a challenge to establish the frequency of misconduct in science. Misconduct
is a relatively low-frequency event, and misconduct cases are underreported
(Titus et al., 2008). Researchers are also likely to underreport misbehavior in
surveys about research misconduct, given its consequences. A commonly cited
meta-analysis of surveys of misconduct found that 2% of respondents indicated they had engaged in fabrication or falsification and 14% reported this
behavior by colleagues, whereas 34% admitted to behavior that was “questionable” and 72% reported such behavior by colleagues (Fanelli, 2009). Overall,
in the past few years, the scientific community has expressed growing concern
about the need for vigilance in understanding and preventing misconduct and
detrimental research practices (National Academies of Sciences, Engineering
and Medicine, 2017).
HOW IS RESEARCH MISCONDUCT INVESTIGATED?
Institutions that receive funding from federal agencies are required to have
policies and procedures for investigating research misconduct. In general,
investigations of misconduct include three stages: inquiry, investigation, and
oversight review by a federal office such as the Office of Research Integrity
within the U.S. Department of Health and Human Services (set forth in Public
Health Service Policies on Research Misconduct, 2005) or the Office of the
Inspector General of the National Science Foundation (set forth in Research
Misconduct, 2002). A finding of misconduct must be based on “a preponderance of the evidence” (Public Health Service Policies on Research Misconduct,
2005, § 93.104(c); Research Misconduct, 2002, § 689.2(c)(3)).
When an allegation is first reported, an institutional official, often a research
integrity officer, is responsible for reviewing the allegation to determine whether
it is credible and constitutes research misconduct according to federal policy.
If the allegation is credible, an inquiry phase begins to identify evidence. If the
evidence is sufficient to warrant a full investigation, the institution conducts a
thorough investigation. The investigation concludes with a report and determination of whether research misconduct occurred. The relevant federal oversight
body reviews the institution’s report and makes independent findings.
WHAT ARE THE CONSEQUENCES OF RESEARCH MISCONDUCT?
When a finding of research misconduct is made by an institution and/or federal
oversight body, it issues sanctions and corrective actions for the researcher
found guilty of misconduct. Federal oversight can also include taking actions
against an institution.
Consequences for Researchers
Federal actions in response to misconduct often include barring the researcher
from receiving federal funding. Existing grant funding can be suspended for a
period or taken away. The researcher may become ineligible to serve on peer
review or advisory committees for federal agencies. They may be required to
have their institution officially certify the authenticity of their future work.
Further, the researcher may be required to submit a correction or retraction
of published papers, and the institution can require the researcher to be
supervised. These actions may be imposed for periods ranging from one year to a lifetime,
depending on the nature of the misconduct. For research funded by the Public
Health Service, case summaries of the findings of research misconduct and
imposed actions are also posted on the Office of Research Integrity’s website.
Some extreme cases of research misconduct have been criminally prosecuted
and have led to prison sentences (Keränen, 2006).
Of course, further consequences include the stress of being investigated,
lasting reputational damage, and harm to the careers of research staff and
trainees who worked with the investigator as projects are abandoned or if
labs are closed. The suspension of a researcher’s funds and projects can stall
their research program and potentially end their research career. In some
cases, the institution terminates the individual’s employment or asks the
researcher to resign.
Institutions can require other actions, such as suspension from supervising
trainees or engagement in remedial education. A remediation training program at Washington University in St. Louis, the Professionalism and Integrity
in Research Program, is directed by the authors (DuBois and Antes) and
draws researchers from institutions across the United States (DuBois, Chibnall,
Tait, & Vander Wal, 2016). Participants in this program learn to identify the
root causes of their cases and approaches to avoid problems in the future,
such as how to use professional decision-making strategies and effective lab
supervision practices.
Consequences for Institutions
For institutions, damage to the university’s reputation is considerable in cases
of misconduct. Further, inquiries and investigations are costly endeavors for
institutions, whatever the outcome of the investigation. Costs are particularly
high when the accused researchers contest the charges. Moreover, institutions
can be required to return federal grant funding that has already been spent and
may be assessed fines if they are deemed to have provided inadequate training
or oversight (Science News Staff, 2019).
WHY DOES RESEARCH MISCONDUCT HAPPEN?
Research on research misconduct is essential to understanding its causes.
However, it is difficult to study these largely hidden events. Further, the
causes are numerous and complicated. As a general framework, it is helpful
to distinguish between two primary explanations for unethical behavior:
“bad apples” and “bad barrels” (Kish-Gephart et al., 2010). Bad apple explanations refer to causes of misbehavior that focus on characteristics of the
individual. Bad barrel explanations refer to aspects of the environment in
which the individual works. In the sections that follow, we explore both bad
apples and bad barrels as explanations for misconduct in research and discuss
decision making and cultural influences that are important considerations in
understanding misconduct.
Bad Apples: Individuals and Misconduct
Personality characteristics such as arrogance and narcissism produce self-centered, overconfident thinking, which may contribute to research misconduct (Antes et al., 2007; DuBois et al., 2013). Researchers with these traits
may think they are unlikely to be caught, or they may force their data to fit
their hypothesis out of certainty they are right (Davis et al., 2007; DuBois
et al., 2013). Being overly ambitious in achieving career success, seeking
“superstar” status in one’s field, or desiring to increase one’s personal wealth
may also be at play (DuBois et al., 2013).
Cynicism—mistrust of the motives of others, and specifically the belief that
others are ill intentioned and self-centered—has also been linked to less ethical
decision making (Antes et al., 2007). Skirting the rules and norms of their
profession may be more likely among those who believe others are engaged
in similar behavior; however, explicit data linking cynicism to research misconduct are not available. Mental health issues or mental disorders might
play a role in some cases of wrongdoing, but currently there is a lack of
evidence to suggest this is a particularly significant factor in cases of research
misconduct (DuBois et al., 2013).
Although personality factors likely play some role in misconduct, this explanation is incomplete in most cases. It is also important to examine the environments in which researchers work to understand what might contribute to
research misconduct.
Bad Barrels: Environments and Misconduct
Perhaps the most common factor in research environments that is implicated
in research misconduct is pressure. Researchers are under pressure to publish
papers and secure grants; these factors are in turn linked to academic tenure
and promotion (Davis et al., 2007). Closely related to pressure is competition
for limited resources (Davis et al., 2007). Others have proposed that perceived
unfairness at the institutional level and institutional climate may be environmental factors that contribute to researcher behavior (Martinson et al., 2006,
2013). Nevertheless, some studies have found little or no link between research
pressures and research misconduct (DuBois et al., 2013; Fanelli et al., 2015),
and of course most people who experience pressures do not engage in
misconduct.
Evidence on graduate student researchers in research laboratories suggests
that a climate of competitive pressure within the lab is associated with less
ethical decision making (Mumford et al., 2007). Davis et al. (2007) also provided
some evidence that work environment factors, including conflict, insufficient
mentoring, and poor communication, contribute to misconduct. Supervision
and mentoring may be especially important factors in the research behavior
of junior investigators, particularly review of data (DuBois, Chibnall, Tait, &
Vander Wal, 2016; Wilson et al., 2007; Wright et al., 2008). Being overcommitted and not adequately prioritizing their supervisory role contributes
to some of the research integrity violations among researchers referred to
remediation training (DuBois, Chibnall, Tait, & Vander Wal, 2016). Insufficient
supervision may influence trainee behavior through lack of knowledge of
appropriate practices or an increased perception that they are unlikely to
be caught.
Although pressure, competition, work climate, and supervision may be
key environmental factors at play in cases of misconduct, they exert their influence on individual judgment and decision making. Poor coping with environmental factors likely undermines decision making and may yield impulsive
behavior (DuBois et al., 2013; Mumford et al., 2007). Dysfunctional research
environments are more likely to produce faulty thinking patterns and rationalizations (Davis et al., 2007; Gunsalus & Robinson, 2018). This point stands
in contrast to a bad apples explanation of misconduct, suggesting that any
researcher might engage in biased thinking or rationalizations in a given
environment.
Flawed thinking, in general, is of concern in understanding research mis­
conduct. As one author noted, “They [scientists] have fooled themselves that
science is a wholly objective enterprise unsullied by the usual human subjectivity and imperfections. It is not. It is a human activity” (Smith, 2006, p. 234).
When researchers engage in faulty decision making, they can justify self-serving
decisions, in turn diminishing the integrity of their decisions (DuBois, Chibnall,
& Gibbs, 2016; DuBois, Chibnall, Tait, Vander Wal, Baldwin, et al., 2016).
Cultural Influences on Research Behavior
Culture is yet another factor proposed as a potential influence on researcher
decision making and behavior (Antes et al., 2018). In particular, perceptions
about plagiarism differ across cultures. International researchers may not
recognize as readily as U.S. researchers the notion of intellectual property—the
idea that knowledge or ideas generated by a particular individual are exclusively the possession of the originator and thus require due credit (Heitman &
Litewka, 2011). This type of misunderstanding is particularly likely when
material is published open access or is readily available on the Internet. What
stands as an obvious violation to U.S. researchers may be more ambiguous to a
researcher trained outside the United States. Thus, it is essential that researchers
training international students or collaborating with international investigators
be prepared to engage in cross-cultural discussion to reach shared understandings about acceptable practice in the United States (Heitman, 2014).
Consider the following case example:1
Dr. Waldstein is a coauthor of a paper with a colleague, Dr. Kahn, who has agreed
to serve as the first author and write the first draft of the full manuscript. When
Dr. Waldstein receives the draft manuscript, he is surprised to find that much of
the text is borrowed from other articles, including two articles that Dr. Waldstein
wrote. The articles are cited, but none of the borrowed verbatim material is
enclosed in quotation marks. Dr. Waldstein feels that he is at risk of becoming
complicit in plagiarism. He wonders whether Dr. Kahn, who was trained in
another nation, thinks plagiarism is acceptable. He is very upset and uncertain
how to proceed.
In this case, if the manuscript had been submitted, the borrowed text would
constitute plagiarism of the text from other authors and, at the least, text
recycling of Dr. Waldstein’s own work. Dr. Waldstein’s good authorship practice of reviewing the manuscript before submission may have saved the pair
from an allegation of plagiarism. However, now Dr. Waldstein is faced with
1 The case example presented here is fictitious.
the challenge of how to respond to this situation. The cause in this case might
be a cultural difference regarding the acceptability of plagiarism, lack of
knowledge regarding proper citation practices, or some other cause altogether,
such as a lack of attention.
HOW CAN RESEARCH MISCONDUCT BE PREVENTED?
Given the individual and environmental factors that influence research
behavior, how can research integrity be fostered and research misconduct
prevented? Prevention starts at the individual level, with good professional
practices of researchers, but it is also important to recognize the role of factors
within labs and at the institutional level.
Prevention for Individual Researchers
The SMART professional decision-making strategies—Seek help, Manage your
emotions, Anticipate consequences, Recognize rules and context, and Test your
assumptions and motives—help researchers make informed decisions when
confronted with new, unfamiliar, or uncertain situations (DuBois, Chibnall, Tait,
Vander Wal, Baldwin, et al., 2016; McIntosh et al., 2020). Further, they help
offset bias and cognitive thinking patterns that can produce self-interested decision making. The SMART strategies can be described as follows:
• Seek help reminds researchers to ask for advice from objective, trusted individuals and to seek out additional knowledge and information.
• Manage your emotions prompts researchers to identify and control their
emotional responses, particularly negative emotions that tend to hinder
objectivity, judgment, and effective interactions with others.
• Anticipate consequences indicates to researchers that they must consider the
possible short- and long-term outcomes of the situation, as well as consequences to others and themselves.
• Recognize rules and context cues researchers to be aware of relevant
principles, rules, and regulations that apply and the dynamics of the
situation—such as what caused the problem—and of the individuals
involved—such as power hierarchies and who has the authority to
change which factors.
• Finally, test your assumptions and motives reminds researchers to consider how
their interpretation of the problem and potential decisions might be biased
or based on faulty assumptions and to evaluate underlying personal motivations and goals and how they might affect decision making in the situation.
A tool that provides reflective questions to consider when engaging in each of
the strategies is available online (Professionalism and Integrity in Research
Program, 2018b).
It may also be important for researchers to proactively manage pressures
associated with scientific work. A cohort of experienced researchers known for
their success and professionalism in research described the importance of
keeping what one loves about research at the front of one’s mind while having
strategies to offset stress (Antes & DuBois, 2018). Coping mechanisms and
stress management may be important, particularly if other personal stressors
exist outside of work (Davis et al., 2007).
Regarding plagiarism, researchers must establish good writing processes
so that it is clear when information is paraphrased and when words are
directly quoted from the original source. Additionally, whether the researcher
is writing solo or collaboratively, adopting the practice of running manuscripts
through plagiarism software before submitting the work can catch problematic
passages so they can be addressed proactively.
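To make the idea concrete, the snippet below sketches the simplest form of such a check: flagging long runs of verbatim word overlap between a draft and a source text. It is a toy illustration rather than any particular commercial service (real tools compare drafts against large databases and handle paraphrase far more robustly), and the function names are hypothetical.

```python
import re


def word_ngrams(text: str, n: int = 8) -> set:
    """Lowercase word n-grams of a text; 8 words is an arbitrary threshold."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def verbatim_overlap(draft: str, source: str, n: int = 8) -> set:
    """Return n-word phrases that appear verbatim in both draft and source."""
    return word_ngrams(draft, n) & word_ngrams(source, n)


# Toy example: an unquoted verbatim passage produces shared 8-grams,
# whereas a genuine paraphrase of the same idea would not.
source = ("Falsification is manipulating research materials, equipment, or "
          "processes, or changing or omitting data or results.")
draft = ("Recall that falsification is manipulating research materials, "
         "equipment, or processes, or changing or omitting data or results, "
         "a clear violation of scientific norms.")
for phrase in sorted(verbatim_overlap(draft, source)):
    print(phrase)
```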
In the case of Drs. Waldstein and Kahn, they could use plagiarism software
to make sure they have identified all the problematic sections of text. Additionally, given that Dr. Waldstein is upset, it would be wise to engage the
SMART strategies, particularly managing one’s emotions. The managing
emotions strategy suggests taking some time, perhaps an hour or two, before
responding, or it may even be best to wait until the next day. Managing
emotions helps a researcher better generate potential responses to the problem.
Dr. Waldstein might also seek help. Seeking help can further support emotion
management, and it can be used to generate ideas about how to respond. For
example, Dr. Waldstein might call a friend who works at a different university
to get advice.
Anticipating the consequences of the situation, Dr. Waldstein is, understandably, most concerned about submitting work that could be classified as involving
research misconduct. This would be hugely detrimental to Dr. Waldstein,
Dr. Kahn, and even their department and university. However, another concern
is the potential of working again with Dr. Kahn; Dr. Waldstein wants to preserve
their collegial working relationship. He recognizes the potential for cultural
background to be a factor in Dr. Kahn’s citation practices and considers this
perspective in approaching Dr. Kahn. At the same time, Dr. Waldstein wants to
communicate the serious nature of the plagiarism in the manuscript.
Prevention for Labs
In research labs, it is important to maintain a group culture that treats data integrity, research rigor, and reproducibility as the lab's highest asset, an asset that must be protected. This kind of work group climate
starts with the principal investigator communicating this expectation and setting the standard for shared accountability (Antes et al., 2019a). Critically,
group practices for the management and review of data are essential, for
example, storing data in a centrally accessible location that is reviewed by the
principal investigator and routinely reviewing and discussing raw data and
interpretation of findings (Antes et al., 2019a). Having multiple researchers
working together on projects may also provide some protection against
wrongdoing because multiple individuals have access to and involvement
with the research data (Antes et al., 2019a).
The lab must also engage in good communication, including regular team
meetings to review data and clear discussion of expectations and standards of
conduct in the lab (Antes et al., 2019a; Wright et al., 2008). It is particularly
critical that the lab provides a safe environment for raising concerns, asking
for help, and inviting feedback (Antes et al., 2019b). Individuals must never
feel that the principal investigator will be angry if their data do not support
hypotheses. A lab leadership and management checklist listing these and
other essential lab management practices can be found online (Professionalism
and Integrity in Research Program, 2018a).
Prevention for Institutions
For institutions, prevention focuses largely on educating investigators about
research misconduct and research integrity, as well as providing adequate
resources. There are mixed findings regarding the quality and impact of
responsible conduct of research training, but some evidence suggests that
training has been improving in the past decade (Watts et al., 2017). However,
some concern remains that differences arising from cultural backgrounds may
not be adequately addressed in existing approaches to research integrity education (Antes et al., 2018; Heitman, 2014). Additionally, the working environment, in particular the institution’s climate for research integrity, must support
responsible behavior (Crain et al., 2013; DuBois & Antes, 2018; Wells et al.,
2014). Institutions can assess their climate for research integrity using validated
measures such as the Survey of Organizational Research Climate (Martinson
et al., 2013). Institutions should additionally clarify who owns research data
(typically the institution) and provide adequate platforms for data sharing and
storage, as well as adequate software for maintaining or backing up lab notebooks and other procedural documentation.
WHAT SHOULD RESEARCHERS DO IF THEY SUSPECT RESEARCH MISCONDUCT?
As seen in the case of Drs. Waldstein and Kahn, using the SMART strategies
can help a researcher figure out how to approach a case of misconduct. In
this case, Dr. Waldstein managed emotions and was fortunate to have caught
the problem before submission of the manuscript. The case study continues
as follows:
Dr. Waldstein opts to talk with Dr. Kahn in an open fashion to address the problem.
Dr. Waldstein learns that, in fact, Dr. Kahn thought that citing the source was
enough when borrowing passages verbatim from other texts. Dr. Waldstein
explains that quotation marks around the text (or blocking longer excerpts) are
essential to denote that the text is quoted word for word. They discuss how
paraphrasing the original text would have made the citation without quotation
marks acceptable. The two agree to revise the current paper and to use plagiarism
software to identify any problematic passages that may remain. Dr. Waldstein
tells Dr. Kahn that journals and funding agencies are increasingly using this kind
of software, so it is a useful step before submitting manuscripts for review,
although the authors still must carefully examine the results returned by the
software.
In this case, Dr. Waldstein has addressed the concern and helped educate
Dr. Kahn, and they made a plan to ensure that their manuscript will not contain
any plagiarized material. Fortunately, this situation did not oblige Dr. Waldstein
to report this concern to an institutional authority. Dr. Waldstein used informal
intervention to address the concern (Keith-Spiegel et al., 2010).
Undeniably, confronting the decision whether to report misconduct formally
to institutional officials may be one of the most difficult choices a researcher
encounters. Reporting suspected misconduct is considered an ethical responsibility of researchers, but whistleblowing can be difficult. The federal regulations
provide for protections against retaliation toward whistleblowers. Nonetheless,
reporting wrongdoing can be incredibly challenging and may have personal
and career consequences (Allen & Dowell, 2013; Gross, 2016).
Based in part on interviews and surveys of more than 2,000 researchers
who had been in a position to consider intervening in research wrongdoing,
a team of researchers produced a real-world, practical guide to help researchers
respond in a responsible way while also considering the risks (Keith-Spiegel
et al., 2010). Beyond the professional responsibility to act, the guide points to
researchers’ motivation to avoid being associated with a manuscript that
contains false data, which will tarnish their reputation even if they did not
directly engage in the wrongdoing. The guide also addresses a key challenge in
cases of research misbehavior: distinguishing between true research misconduct,
defined in this chapter as falsification, fabrication, and plagiarism, and questionable or bad science, carelessness, and incompetence. Unfortunately, all of
these can result in harm to the research record. All may require some intervention, but Keith-Spiegel et al. (2010) distinguished between formal or
official reporting, in which allegations are taken to an institutional official,
and informal intervention, in which actions are taken without going through
official means. For example, speaking directly with the researcher about
the behavior of concern is an informal intervention, as seen in the case of
Drs. Waldstein and Kahn. Indeed, determining the nature of the behavior at
hand informs which type of intervention is appropriate.
Additionally, Keith-Spiegel et al. (2010) encouraged researchers to examine
the quality of the evidence regarding the suspected misbehavior. Consistent
with the SMART strategies, they advised dealing with the emotions involved
in making a choice about reporting research wrongdoing. Next, they reviewed
the advantages and disadvantages of informal and formal interventions,
including anticipating how the individual might respond given their personality, the quality of the evidence, the nature of the relationship (e.g., peer or
supervisor), the nature of the institution (e.g., supportive or unsupportive),
and whether the behavior seems to be a pattern or a first offense. Importantly,
they encouraged the person considering responding to wrongdoing to evaluate
their personal sources of support and their own welfare. When the person
has made the decision to formally report a suspected case of misconduct, the
guide spells out what to expect in the process.
CONCLUSION
It is the responsibility of all researchers at all career stages to understand
misconduct, its causes and consequences, and ways to take proactive steps
to prevent it. The first step in a proactive approach is awareness that misconduct may arise out of bad apple scenarios, but a host of factors can
contribute. Further, some humility on the part of each individual researcher
is warranted. This mindset allows researchers to consider how their research
work environment, work habits, and decision making might promote or potentially undermine their professional behavior. Ultimately, it serves science and
society well for researchers to keep in mind the central goal of their work—the
progress of science.
REFERENCES
Allen, M., & Dowell, R. (2013). Retrospective reflections of a whistleblower: Opinions
on misconduct responses. Accountability in Research, 20(5–6), 339–348. https://
doi.org/10.1080/08989621.2013.822249
Antes, A. L., Brown, R. P., Murphy, S. T., Waples, E. P., Mumford, M. D., Connelly, S.,
& Devenport, L. D. (2007). Personality and ethical decision-making in research:
The role of perceptions of self and others. Journal of Empirical Research on Human
Research Ethics, 2(4), 15–34. https://doi.org/10.1525/jer.2007.2.4.15
Antes, A. L., & DuBois, J. M. (2018). Cultivating the human dimension in research.
Molecular Cell, 72(2), 207–210. https://doi.org/10.1016/j.molcel.2018.09.015
Antes, A. L., English, T., Baldwin, K. A., & DuBois, J. M. (2018). The role of culture and
acculturation in researchers’ perceptions of rules in science. Science and Engineering
Ethics, 24(2), 361–391. https://doi.org/10.1007/s11948-017-9876-4
Antes, A. L., Kuykendall, A., & DuBois, J. M. (2019a). The lab management practices of
“research exemplars” that foster research rigor and regulatory compliance: A qualitative study of successful principal investigators. PLOS ONE, 14(4), Article e0214595.
https://doi.org/10.1371/journal.pone.0214595
Antes, A. L., Kuykendall, A., & DuBois, J. M. (2019b). Leading for research excellence
and integrity: A qualitative investigation of the relationship-building practices of
exemplary principal investigators. Accountability in Research, 26(3), 198–226. https://
doi.org/10.1080/08989621.2019.1611429
Crain, A. L., Martinson, B. C., & Thrush, C. R. (2013). Relationships between the
Survey of Organizational Research Climate (SORC) and self-reported research
practices. Science and Engineering Ethics, 19(3), 835–850. https://doi.org/10.1007/
s11948-012-9409-0
Davis, M. S., Riske-Morris, M., & Diaz, S. R. (2007). Causal factors implicated in
research misconduct: Evidence from ORI case files. Science and Engineering Ethics,
13(4), 395–414. https://doi.org/10.1007/s11948-007-9045-2
DuBois, J. M., Anderson, E. E., Chibnall, J., Carroll, K., Gibb, T., Ogbuka, C., & Rubbelke, T.
(2013). Understanding research misconduct: A comparative analysis of 120 cases of
professional wrongdoing. Accountability in Research, 20(5–6), 320–338. https://doi.org/
10.1080/08989621.2013.822248
DuBois, J. M., & Antes, A. L. (2018). Five dimensions of research ethics: A stakeholder
framework for creating a climate of research integrity. Academic Medicine, 93(4),
550–555. https://doi.org/10.1097/ACM.0000000000001966
DuBois, J. M., Chibnall, J. T., & Gibbs, J. (2016). Compliance disengagement in research:
Development and validation of a new measure. Science and Engineering Ethics, 22(4),
965–988. https://doi.org/10.1007/s11948-015-9681-x
DuBois, J. M., Chibnall, J. T., Tait, R., & Vander Wal, J. (2016). Misconduct: Lessons
from researcher rehab. Nature, 534(7606), 173–175. https://doi.org/10.1038/534173a
DuBois, J. M., Chibnall, J. T., Tait, R. C., Vander Wal, J. S., Baldwin, K. A., Antes, A. L.,
& Mumford, M. D. (2016). Professional Decision-Making in Research (PDR): The
validity of a new measure. Science and Engineering Ethics, 22(2), 391–416. https://
doi.org/10.1007/s11948-015-9667-8
Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic
review and meta-analysis of survey data. PLOS ONE, 4(5), Article e5738. https://
doi.org/10.1371/journal.pone.0005738
Fanelli, D., Costas, R., & Larivière, V. (2015). Misconduct policies, academic culture
and career stage, not gender or pressures to publish, affect scientific integrity. PLOS
ONE, 10(6), Article e0127556. https://doi.org/10.1371/journal.pone.0127556
Gross, C. (2016). Scientific misconduct. Annual Review of Psychology, 67(1), 693–711.
https://doi.org/10.1146/annurev-psych-122414-033437
Gunsalus, C. K., & Robinson, A. D. (2018). Nine pitfalls of research misconduct. Nature,
557(7705), 297–299. https://doi.org/10.1038/d41586-018-05145-6
Heitman, E. (2014). Cross-cultural considerations in U.S. research ethics education.
Journal of Microbiology & Biology Education, 15(2), 130–134. https://doi.org/10.1128/
jmbe.v15i2.860
Heitman, E., & Litewka, S. (2011). International perspectives on plagiarism and
considerations for teaching international trainees. Urologic Oncology, 29(1), 104–108.
https://doi.org/10.1016/j.urolonc.2010.09.014
Keith-Spiegel, P., Sieber, J., & Koocher, G. P. (2010). Responding to research wrongdoing:
A user-friendly guide. http://users.neo.registeredsite.com/1/4/0/20883041/assets/
RRW_11-10.pdf
Keränen, L. (2006). Assessing the seriousness of research misconduct: Considerations
for sanction assignment. Accountability in Research, 13(2), 179–205. https://doi.org/
10.1080/08989620500440261
Kish-Gephart, J. J., Harrison, D. A., & Treviño, L. K. (2010). Bad apples, bad cases, and
bad barrels: Meta-analytic evidence about sources of unethical decisions at work.
Journal of Applied Psychology, 95(1), 1–31. https://doi.org/10.1037/a0017103
Martinson, B. C., Anderson, M. S., Crain, A. L., & de Vries, R. (2006). Scientists’ perceptions of organizational justice and self-reported misbehaviors. Journal of Empirical
Research on Human Research Ethics, 1(1), 51–66. https://doi.org/10.1525/jer.2006.1.1.51
Martinson, B. C., Thrush, C. R., & Crain, A. L. (2013). Development and validation of
the Survey of Organizational Research Climate (SORC). Science and Engineering
Ethics, 19(3), 813–834. https://doi.org/10.1007/s11948-012-9410-7
McIntosh, T., Antes, A. L., & DuBois, J. M. (2020). Navigating complex, ethical problems in professional life: A guide to teaching SMART strategies for decision-making.
Journal of Academic Ethics. Advance online publication. https://doi.org/10.1007/
s10805-020-09369-y
Moskovitz, C. (2019). Text recycling in scientific writing. Science and Engineering Ethics,
25(3), 813–851. https://doi.org/10.1007/s11948-017-0008-y
Mumford, M. D., Murphy, S. T., Connelly, S., Hill, J. H., Antes, A. L., Brown, R. P., &
Devenport, L. D. (2007). Environmental influences on ethical decision making:
Climate and environmental predictors of research integrity. Ethics & Behavior, 17(4),
337–366. https://doi.org/10.1080/10508420701519510
National Academies of Sciences, Engineering and Medicine. (2017). Fostering integrity
in research. National Academies Press. https://doi.org/10.17226/21896
Nuzzo, R. (2015). How scientists fool themselves—and how they can stop. Nature,
526(7572), 182–185. https://doi.org/10.1038/526182a
Professionalism and Integrity in Research Program. (2018a). Lab leadership and management best practices—Checklist. https://integrityprogram.org/wp-content/uploads/2018/
06/ResearchLeadershipMgmtBestPracticesChecklist.pdf
Professionalism and Integrity in Research Program. (2018b). Strategies for professional decision making: The SMART approach. https://integrityprogram.org/wp-content/uploads/
2018/06/SMARTStrategies_2018.pdf
Public Health Service Policies on Research Misconduct, 42 C.F.R. Part 93 (2005).
https://ori.hhs.gov/sites/default/files/42_cfr_parts_50_and_93_2005.pdf
Research Misconduct, 45 C.F.R. Part 689 (2002). https://www.nsf.gov/oig/_pdf/cfr/
45-CFR-689.pdf
Science News Staff. (2019, March 25). Duke University settles research misconduct
lawsuit for $112.5 million. Science. https://www.sciencemag.org/news/2019/03/
duke-university-settles-research-misconduct-lawsuit-1125-million
Shamoo, A. E., & Resnik, D. B. (2015). Responsible conduct of research (3rd ed.). Oxford
University Press.
Smith, R. (2006). Research misconduct: The poisoning of the well. Journal of the Royal
Society of Medicine, 99(5), 232–237. https://doi.org/10.1177/014107680609900514
Titus, S. L., Wells, J. A., & Rhoades, L. J. (2008). Repairing research integrity. Nature,
453(7198), 980–982. https://doi.org/10.1038/453980a
Watts, L. L., Medeiros, K. E., Mulhearn, T. J., Steele, L. M., Connelly, S., & Mumford,
M. D. (2017). Are ethics training programs improving? A meta-analytic review of
past and present ethics instruction in the sciences. Ethics & Behavior, 27(5), 351–384.
https://doi.org/10.1080/10508422.2016.1182025
Wells, J. A., Thrush, C. R., Martinson, B. C., May, T. A., Stickler, M., Callahan, E. C., &
Klomparens, K. L. (2014). Survey of organizational research climates in three
research intensive, doctoral granting universities. Journal of Empirical Research on
Human Research Ethics, 9(5), 72–88. https://doi.org/10.1177/1556264614552798
Wilson, K., Schreier, A., Griffin, A., & Resnik, D. (2007). Research records and the resolution of misconduct allegations at research universities. Accountability in Research,
14(1), 57–71. https://doi.org/10.1080/08989620601126017
Wright, D. E., Titus, S. L., & Cornelison, J. B. (2008). Mentoring and research mis­
conduct: An analysis of research mentoring in closed ORI cases. Science and Engineering
Ethics, 14(3), 323–336. https://doi.org/10.1007/s11948-008-9074-5
8
Ethics in Coercive Environments
Ensuring Voluntary Participation in Research
Michael D. Mumford, Cory Higgs, and Yash Gujar
William James (1890) defined psychology as "the science of mental life,
both of its phenomena and their conditions” (p. 1). One key condition
in which the human mind operates is within social institutions. We all have
gone to school—an institution. Most of us receive paychecks from firms, either
public or private—yet another institution. Some join the military, and a few
of us go to prison—still other institutions. The point here is that much of
human life, and the focus of the human mind, occurs in, and with respect to,
institutions. Accordingly, many psychological studies are intended to understand mental life as it occurs in institutional settings.
To understand the ethical issues confronting those interested in conducting
research in these or other institutional settings, one must recognize four key
characteristics of all institutions. First, institutions exist to provide products or
services to certain key constituencies or “customers” (Campbell, 2007). Second,
institutions are composed of multiple stakeholder groups who impose expectations on each other and act and react to events on the basis of the stakeholder
groups’ traditions, missions, and concerns (Rowley & Moldoveanu, 2003). Third,
institutions establish structures to control and direct interactions among stakeholders, often hierarchical structures, to ensure that requisite products and
services are provided (Katz & Kahn, 1978). Fourth, people's participation in a
stakeholder group or an institution is subject to both direct (e.g., pay) and
indirect (e.g., identity) control by the institution and/or stakeholder group
(Stajkovic & Luthans, 2003).
These observations are critical for understanding the key ethical concerns
arising for researchers conducting psychological studies in institutional settings.
The institution itself and stakeholder groups associated with the institution are
inherently coercive entities. They reward and punish people, and these rewards
and punishments may act to undermine voluntary participation in psychological
studies. Put differently, Principle E: Respect for People’s Rights and Dignity of
the American Psychological Association’s (APA; 2017) Ethical Principles of Psychologists and Code of Conduct (hereinafter referred to as the “APA Ethics Code”)
notes that psychologists should be “aware that special safeguards may be
necessary to protect the rights and welfare of persons or communities whose
vulnerabilities impair autonomous decision making” (p. 4). In this chapter,
we consider the key variables that influence voluntary participation of people,
stakeholders, and institutions in psychological studies.
BENEFITS AND RISKS
Institutions participate in studies when they see a benefit either for the institution as a whole or for key stakeholders within the institution. For example,
a military institution might agree to participate in a leadership study in the
hope of improving officer development. A prison might agree to participate in
a study of adult moral development in the hope of reducing recidivism. What
should be recognized here, however, is that institutions ultimately provide a
product or service, and if a study does not contribute in some way to the institution’s mission or its stakeholders’ interests, it would not be prudent to make
a request for participation to either the institution as a whole or to individuals
in their role as institutional actors.
Investigators must be able to clearly describe the benefit of a proposed
study to officials capable of representing the institution. Not only must the
benefits be described, but the risks of participating in the study to the institution, its stakeholders, and individual participants in relevant stakeholder
groups must also be described. Often investigators, when recruiting institutional support, fail to describe risks of harm. An informed, voluntary participation decision can be made only if both potential benefits and risks to the
institution have been described.
In this regard, however, it is not adequate simply to consider overt benefits
and overt risks—for example, reducing turnover in a firm through a study
examining a new model of job satisfaction. Studies may at times have indirect,
subtle benefits and risks associated with them—for example, a job satisfaction
study may be used to appraise managerial performance and may, potentially,
induce conflict between senior executives and midlevel managers. These subtle,
indirect risks must be noted, and strategies for minimizing both the overt and the more subtle, covert risks must be provided.
It should also be recognized that it is not sufficient simply to describe risks
and benefits to institutional officials. A description of the risks and benefits of
the study should be provided to representatives of each key stakeholder group
that might have a vested interest in the study because of their participation in
or the outcomes of the effort. For example, in a study of physical ability and
retirement timing for correctional officers, it is not sufficient to inform only
the bureau; the union, wardens, and human resources should also be informed
about the risks and benefits of the study.
The need to inform key stakeholders about the risks and benefits of a
proposed study broaches another noteworthy issue. Institutions are complex
entities, and it may not be clear exactly which stakeholders might be impacted
by a study and how they might be impacted. One implication of this observation
is that investigators must understand and have expertise with regard to the
institution. In addition to an understanding of the institution, it is often desirable for the investigator to have consultants or an advisory board who can
inform them with respect to the risks and benefits that might be relevant to
the members of key stakeholder groups. Thus, for the U.S. Army, a study of
management of negative soldier behavior might require both the involvement
of senior sergeants as consultants and advice from an advocate stakeholder
group, the sergeant major’s academy, to ensure that relevant stakeholders
have been fully informed about study benefits and risks (Connelly, 2016).
INFORMED CONSENT
If institutional representatives and key stakeholder groups agree to allow
a psychological study to take place, the investigator may proceed to collect
data from individuals. Support for the study by the institution and key stakeholders, however, has nothing to do with whether a particular individual will
or will not participate in a study. Put differently, the individual must also be
viewed as an autonomous decision maker whose participation in the study is
based on informed consent. According to the APA Ethics Code (Standard 8.02,
Informed Consent to Research (a)), informed consent requires describing the
research (e.g., duration and procedures of the study); describing factors that
might influence prospective participants' willingness to participate; describing risks and benefits,
including research benefits, of participating; allowing participants to withdraw and describing the consequences of withdrawing; describing limits of
confidentiality; describing incentives for participating; and providing ways to
contact responsible parties to obtain answers to questions.
All of these issues remain relevant to research in coercive institutions. Thus,
investigators must make themselves available to answer questions about the
nature and intent of a study. If participants wish to withdraw from the study
for any reason, they should be allowed to do so, and whenever possible, their
withdrawal should be treated as confidential information. Risks and potential
benefits of withdrawal should also be described. However, study benefits and
risks should be described not only with respect to the individual, but also with
respect to the institution as a whole and the stakeholder groups that have a
vested interest in the study.
In institutional studies, a number of unique considerations arise in obtaining
informed consent from individual participants. One might assume that the
institution can provide consent because of the person’s involvement in the
institution. However, neither membership in an institution nor involvement in
a stakeholder group can substitute for individual informed consent. A manager
cannot consent on behalf of employees; a teacher cannot provide consent or
permission on behalf of students. Either the individual participant must provide
informed consent, or a person directly responsible for this person as an individual (e.g., parent, guardian) must provide consent when the person is unable
to understand the information being provided in order to reach an informed
decision with respect to participation (David et al., 2001).
Another issue of some concern is the potential for administrative actions to
be taken on the basis of the data collected in a study. If a study is conducted to
support administrative action, it falls under an institutional legal rubric, and
informed consent, although desirable, may not strictly be necessary. When a
study is conducted purely for research purposes, however, participation in
the study must be treated as a separate issue from any potential administrative
actions. A person cannot be granted visitation privileges, an administrative
action, for participating in a prison study. Data, of course, may inform subsequent institutional policy. However, the participant cannot per se be subject
to administrative action based on participation or nonparticipation.
Our foregoing observations, of course, raise a key question with respect to
volunteering for studies in institutional settings. Institutions have control
over significant outcomes for individual participants in a study. If outcomes
(e.g., bonus, time off) are to be provided by the institution for participation in
a study, the investigator must expressly note this when informed consent is
obtained and expressly define the procedures for providing such rewards. For
example, if a lottery for a prize is to be awarded, the procedures by which this
lottery is to be conducted should be described.
Overt rewards (e.g., overtime pay), however, are not the only means by which institutions might coerce participation. Participants may also be coerced by subtle social
mechanisms. For example, senior officers standing in the back of the data
collection room, even if they say nothing, may make military personnel feel
coerced to participate. In fact, social pressures to participate in a study may be
quite pervasive. To manage these social pressures, it is generally advisable
that studies be conducted in relatively private locations and that investigators
treat as confidential who has or has not agreed to participate. Notably, this
confidentiality should be maintained with respect to both the institution and
members of relevant stakeholder groups.
In discussions of informed consent, the issues of deception and debriefing
often arise. In institutional studies, because of the involvement and vested
interests of multiple stakeholder groups, deception in studies is generally considered inappropriate. Deception in institutional studies may induce significant conflict among stakeholders, acting to undermine the integrity of a study.
Debriefing, in contrast, should be routinely conducted, and typically this is
done by providing all participants and relevant stakeholders in the institution
with a written summary of the findings of the study.
In providing written debriefings, however, another consideration should be
borne in mind. Debriefings are useful only if understood by participants and
stakeholders. Thus, debriefings or reports should be written clearly without
undue jargon or confusing language. Similarly, given the scope of some institutional studies, a one-size-fits-all informed consent document may not be
appropriate. Rather, multiple consent documents may be needed to ensure that
various stakeholder groups, as well as the individual members of these groups,
fully understand the nature and intent of the study.
DATA OWNERSHIP
When institutions sponsor studies, they may assume that they own the
data. This assumption clearly may conflict with the informed consent document, and in cases where such conflict exists, the informed consent document
should determine the course of action taken. One way investigators might
minimize potential conflict in this regard is to ensure that all data are treated
and stored anonymously. Although this solution often works, when data
are collected online, it can be far more difficult, because of tracking technology,
to protect participant anonymity (Nosek et al., 2002). Under these conditions, perhaps with the help of information security experts, investigators
should establish procedures to prevent inappropriate tracking or hacking of
participant identifiers.
Although full anonymity is desirable, in many institutional studies it may
not prove possible. The problem arises from the need to link data collected
in a study to administrative data routinely collected by the institution—for
example, when yearly performance reviews are being treated as the criterion
in a test validation study. Under these conditions, the unique identifiers of a participant should be back-coded to study-unique identifiers, and only back-coded data should be stored.
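To make this back-coding step concrete, the brief Python sketch below illustrates one way it might be implemented; it is an illustration only, and the record structure, field names (e.g., employee_id), and use of random hexadecimal codes are assumptions rather than procedures prescribed in this chapter.
# Illustrative sketch only: replace institutional identifiers with random,
# study-unique codes ("back-coding"). Record and field names are hypothetical.
import secrets

def back_code(records, id_field="employee_id"):
    """Return back-coded records plus the key map linking old and new identifiers."""
    key_map = {}
    recoded = []
    for record in records:
        institutional_id = record[id_field]
        if institutional_id not in key_map:
            key_map[institutional_id] = secrets.token_hex(8)  # random study ID
        new_record = {k: v for k, v in record.items() if k != id_field}
        new_record["study_id"] = key_map[institutional_id]
        recoded.append(new_record)
    return recoded, key_map

recoded, key_map = back_code([
    {"employee_id": "E1001", "performance_rating": 4},
    {"employee_id": "E1002", "performance_rating": 3},
])
Only the back-coded records would be stored with the study data; the key map, which permits linkage to administrative records, would be held separately and securely, or destroyed once linkage is no longer needed.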
In presenting the findings of a study, of course, only aggregate data should
be provided. Moreover, some caution is called for in collecting and storing
demographic information, however relevant it may be to the study design.
Specifically, certain combinations of demographic data may allow individual
participants to be identified. Thus, investigators should review the demographic data collected and ensure that any combination of such data will
result in a minimum of six to seven participants falling into a particular category. If this criterion cannot be met, one or more of the demographic identifiers should be dropped.
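To illustrate this cell-size review, the brief Python sketch below flags any combination of demographic values shared by fewer than a minimum number of participants; the field names and the threshold of six (following the six-to-seven guideline above) are assumptions for illustration only.
# Illustrative sketch only: flag combinations of demographic values shared by
# fewer than a minimum number of participants. Fields and threshold are assumed.
from collections import Counter

def small_cells(records, demographic_fields, minimum=6):
    """Return demographic combinations represented by fewer than `minimum` participants."""
    counts = Counter(
        tuple(record[field] for field in demographic_fields)
        for record in records
    )
    return {combination: n for combination, n in counts.items() if n < minimum}

flagged = small_cells(
    [{"gender": "F", "division": "finance"}, {"gender": "M", "division": "finance"}],
    demographic_fields=["gender", "division"],
)
# Any flagged combination signals that one or more demographic identifiers
# should be dropped or collapsed before the data are stored or reported.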
Both institutions and investigators wish to control the data collected in
many studies. For institutions, the issue boils down to protecting the institution from regulators or competitors. For investigators, the issue boils down to
protecting publication rights. What must be recognized here, however, is that
science requires replication (Bohannon, 2015), and in certain cases, initial
data analyses may be subject to replication or extension by other investigators. One implication of this statement is that deidentified study data should
be made publicly available. Another implication is that data from any institutional study should be maintained in an open-source database for a minimum
of 5 to 7 years (Atwater et al., 2014).
CONFLICT AMONG STAKEHOLDERS
Institutional studies are often done to provide information bearing on subsequent institutional policy. This observation is noteworthy because different
stakeholders may advocate very different policies, and from the perspective of
the stakeholder group, the study is not a neutral, fact-finding effort. Rather,
it may be a threat or, alternatively, a set of facts that can be leveraged for
personal political gain. It is not possible to remove conflict, politics, and personalization from reactions to any study conducted in an institutional setting.
Investigators, however, must act to minimize such reactions.
The global principle to be applied in addressing such issues is maintenance
of scientific objectivity. One aspect of objectivity is to clearly state the limitations of any study conducted in an institutional setting. Another aspect of
objectivity is to design the study with respect to the substantive issue of concern by taking into account different perspectives on the issue. Still another
aspect of objectivity is that investigators conducting research in institutional
settings should not seek to advocate for a particular stakeholder group or a
particular model of the issue at hand.
At a more practical level, however, studies conducted in institutional settings must maintain separation. By the term “separation” we refer to the fact
that any single study only informs; it does not determine institutional policy.
Institutions may choose to use or ignore the findings of a particular study in
determining institutional policy. Investigators must not only recognize the
necessary separation, they must also seek to create conditions in which separation is evident—for example, by noting to participants that the study does
not determine policy and removing themselves from policy decisions following
presentation of findings.
INCLUSIVENESS
People derive their identity from not only the institutions and stakeholder
groups they are affiliated with but also their background and life history. Frequently in institutions, gender, ethnicity, disability, religion, and sexual orientation, among other variables, may act to define unique stakeholder groups
with unique concerns. These concerns, moreover, are relevant to both how
studies are conducted and how the resulting data are used. Recognition of the
aforementioned concerns and subsequent research actions should be evident
to any psychologist working in organizations in which adverse impact is
routinely assessed and to any psychologist working in prisons where gangs
are typically ethnically based.
The APA Ethics Code explicitly recognizes the need to respect people’s
rights and dignity. In study design, many actions can be taken to ensure
respect for people’s rights. For example, demographic questions should be
written to allow people to adequately describe themselves. Questions that
constitute an invasion of privacy should not be asked. Questions should not
be presented in such a way as to activate gender or ethnic stereotypes. Clearly,
the content of the questions is an important concern (Mumford et al., 2012);
however, it should also be recognized that questions not directly relevant to
the substantive concerns of the study should not be asked. Appropriate review
panels are often integral to addressing all these issues.
Respect, however, is not solely an issue of study design; it also bears on how
analyses are conducted and findings reported. Most studies involve an outcome measure or a dependent variable. If the dependent variable is biased or
unfair—for example, if most managers dislike female employees—predictors
will maintain this bias unless the confounding variable is controlled. Investigators must think about how to control such biases. Moreover, in many cases,
explicit analyses to assess fairness should be conducted and the fairness of
conclusions reported (Truxillo et al., 2004).
REPORTING AND PUBLICATION
In institutions as elsewhere, fabrication, falsification, and plagiarism are viewed
as fraud and are considered serious incidents of misconduct (Steneck, 2009).
Incidents of fabrication and falsification have occurred in studies of institutions;
a case in point may be found in the retraction of multiple articles on authentic
leadership in industrial firms (Atwater et al., 2014). The motivations for fabrication and falsification are manifold. More often than not, however, they
involve a search for the next publication as a vehicle for career enhancement.
This observation leads to an important conclusion: When conducting studies in
institutional settings, the investigator’s goal must be service to the institution
and its stakeholders, not career advancement.
With that said, institutional research presents a unique quandary. Institutions often compensate the investigator for the study being conducted. Such
payments constitute a potential conflict of interest. This conflict, however, may
become quite real if a particular stakeholder group in an institution has decided
to pay for a study to advance a certain administrative policy. Here the investigator may face substantial pressure to find only results supporting this policy
position. To complicate matters even further, conflicts of interest may not
always be the result of institutional pressures; they may also arise because the
investigator wishes to sell a product or service to the institution, and this potential downstream payoff may create a conflict of interest for the investigator to
produce certain results or satisfy certain constituencies within the institution.
Such conflicts may result in selective reporting of findings through intentional
or unintentional actions such as inappropriate data trimming or inappropriate
rejection of a certain outcome measure for producing “unacceptable” findings.
Traditionally, conflicts of interests have been managed through reporting
of the source and amount of funding received for any study. Such actions,
although desirable, may not always fully address the issue. One alternative
approach is to request that an independent investigator replicate all analyses,
recognizing their contribution through authorship or acknowledgment. Another
alternative is to involve multiple investigators in the study, including investigators who are not routinely exposed to the institution or who do not stand
to profit from subsequent work for the institution. Yet another approach is to
ensure that multiple stakeholder groups, ideally with diverse interests, are
involved in providing funding for a study. These varied approaches for
managing potential conflicts of interests should not be viewed as fully adequate.
Rather, multiple approaches should be used to ensure that a package of actions
is available to manage potential conflicts of interests.
All institutional studies, however, have embedded in them a conflict of
interest. Investigators, as scientists, have a responsibility to ensure the public
dissemination of study findings. Institutions supporting the study may not
want study findings to be shared, and often their reasons are legitimate. An
industrial firm may not want problems in its workforce disseminated to
competitors. A prison may not want issues in correctional officer block control
(i.e., control over a section of the prison) disseminated until they have had
time to put new policies into place. As a result, a conflict arises as to whether, when,
and how study findings can be disseminated.
Investigators and institutions should formulate an action plan for managing
potential conflicts of interest with respect to dissemination. For example, the
expectation for publication of findings should be established at the outset of a
study. Data not considered publishable should be identified when the study
design is formulated. Institutions, if they desire, should be granted anonymity.
The timing of publication should be agreed upon so as not to interfere with institutional policy decisions or program deployment. Although there is no single
approach for resolving conflicts between investigators and institutions regarding
dissemination, it is the responsibility of the investigator to bring this issue to
the fore and to negotiate an appropriate, fair resolution for both parties.
SUSTAINABILITY
When institutions commission a study, typically the expectation is that the
study will produce findings of long-term value to the institution. Put differently, institutions seek findings and products from studies that are sustainable
over time. The importance of sustainability is explicitly noted in the Society
for Industrial and Organizational Psychology’s ethical guidelines, which also
note that investigators have a responsibility to act on misuse of findings and
procedures emerging as a result of an institutional study (APA, 2017).
The issue of sustainability implies that investigators must know the boundaries of their own expertise. If an investigator does not know the area in which
they are working, it is unlikely that the study findings or products emerging
from the study will be sustainable. Of course, many institutional issues are
complex and involve multiple forms of expertise, resulting in a situation in
which no single investigator has all the requisite expertise. Under these conditions, it is the responsibility of the senior investigators to recruit colleagues
who have the expertise that is currently lacking. In fact, skilled investigators
constantly monitor the issues arising in institutional projects to ensure that
requisite expertise is available.
Expertise, however, is important in another way. When adequate expertise
is lacking, it is difficult to identify abuses in the way the institution uses the
study findings or products. Not only does the prevention of abuses depend on
expertise; it also requires investigators to establish formal or informal systems
for monitoring how study findings or products are being used by the institution. Investigators must also know how to report abuses to responsible institutional officials if they occur.
Although the need for expertise in institutional studies is crucial for ethical
conduct, a point often recognized by institutions when recruiting investigators, the financial incentives attached to many institutional studies make it
attractive for investigators to claim expertise they do not have. Such claims
represent false advertising, and a claim, if acted on, may come at the expense
of the institution and its stakeholders. This observation has two noteworthy
implications. First, claims of expertise should be backed by documentation
such as performance of similar studies in the past. Second, investigators should
withdraw from potential studies if they or others believe they lack the requisite expertise. Put somewhat differently, investigators cannot coerce institutions any more than institutions can coerce investigators.
CASE STUDY
An investigator who has a long history of research on institutional leadership
focused on leader thinking skills has been contacted by a senior manager of a
large, multidivision industrial products firm. The manager is on the short list
for successor to the CEO of the firm. He asks the leadership scholar to
conduct a study of the need for sincerity on the part of midlevel managers in
the firm.
The leadership scholar is enthusiastic about the project, in part because the
study will be adequately funded, in part because they have been assured that
all divisions of the firm will participate, and in part because a future stream of
work is envisioned in which midlevel managers will be continuously assessed
on their sincerity—work the leadership scholar will be involved in and paid
for and that might provide the basis for subsequent publications.
The measure developed by the leadership scholar is a behavioral self-report
measure that includes items such as “I always describe my plans to followers”
and “I always adhere to commitments I have made.” This measure is administered to 200 midlevel managers at an event where the project sponsor discusses the importance of leader sincerity with the managers. The measure
is to be validated against follower liking for the leader and senior managers’
ratings of the midlevel managers’ performance; the latter criterion is dropped
because of the investigator’s belief that given the firm’s financial performance,
all midlevel managers are performing well.
Initial data analyses indicate that the sincerity measure is not an especially
effective predictor of followers’ liking for the leader. However, if data gathered
in two divisions, finance and research and development, are dropped, the predictive relationships become much stronger. On the basis of these findings, the
senior manager asks the leadership scholar to formulate a program for developing management sincerity. Although the leadership scholar does not know
much about training and development, they do know leadership, so they execute the training program across the firm. A few complaints are registered, but
the senior manager who initiated the study sees no real problem and continues
the program until a new CEO, not the senior manager, ends the program.
As the leadership scholar now has more time, they begin to prepare the
findings from the initial study for publication. In working on the publication,
they notice that dropping the finance and research and development middle managers reduces power and sample size somewhat. However, although only findings from the retained participants are reported, all participants recruited are
reported in the sample description. Given the findings obtained, the leadership
scholar argues in the publication that all leaders must always be sincere.
Are there problems here? Of course. Consider a few examples—and think
about others. The investigator never negotiated publication of the article, and
the article does not note that sincerity did not appear to work in two divisions—
data trimming resulting from conflicts of interests. No stakeholders were
fully informed of risks and benefits. Remember, in most situations it is more
important for leaders to exercise influence than to be sincere. Failure to
take into account the need for leaders to exercise influence and perform effectively means that risks and benefits have not been fully considered. Moreover,
not all stakeholders were informed—note that nothing was said about other
divisional directors. To make matters worse, the speech of the sponsoring
leader to midlevel managers can be viewed as a form of coercion. Expertise in
leader thinking skills has little to do with leader sincerity, and because of this
lack of expertise, the investigator discounted complaints—complaints indicating that the development program was unsuitable.
CONCLUSION
There are many other ethical issues when conducting research in institutions.
However, the foregoing observations seem sufficient to make our basic point.
It is far too easy to engage in unethical practices when conducting research
in coercive institutional environments. We hope this overview provides an
initial impetus for considering these ethical issues in greater depth, which is
crucial for doing our best science and conducting science in the service of the
institutions that make our day-to-day lives possible.
REFERENCES
American Psychological Association. (2017). Ethical principles of psychologists and code
of conduct (2002, amended effective June 1, 2010, and January 1, 2017). http://www.apa.org/ethics/code/ethics-code-2017.pdf
Atwater, L. E., Mumford, M. D., Schriesheim, C. A., & Yammarino, F. J. (2014). Retraction of leadership articles: Causes and prevention. The Leadership Quarterly, 25(6),
1174–1180. https://doi.org/10.1016/j.leaqua.2014.10.006
Bohannon, J. (2015). Reproducibility: Many psychology papers fail replication test.
Science, 349(6251), 910–911. https://doi.org/10.1126/science.349.6251.910
Campbell, J. L. (2007). Why would corporations behave in socially responsible ways?
An institutional theory of corporate social responsibility. Academy of Management
Review, 32(3), 946–967. https://doi.org/10.5465/amr.2007.25275684
Connelly, M. (2016). Negative soldier behaviors [Proposal submitted to the U.S. Army
Research Institute, Fort Hood, TX].
David, M., Edwards, R., & Alldred, P. (2001). Children and school-based research:
“Informed consent” or “educated consent”? British Educational Research Journal,
27(3), 347–365. https://doi.org/10.1080/01411920120048340
James, W. (1890). The principles of psychology. Holt.
Katz, D., & Kahn, R. L. (1978). The social psychology of organizations. Wiley.
Mumford, M. D., Barrett, J. D., & Hester, K. S. (2012). Background data: Use of experiential knowledge in personnel selection. In N. Schmitt (Ed.), The Oxford handbook of
personnel assessment and selection (pp. 353–382). Oxford University Press.
Nosek, B. A., Banaji, M. R., & Greenwald, A. G. (2002). E-research: Ethics, security,
design, and control in psychological research on the internet. Journal of Social Issues,
58(1), 161–176. https://doi.org/10.1111/1540-4560.00254
Rowley, T. I., & Moldoveanu, M. (2003). When will stakeholder groups act? An
interest- and identity-based model of stakeholder group mobilization. Academy of
Management Review, 28(2), 204–219. https://doi.org/10.5465/amr.2003.9416080
Stajkovic, A. D., & Luthans, F. (2003). Behavioral management and task performance in
organizations: Conceptual background, meta-analysis, and test of alternative models.
Personnel Psychology, 56(1), 155–194. https://doi.org/10.1111/j.1744-6570.2003.tb00147.x
Steneck, N. H. (2009). Introduction to the responsible conduct of research. Office of Research
Integrity.
Truxillo, D. M., Steiner, D. D., & Gilliland, S. W. (2004). The importance of organizational justice in personnel selection: Defining when selection fairness really matters.
International Journal of Selection and Assessment, 12(1–2), 39–53. https://doi.org/10.1111/j.0965-075X.2004.00262.x
9
Ethical Challenges in International and Global Research
B. R. Simon Rosser, Eleanor Maticka-Tyndale, and Michael W. Ross
Sometimes it is not sufficient just to undertake research in one's home
country. Many of the world’s greatest public health challenges are in
so-called developing countries. Others are truly global in scope and call for a
global response. In 1990, the Global Forum for Health Research captured these
challenges in the term “10/90 gap,” highlighting the disparity in funding for
health research (10%) in countries experiencing the highest proportion (90%)
of preventable deaths globally (Commission on Health Research for Develop­
ment, 1990). More recently, Luchetti (2014) summarized the work of several
researchers demonstrating that the excessive morbidity and mortality burden
in low- and middle-income countries is associated with social and behavioral
factors rather than the absence of technologies of prevention and treatment.
This finding places solutions to this public health challenge in the domain of
psychological and other social science research. To have the greatest impact on
solving the world’s most pressing challenges, psychologists and other social
scientists ethically should prioritize studies and collaborations on the basis of
where the need is greatest and where interventions can do the greatest good.
In this chapter, the unique challenges researchers face when working internationally and globally are explored using examples from our work advancing
sexual health. This work includes experience evaluating a comprehensive
sexual health intervention tailored for health workers in East Africa; lessons
learned from conducting a study of more than 170,000 gay, bisexual, and other
men who have sex with men (MSM) in 25 languages and 38 European
countries; research on stigmatized behavior (e.g., homosexuality, drug use)
in countries where it is illegal and participants fear for their lives; and research
on HIV prevention in five countries in sub-Saharan Africa.
ETHICAL GUIDELINES AND OTHER STANDARDS
The ethical principles outlined in Chapter 1, including those in the Belmont
Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, 1979), and those set forth in the Declaration
of Helsinki (World Medical Association, 2018) are universal in their application. Regardless of whether research is conducted domestically or internationally, three basic principles enshrined in these documents apply: respect
for persons and their autonomy, beneficence and the duty to minimize
harm and maximize benefits, and justice in ensuring equality in participation in research.
The United States has been a world leader in health research, with the
National Institutes of Health (NIH) being the largest funder. In 2013, of
the estimated $37.1 billion spent by the 10 largest funders, approximately
$26.1 billion (or 70%) was NIH funding (Viergever & Hendriks, 2016). Given
the imbalance in health burdens across the globe, it can be argued that the
principle of minimizing harm and maximizing benefit places an ethical responsibility on researchers to propose and design studies where they can do the
greatest good, which applies both to place and population. U.S. federal funding
agencies reflect this priority by assigning "environment" as one of the (usually five) criteria determining the potential impact of the research. Although
many researchers focus their work domestically, at times it is both ethically
compelling and scientifically sound to propose research in communities beyond
U.S. borders. When that happens, there are a number of additional ethical
challenges to consider.
ENHANCING ETHICAL RESEARCH IN INTERNATIONAL SETTINGS
When proposing an international study, particularly in a low- or middle-income
country, there are several prevention strategies or best practices to promote
rigorous and ethical science (see Table 9.1). The sections that follow discuss and
provide case examples for the three greatest challenges when working internationally: (a) recognizing one’s own limitations, perspective, and expertise;
(b) ensuring adequate cultural expertise appropriate to the setting; and (c) being
conscious of positionalities and consequent power relations vis-à-vis research
partners, staff, and participants. In addition, there are special considerations
when researching oppressed, stigmatized, and illegal populations.
TABLE 9.1. Strategies to Enhance Ethical Research in International Settings
Principle: Respect for persons
• Leadership: Use a multiple principal investigator leadership plan with an in-country researcher sharing responsibility for all aspects.
• Research team: Commit to advancing the careers of everyone on the research team, including in-country members. Think through how participation in this study will build each team member's and in-country capacity.
• Cultural and social humility: Recognize your own cultural and social biases. Commit to increasing your cultural knowledge (e.g., cultural and/or language immersion). Model humility in referring to your own background.
• Cultural respect: Anticipate that in-country norms and laws may require extra meetings, levels of review, paperwork, and focus on politics.
• Cultural competence: Consider culture and situation in deciding what questions are asked, exploring how they are understood in the culture, and interpreting findings.
Principle: Beneficence
• Work with the in-country team collaboratively.
• Proactively ask about legal requirements for research (e.g., data ownership, extra levels of institutional review board review).
• Honor local practices and expectations (when feasible and appropriate).
Principle: Justice
• Within the research: Ensure that reasonable, nonexploitative procedures are established that are feasible and culturally respectful in the country. Plan for extra meetings in-country to review all protocols. Ensure sufficient in-country budget to manage all activities. In reporting results, ensure the in-country team has opportunities to present at conferences and to publish.
• Social justice: Justice (and cultural respect) can mean prioritizing the least powerful in society (e.g., women living in staunchly patriarchal systems, ethnic groups, refugees, Indigenous communities in countries where they are stigmatized, people dispossessed of rights, power, and position).
Recognizing One's Own Limitations, Perspective, and Expertise
When working internationally, it is important to cultivate both cultural and social humility. Often, the U.S.-based researcher is the principal investigator,
wrote the research grant application, has responsibility for the success of the
study, controls the funds, and has a top team of respected experts. The danger,
then, is to assume that the principal investigator knows best. Likely, they do
not. Our collective history of conducting research internationally includes
numerous examples of principal investigators acting like the “ugly American”
stereotype. "Ugly American" is a pejorative term used
to refer to perceptions of “American citizens as exhibiting loud, arrogant,
demeaning, thoughtless, ignorant, and ethnocentric behavior mainly abroad,
but also at home” (Wikipedia, 2020, para. 1). In conducting research, any one
or a combination of the following examples is “ugly” (meaning the opposite
of “best practices”) and should be avoided:
• designing the study without adequate input about the social and cultural
context
• flying into a “developing world” situation and taking over
• demanding and pressuring local partners to produce results
• inadequately resourcing the local staff
• using methods and measures that are not culturally appropriate or respectful
or do not consider the social context and conditions of living and working
in the research locale
• flying out with the data to analyze it back in the United States, again
without any real social or cultural interpretation (known disparagingly by
international agencies as “helicopter research”)
Ultimately, the U.S. researchers may take the credit and senior authorships
or alternatively blame the population, country, or culture if something goes
awry. Any or all of these are understandable in the high-pressure world of
research and academia, but good ethical research conduct applies not only
to the treatment of participants but also to every aspect of planning and
conducting the research as well as to everyone involved in the study.
Two principles in the American Psychological Association (2017) Ethical
Principles of Psychologists and Code of Conduct may be applied to research ethics
in international settings by both psychologists and other social scientists. First,
psychologists should not practice beyond their expertise. The first ethical obligation when conducting research internationally is to recognize that one does
not have expertise in the culture or social systems of that country and must
cultivate a stance of cultural and social humility. Second, psychologists consult with, refer to, or cooperate with other professionals and institutions to
the extent needed to serve the best interests of those with whom they work
and the participants in their research. In international settings, this commonly
means they partner to conduct research with in-country researchers and participants in the coproduction of research rather than conducting research on
them. A recommended practice, wherever possible, is to reflect this partnership in a multiple principal investigator model, ensuring that in-country
researchers share leadership and authority for the project. For example, the
third author (MWR) had the experience of working with a highly stigmatized
minority population in a sub-Saharan African country with an experienced
and sensitive (to the social and political risks) African coprincipal investigator.
In contrast, a doctoral student from a Western country arrived and proceeded
to tell the African investigator that this was “his” research area and that she
should stop her work in the area.
Cultural humility in clinical practice is defined as “the ability to maintain an
interpersonal stance that is other-oriented (or open to the other) in relation to
aspects of cultural identity that are most important to the client” (Hook et al.,
2013, p. 354; Waters & Asbill, 2013). Applied to research ethics, cultural
humility includes the ability to appreciate that research colleagues in other
cultures likely have a better understanding of the meaning, nuances, and
application of the basic principles of ethics in their own sociocultural context
than do researchers not from that context. This does not mean that first-world
researchers abrogate their responsibility to lead the study, or that U.S.-based
researchers cannot raise ethical concerns. It does mean they need to recognize,
as essential to the research process, that cultural and social humility must be
applied to the interpretation and application of ethical principles. This recognition includes a presumption that colleagues in other countries are as concerned
about ethics as visiting researchers are, and that the ethical conduct of research
results only when visiting researchers acknowledge differences in interpretation and application and work together with in-country colleagues to
mutually discern what is the best ethical action given the particular situation
the study is facing.
Three factors guide cultural humility: (a) a lifelong commitment to self-evaluation and self-critique, (b) a commitment to address power imbalances,
and (c) a commitment to develop partnerships with people and groups who
advocate for others (Tervalon & Murray-García, 1998). Although developed to
address cultural humility among mental health professionals domestically, all
three factors can promote ethical research when working internationally.
Sociologically, part of the challenge for U.S.-based researchers is to recognize their own cultural lens when applied to ethics in research. Compared
with other countries, the United States prioritizes the individual and their
rights and responsibilities over other considerations and interprets ethics
through this cultural lens. Almost no "developing country" context prioritizes the individual as strongly as the United States does. In the United
States, the researcher’s ethical obligation is focused on assessing risks to the
individual participant and incorporating strategies to reduce the risk of harm
to the individual. Most other cultures recognize individual rights but acknowledge that these rights and responsibilities are situated within and need to be
balanced with the rights and needs of larger social units, including families,
tribes, local communities, and even nations. In more communally oriented
settings (much of the “developing world”), individual rights are not prioritized
over familial or communal rights. Instead, the three are viewed as interwoven
and in need of consideration as a singular whole rather than as separate entities. Navigating the waters between individual and group rights and responsibilities can be a difficult and time-consuming process and one that continues
throughout the course of the research project.
The balance between consent of the community and the individual is illustrated in research on HIV prevention located in multiple communities across
several countries. It is often necessary to obtain permission from regional
government representatives (e.g., state, province, county, district) and community and tribal leaders or elders before entering local communities. This process typically involves gifts in exchange for permission to conduct research
and, in some cases, commitments to provide future gifts or benefits as the
research progresses. In one research project, the regional government minister
changed four times in the course of a 5-year project, and each new minister
required new gifts and commitments and, in one instance, a change in the
permission granted to the research team that required modifications to the
research design (Maticka-Tyndale & Onokerhoraye, 2012). The ethical dilemmas
debated by the institutional review board (IRB) at the Canadian principal
investigator’s university included whether gifts in exchange for permission
constituted bribes and, in the context of the power relations within the community, whether individual community members actually possessed the autonomy
to decline to participate in research endorsed by community leaders and elders.
Within the context of the local cultures and social systems, the gifts showed
respect for and acknowledgment of the authority of local leadership. They also
constituted a portion of the income of community leaders. Individual agreement to participate in the research was dependent on endorsement of leaders
who had been delegated the responsibility for looking after the well-being of
the community and controlling access to the community by outsiders.
In another example, the third author (MWR) debated the necessity of providing payment, in a study of military recruits in sub-Saharan Africa, to the
commanding officers. The payments were made, and the recruitment, administration, and follow-up in the study were among the most accurate and timely
of any experienced by this author. Access to adequate administration and
organization, particularly in more regimented settings, was valuable and
worthy of recompense (Ross et al., 2006).
Another ethical dilemma was posed by the large number of orphans in
communities most seriously affected by AIDS (Maticka-Tyndale, 2004). In a
research project on HIV prevention among youth, the research team grappled
with the question of who would be approached for consent for participation among underage orphans (Maticka-Tyndale, 2004). Often, there was no
guardian, with the eldest children taking responsibility for finding food and
caring for the youngest. These children would be excluded from the research
if consent from a parent or legal guardian was required. Village leaders presented a solution at a community meeting. The research was explained to
the entire community, with consent sought from the entire community for
the children’s participation. Each child was then given the opportunity to
individually assent or decline to participate.
Decisions about compensation for research participants illustrate another
difference in interpretation and practice of ethics. When conducting research
in the United States, it is viewed as ethically correct to compensate participants for their time and effort. In two studies we conducted, one with health
students (Rosser et al., 2020) and another with teachers (Maticka-Tyndale
et al., 2004) in east African countries, we were advised against compensation.
The first concern was that compensation at any meaningful amount could be
coercive and seriously bias recruitment. A second concern was that offering
compensation would privilege internationally funded studies over domestic
ones. A third was an appreciation that any available funds could have more
impact for students if applied to their common welfare (e.g., to the university
or local schools).
At the other end of the spectrum, researchers need to ask what legacy they
are leaving. For example, the third author (MWR) was shown a room in an
African hospital that researchers from a prominent university had just left
after a well-funded 10-year study. The room contained a worn metal desk,
a chair, and an empty filing cabinet. Across several studies in schools, the second
author (EM-T) found equipment abandoned because it required electricity
from generators to run (which the research teams brought for their projects
but had sold or taken home at the end of their studies). What was left behind
was that study’s physical legacy. (Even if the research produced positive outcomes, leaving behind equipment, furniture, and other objects gives the
appearance that that is the total legacy and burdens the local community with
the responsibility to dispose of things that they do not need or cannot use.)
Researchers should ask themselves whether they have contributed to the
country in terms of training and resources. Travel can be a sensitive issue,
especially if, in what is meant to be a collaboration, all the travel is by U.S.
researchers into the country, or if only the U.S. researchers get to present
results at the most prestigious conferences. On multiple dimensions, the
balance of benefits for each country’s investigators at the end of the study
needs to be equitable.
The language in which research is conducted presents another challenge
for both participation and data quality. English is the language of formal
instruction in many countries of sub-Saharan Africa, with Kiswahili typically
used in informal quotidian conversation across different tribal and native-language groups. In the formative research for the study to develop and evaluate
a sexual health curriculum (Rosser & Leshabari, 2018–2023), participants
were students who were multilingual and attended a Western-type university
where classes were taught in English. Initially, the third author (MWR) and
colleagues planned to conduct focus groups in English, reasoning that the
students learned in English, that Kiswahili was for most of them their third
or fourth language, that English would allow all the team to observe, and that
recording the groups in English would save in translation costs. However,
when we piloted procedures in both Kiswahili and English and compared the
quality of the transcripts, it quickly became apparent that students felt freer
to talk and provided richer, more honest answers when speaking in Kiswahili.
When procedures were conducted in English, the students assumed that the
English requirement was primarily for the Western researchers’ benefit and
withheld how they truly felt. Culturally, in many sub-Saharan African countries
and Indigenous groups, English was the language of colonization. Reticence in
English language communication is well engrained and passed down over the
generations, whereas native and lingua franca languages (i.e., languages such
as Kiswahili that are widely used in East Africa where native languages differ
between tribes) remain the languages of cultural resistance and the “language
where you let your hair down.” The protocol was changed to allow students to
discuss in whichever language they felt most comfortable.
Further, in areas where sensitive subjects such as sexuality or deviance
from cultural norms are studied, the richness and nuance in the original
language may be enhanced, reduced, or entirely missing when translated,
altering the meaning. The Sapir–Whorf hypothesis posits that the structure of
a language affects its speakers’ worldview or cognition, and thus people’s
perceptions are relative to their spoken language (Lucy, 2001). This hypothesis
fully applies to psychological research in other languages, and researchers
ignore it at their peril. For example, translating words on sexual practices into
any other language is fraught with the potential for misunderstanding. In one
study, the third author (MWR) asked what the word “homosexual” meant in
another language. It was explained that it meant a man who wore women’s
clothes. Conversely, some terms in English may not translate, or may translate
only into derogatory or condemnatory terms. For example, “queer” is a term
in popular use in younger cohorts in the United States. But it was considered a
derogatory, highly insulting term in one sub-Saharan African country. Ultimately, the researchers had to remove the term and reprint the questionnaires because the MSM population refused to participate if the
study used such insulting language. As one of the informants who had studied
in the United States explained, “This was the equivalent of using the N-word
in the U.S.”
Ensuring Adequate Cultural Expertise Appropriate to the Setting
When conducting research internationally, it is essential to ensure cultural
expertise at all stages of the project from design of the study through publication of results. Beyond ensuring that the study team has the expertise to conduct the study competently, adequate cultural expertise enables the researchers
to recognize nuanced answers; gaps in what was not addressed, said, or evaded;
and even how to interpret silences, delays, and nonanswers.
At the structural level, researchers have a responsibility to think beyond
their individual studies. Many low- and middle-income countries with limited
research infrastructure have experienced the problem of their most talented
health and research professionals being recruited to high-income countries
and prestigious universities. To combat this, researchers have a duty to help
build the research infrastructure in international settings. A commitment to
meaningful collaboration includes asking how investigators will train and
increase the expertise of in-country colleagues and their students.
A typical way to build research infrastructure within the context of a research
project is to provide training to research assistants at all stages of the research
and to work collaboratively with research partners and staff in preparing and
delivering conference presentations and publications (see Onokerhoraye &
Maticka-Tyndale, 2016, for details on methods of capacity building in one HIV
prevention project). These activities build essential skills and experience. In one
study, the second author (EM-T) and colleagues entered into an agreement
with the local university partner to enhance the research training delivered
in their graduate program, focusing on qualitative research. A two-volume
research methods manual that used examples from research conducted in
sub-Saharan Africa was produced (Maticka-Tyndale & Brouillard-Coyle, 2010a,
2010b). The manual was piloted in courses for junior faculty and graduate
students and incorporated into the graduate curriculum at the local university.
Being Conscious of Positionalities and Consequent Power Relations
Research careers in low- and middle-income countries are often very different
from those in high-income countries, in large part as a result of differences in
access to resources and research infrastructure. Positionality is a sociological
term that refers to the social and political context that influences, and potentially biases, people’s understanding of and outlook on their world (in this
case, research and research careers). Researchers from high-income countries
arrive with resources (especially funding) and infrastructure support from
their home institutions (e.g., libraries, laboratories, computers, software, Internet access, support staff) that typically far exceed those available to their
in-country colleagues. This difference in support creates differences in positionalities and power among members of the research team that cannot be
ignored. It calls into play commitments inherent in cultural humility, including
the need to engage in self-reflection and self-evaluation and to address power
imbalances.
Expressions of gratitude and face-saving are two demeanors often exhibited
by in-country colleagues (and at times by research participants or other stakeholders) in response to the positioning of researchers from high-income
countries as both benefactors (i.e., they have brought funds and resources)
and the parties invested with the final decision-making power (and responsibility) by funding agencies. Although such gratitude may be heartfelt, and
people with attitudes of deference and gratitude may be pleasant to work
with, these demeanors can also interfere with good research, particularly
when they are combined with the need to save face when problems arise.
This interference was experienced in two projects in which mishaps occurred
during the research process (Maticka-Tyndale et al., 2007, 2008). Decisions
made quickly in the field when established protocols either were not or could
not be followed saved face for in-country researchers by camouflaging errors
to ensure designated tasks appeared completed. However, in one project the
modified procedures violated the randomized controlled design of an evaluation, and in the other, they resulted in compromised data (Maticka-Tyndale &
Onokerhoraye, 2012; Maticka-Tyndale et al., 2007).
When the inappropriateness of the procedures was discussed at research
team meetings, two different responses occurred. In one project, the research
team as a whole (from both countries) met to address the problem. The field
conditions that prevented the protocol from being implemented were identified, procedures for officially modifying the design (changing from an experimental to a quasi-experimental design) were engaged, and all team members
participated in brainstorming new procedures for decision making in the
field. These new procedures were used on several occasions when established
protocols encountered problems in real-life implementation.
In the other country, an in-country researcher who had responsibility for
direction and oversight of fieldwork refused to engage in team discussions of
the mishaps. He blamed the flat, “everyone has a say” decision-making structure
that he felt compromised his authority. Although the field team brought the
difficulties in the field to his attention and followed the directions he gave them,
he vociferously blamed his field team for making “bad decisions.” He announced,
“Just as a boat requires a single captain and a crew that follows orders, a research
project requires an expert leader who makes decisions based on his expertise
and staff who will carry out his orders.” The researchers in this project struggled
with issues of positionality and power throughout. The high-income country
researcher never fully embraced cultural humility in accepting the hierarchical
relationships that were entrenched in cultural norms and social structures.
Although some researchers and all research staff participated in regular team
meetings during which plans and problems were discussed, it was clear that the
in-country researchers were uncomfortable with this open, everyone’s-voice-should-be-heard format.
Another dimension of positionalities and power comes into play when
researchers from high-income countries are women working in strongly patriarchal environments. As a senior researcher, the second author (EM-T) held a
prestigious university position and had significant experience as a consultant
to national and international agencies. But her researcher status, experience,
and credentials violated the gender norms in several countries where she
worked. In one such country, as was customary for all married women regardless of education or status, she was regularly referred to as “Mrs.,” whereas her
male colleagues were referred to as “Doctor” or “Professor.” The seating of
researchers at official functions, engagement in discussions of serious issues,
and inquiries about aspects of the research were consistently directed to the
men on the research team. The principal investigator found herself engaging
in what Kandiyoti (1988) referred to as the “patriarchal bargain,” which entails
performing a prescribed, albeit undesirable, gender role in order to obtain a
desired end. To advance the research and, in the earliest stages, to obtain
permission from authorities (government, tribal, and institutional) to conduct
research within their jurisdictions, she coached and mentored male colleagues
in the presentation of the research and sat silently observing, participating in
discussions only when invited.
The workings of gendered positionality and power vary across countries and
settings. In rural settings in one country, the education and expertise of a
woman researcher were a barrier to open conversation. There was nothing in
the cultural or social framework to provide a script or model for communication with a woman in such a position. When a local colleague repositioned her
as a grandmother who was helping raise her grandchild, conversation flowed.
In communities where grandmothers are among the elders and are often tasked
with raising grandchildren who have been orphaned by AIDS, this researcher
now had a position and power that fit in the local social structure. Most importantly, there were cultural scripts for opening communication. These issues,
which have been referenced regarding gender and age, can also apply to
race, skin pigmentation, and ethnic and sexual minority status, among other
variables, which can be highly positional.
A final ethical challenge relating to position and power when conducting
research internationally is who owns the collected data. Many low- and
middle-income countries are developing laws and processes regulating the
ownership of data collected in the country. The removal of data by researchers from high-income countries is viewed as a form of cultural colonialism or imperialism, similar to foreigners stealing artifacts. Data collected in Tanzania, for
example, must be analyzed in Tanzania unless formal approval is provided to
export the data. This poses a logistical challenge for data analysis. Skills among
in-country researchers may not be adequately developed for the required data
analysis. Relocating experienced analysts from the high-income country to
the country for extended periods to train local researchers or conduct data
analysis may be difficult and costly.
Archiving of data in countries with limited facilities and infrastructure may
also be challenging, affecting the use of data in future work (e.g., meta-analyses
or comparative analyses after similar data are collected in other settings). The
requirement of government approval has implications for research budgets,
especially when such approval must be sought years after completion of a
project for comparative or meta-analytic work. Data archiving in repositories
becomes an ethical issue if where the data are archived biases access to the data
(particularly if it limits researchers within the country where the data were
collected). Given the mandate to share data by some U.S. funding agencies
(e.g., National Science Foundation, NIH), researchers should develop a written
data management plan that can be signed by both the U.S.-based and in-country
investigators.
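To make that recommendation concrete, the sketch below lists the kinds of elements such a written plan might enumerate. It is purely illustrative: the field names and values are hypothetical, and any actual plan must follow the host country's data regulations and the funder's data-sharing policy.

```python
# Purely illustrative skeleton of a data management plan; every field is hypothetical.
data_management_plan = {
    "data_owner": "in-country partner institution",
    "storage_and_analysis_location": "in country, unless export is formally approved",
    "export_approval_required": True,
    "authorized_access": ["in-country investigators", "U.S.-based investigators"],
    "archiving": "repository accessible to researchers in the country of collection",
    "signatories": ["U.S.-based principal investigator", "in-country principal investigator"],
}

for field, value in data_management_plan.items():
    print(f"{field}: {value}")
```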
Multinational and global research projects provide unique opportunities
to study the effects of large social factors (e.g., national laws, global health
concerns). But they also bring unique ethical challenges. Some projects may
require multiple IRBs, often with conflicting requirements and priorities.
Researchers can be tempted to cherry-pick IRBs that may be less rigorous
than others. Other issues arise when standardized questionnaires are used.
Symptomatology of some mental health diagnoses may vary across cultures,
as well as subcultures and settings. For example, gay subcultures and identifications may vary substantially (Tran et al., 2018). Nevertheless, large studies
that involve a robust number of countries with arguably different cultural,
political, and legal environments provide an exciting opportunity to examine
the impacts of such environments on health and behavior (Ross et al., 2013).
ETHICAL CHALLENGES IN RESEARCH ON OPPRESSED,
STIGMATIZED, AND ILLEGAL POPULATIONS
In HIV/AIDS research, the term “key populations” has been adopted by the
World Health Organization to denote “people who inject drugs, men who
have sex with men, transgender persons, sex workers and prisoners” (U.S.
Agency for International Development, 2020, para. 1). In some countries, the
term may also denote ethnic or Indigenous minorities who are exploited for
sex, including children and young girls trafficked for sex. These populations
raise unique challenges for research. In several countries, illegal drug use, homosexuality, and cross-dressing are criminal offenses carrying penalties up to and including the death penalty. In other countries, although the behavior
may not be illegal, the subgroup may be so stigmatized that participants face
being beaten, killed, or thrown out of their community if discovered.
One principle used in designing research on illegal and stigmatized groups is,
wherever possible, to consult the participants on what level of anonymity they
need to participate safely. For example, in one of the first qualitative studies of
gay Muslims (Minwalla et al., 2005), safety was discussed with each person
enrolled before participation. Although the researchers offered all participants
the opportunity to review quotes from their interviews prior to publication,
two participants declined, fearing that any transcript sent to them for clearance
could be intercepted by a family member, with violent consequences.
There are additional ethical issues with global data that need to be carefully considered in the digital age (Ross et al., 2018). Hacking or retrieving of
data from sites is always a possibility, and use of only a few data points to
identify an individual is relatively easy with modern computing algorithms.
Ross et al. (2018) described requests from at least one foreign government
for data sets on sexual minorities. They wanted data on child abuse of boys
to justify legislation to punish anyone identifying as homosexual or adults
engaging in consensual homosexual behavior. This possibility creates an obligation for researchers to think through how their data can be used or misused. We warn against data sets on stigmatized populations being placed in
the public domain. They can be misused by governments, journalists, and
others with unethical agendas. Both qualitative and quantitative data are at
risk. Although having data in the public domain is of undoubted scientific benefit, once released, its use lies beyond the primary researcher's control.
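The ease of re-identification noted above can be checked directly: count how many records share the same combination of seemingly innocuous attributes. The sketch below is a minimal, hypothetical illustration (the fields and records are invented) of why a handful of data points can single out an individual.

```python
from collections import Counter

# Hypothetical records with a few quasi-identifiers; no real data are used.
records = [
    {"age_band": "30-34", "district": "A", "occupation": "teacher"},
    {"age_band": "30-34", "district": "A", "occupation": "nurse"},
    {"age_band": "25-29", "district": "B", "occupation": "teacher"},
]

def smallest_group(rows, keys):
    """Size of the smallest group sharing the same values on `keys`.
    A size of 1 means at least one person is uniquely identifiable
    from those attributes alone (k-anonymity of 1)."""
    groups = Counter(tuple(row[k] for k in keys) for row in rows)
    return min(groups.values())

# With all three attributes combined, every record here is unique.
print(smallest_group(records, ["age_band", "district", "occupation"]))  # -> 1
```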
CONCLUSION
Conducting research internationally and globally raises a number of ethical
challenges. Although ethical principles for conducting research are universal,
each culture interprets them through a unique lens. In our experience, three
of the greatest challenges to conducting international research ethically are
to recognize the researcher’s own limitations and expertise, to ensure the
presence of adequate cultural expertise appropriate to the setting, and to
be conscious of the ethical implications of positionalities and the consequent power relations vis-à-vis research partners, staff, and participants. In
addition, researching stigmatized and illegal populations raises unique challenges that have to be addressed.
REFERENCES
American Psychological Association. (2017). Ethical principles of psychologists and code of
conduct (2002, amended effective June 1, 2010, and January 1, 2017). http://www.
apa.org/ethics/code/ethics-code-2017.pdf
Commission on Health Research for Development. (1990). Health research: Essential link
to equity in development. Oxford University Press.
Hook, J. N., Davis, D. E., Owen, J., Worthington, E. L., Jr., & Utsey, S. O. (2013). Cultural
humility: Measuring openness to culturally diverse clients. Journal of Counseling
Psychology, 60(3), 353–366. https://doi.org/10.1037/a0032595
Kandiyoti, D. (1988). Bargaining with patriarchy. Gender & Society, 2(3), 274–290.
https://doi.org/10.1177/089124388002003004
Luchetti, M. (2014). Global health and the 10/90 gap. British Journal of Medical Practitioners, 7(4), Article a731.
Lucy, J. A. (2001). Sapir–Whorf hypothesis. In N. J. Smelser & P. B. Baltes (Eds.), International encyclopedia of the social and behavioral sciences (pp. 13486–13490). Elsevier.
https://doi.org/10.1016/B0-08-043076-7/03042-4
Maticka-Tyndale, E. (2004). Dilemmas for obtaining consent when working with children in high AIDS prevalence regions. National Council on Ethics in Human Research
Communiqué, 12(2), 27–28.
Maticka-Tyndale, E., & Brouillard-Coyle, C. (2010a). Research methods in applied social
welfare research: Vol. 1. University of Benin and University of Windsor.
Maticka-Tyndale, E., & Brouillard-Coyle, C. (2010b). Research methods in applied social
welfare research: Vol. 2. Qualitative data analysis. University of Benin and University of
Windsor.
Maticka-Tyndale, E., & Onokerhoraye, A. (2012). HIV prevention for rural youth: Mobilizing
Nigerian schools and communities. International Development Research Centre.
Maticka-Tyndale, E., Onokerhoraye, A., & Esiet, A. (2008). HIV prevention for rural
youth: Mobilizing Nigerian schools and communities. Global Health Research Initiative,
Teasdale-Corti Team Grants.
Maticka-Tyndale, E., Wildish, J., & Gichuru, M. (2004). HIV/AIDS and education:
Experience in changing behaviour: A Kenyan example. Commonwealth Education
Partnerships, 2004/05, 172–175.
Maticka-Tyndale, E., Wildish, J., & Gichuru, M. (2007). Quasi-experimental evaluation of a national primary school HIV intervention in Kenya. Evaluation and Program
Planning, 30(2), 172–186. https://doi.org/10.1016/j.evalprogplan.2007.01.006
Minwalla, O., Rosser, B. R. S., Feldman, J., & Varga, C. (2005). Identity experience
among progressive gay Muslims in North America: A qualitative study within
Al-Fatiha. Culture, Health & Sexuality, 7(2), 113–128. https://doi.org/10.1080/13691
050412331321294
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. https://www.hhs.gov/ohrp/regulations-and-policy/
belmont-report/read-the-belmont-report/index.html
Onokerhoraye, A., & Maticka-Tyndale, E. (2016). Research capacity-building and dissemination components of the HIV Prevention for Rural Youth Project in Nigeria. In
D. Morrison (Ed.), Capacity building challenges in international projects (pp. 188–214).
International Development Research Centre.
Ross, M. W., Berg, R. C., Schmidt, A. J., Hospers, H. J., Breveglieri, M., Furegato, M.,
Weatherburn, P., & the European MSM Internet Survey (EMIS) Network. (2013).
Internalised homonegativity predicts HIV-associated risk behavior in European
men who have sex with men in a 38-country cross-sectional study: Some public
health implications of homophobia. BMJ Open, 3(2), Article e001928. https://doi.org/
10.1136/bmjopen-2012-001928
Ross, M. W., Essien, E. J., Ekong, E., James, T. M., Amos, C., Ogungbade, G. O., &
Williams, M. L. (2006). The impact of a situationally focused individual human
immunodeficiency virus/sexually transmitted disease risk-reduction intervention
on risk behavior in a 1-year cohort of Nigerian military personnel. Military Medicine,
171(10), 970–975. https://doi.org/10.7205/MILMED.171.10.970
Ross, M. W., Iguchi, M. Y., & Panicker, S. (2018). Ethical aspects of data sharing and
research participant protections. American Psychologist, 73(2), 138–145. https://doi.org/
10.1037/amp0000240
Rosser, B. R. S., & Leshabari, S. (2018–2023). Training for health professionals (Grant No.
1R01 HD092655). National Institute of Child Health and Human Development.
Rosser, B. R. S., Maticka-Tyndale, E., & Mgopa, L. L. S. (2020, July 27, 29). Researching
sex in Africa: Cultural, ethical and legal considerations [Paper presentation]. International
Academy of Sex Research Annual Meeting.
Tervalon, M., & Murray-García, J. (1998). Cultural humility versus cultural competence: A critical distinction in defining physician training outcomes in multicultural
education. Journal of Health Care for the Poor and Underserved, 9(2), 117–125. https://
doi.org/10.1353/hpu.2010.0233
Tran, H., Ross, M. W., Diamond, P. M., Berg, R. C., Weatherburn, P., & Schmidt, A. J.
(2018). Structural validation and multiple group assessment of the Short Internalized
Homonegativity Scale in homosexual and bisexual men in 38 European countries:
Results from the European MSM Internet Survey. Journal of Sex Research, 55(4–5),
617–629. https://doi.org/10.1080/00224499.2017.1380158
U.S. Agency for International Development. (2020). Key populations: Targeted approaches
toward an AIDS-free generation. https://www.usaid.gov/global-health/health-areas/
hiv-and-aids/technical-areas/key-populations
Viergever, R. F., & Hendriks, T. C. C. (2016). The 10 largest public and philanthropic
funders of health research in the world: What they fund and how they distribute
their funds. Health Research Policy and Systems, 14(1), Article 12. https://doi.org/10.1186/
s12961-015-0074-z
Waters, A., & Asbill, L. (2013, August). Reflections on cultural humility: Given the
complexity of multiculturalism, it is beneficial to understand cultural competency as
a process rather than an end product. CYF News. https://www.apa.org/pi/families/
resources/newsletter/2013/08/cultural-humility
Wikipedia. (2020). Ugly American (pejorative). https://en.wikipedia.org/wiki/Ugly_
American_(pejorative)
World Medical Association. (2018). WMA Declaration of Helsinki—Ethical principles
for medical research involving human subjects. https://www.wma.net/policies-post/
wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/
10
The Ethics of Mental Health Intervention Research
Andrew W. Jones, Matthew V. Rudorfer, and Galia D. Siegel
Discussions of mental health intervention research ethics bring to light
important issues for stakeholders involved in this research to consider,
including investigators and study teams, participants and their families, affected
communities, regulatory and oversight bodies, and funders. This chapter highlights mental health intervention research ethics by presenting three in-depth
case studies that identify salient ethical considerations. These case studies,
which are based on recent research projects in the field, were chosen because
they involve a variety of study designs and psychological problem areas that
are currently relevant and raise important and timely ethical dilemmas. The
case presentations include details about each study that set the context for
the ethical questions, discuss applicable principles that come up in relation to
the study research design and implementation decisions, and describe how
each study approached the specified challenges. The case studies provide the
basis for the broader examination of mental health intervention research
ethics in the next section, which includes a brief literature review and discussion of regulatory and administrative considerations.
This chapter was coauthored by employees of the United States government as part of
official duty and is considered to be in the public domain. Any views expressed herein
do not necessarily represent the views of the United States government, and the
authors’ participation in the work is not meant to serve as an official endorsement.
https://doi.org/10.1037/0000258-010
Handbook of Research Ethics in Psychological Science, S. Panicker and B. Stanley (Editors)
In the public domain.
MENTAL HEALTH INTERVENTION RESEARCH DEFINED
Mental health intervention research refers to investigations into the development,
feasibility, efficacy, effectiveness, implementation, and dissemination of mental
health treatment and preventive interventions with human participants. Treatment and prevention activities are aimed at helping individuals and families
cope with a breadth of challenging, and potentially devastating, emotional,
behavioral, and cognitive issues. Mental health conditions of varied severity
and clinical need, and affecting very different populations, are the focus of
psychological research taking place in many different settings, including
schools, workplaces, and houses of worship, as well as in more traditional
health care sites, including hospital emergency rooms and inpatient units,
outpatient mental health clinics, academic and industry research laboratories, community health clinics, and health care networks. In addition, with
the advent of technological capacities and the pervasiveness of personal
electronic devices, mental health studies may be conducted entirely remotely
or tethered minimally to a given setting. As mental health intervention
research includes a wide range of topics, populations, settings, and research
designs, the specific ethical challenges that come up can vary considerably
across studies.
ETHICAL ISSUES IN INTERVENTION RESEARCH
The psychology literature includes a rich history of discussions of the ethics of
intervention research. The topic continues to be a source of discussion and
debate in the field as new perspectives emerge (Emanuel et al., 2000) and
intervention and research designs evolve. Classic psychology intervention studies
that engendered tremendous ethical debates at the time they took place, such
as the Milgram (1963) and Zimbardo (1973) studies showing a disturbing
tendency of ordinary individuals to inflict pain and suffering upon others under
certain experimental conditions, continue to be debated today as new information about the conduct of those studies emerges (Blum, 2018; Romm, 2015).
Nevertheless, the issues raised about deception, informed consent, and risk–
benefit assessments by Milgram’s and Zimbardo’s well-known intervention
studies continue to inform research practice today.
Debates have emerged based on more recently developed research methods
and approaches; some of these debates involve ethical questions raised in
pragmatic effectiveness research (Simon et al., 2016; Sugarman & Califf,
2014) and research interventions implemented in health care systems (Faden
et al., 2013), as well as studies conducted using cloud-based technologies for
intervention delivery (Cosgrove et al., 2017) or for the full conduct of a study
(i.e., enrollment, intervention, follow-up, and data capture and management;
Harriman & Patel, 2014). Two cases described in this chapter, the Suicide Prevention Outreach Trial (SPOT) and MoodSwings 2.0, provide examples of trials
using technology-based platforms, identifying the ethical issues that arise in
the context of remote intervention delivery and heavy use of informatics in
trial implementation.
For intervention research, a number of key ethical dilemmas relate to
tensions that arise because of the distinction between clinical research and
clinical practice (Grady, 2018). The therapeutic misconception can lead participants to assume that they are receiving the best treatment for their specific
needs by taking part in research (Thong et al., 2016). This confusion may be
exacerbated if investigators who also serve as clinicians and therapists at a
research site are not clear about their role in an intervention study. Researchers
(and those providing ethical oversight) face what can be a delicate balance of
implementing and completing a scientifically robust research study as per
protocol while upholding the ethical principles of beneficence, autonomy,
and justice (National Commission for the Protection of Human Subjects of
Biomedical and Behavioral Research, 1979). Some of the classic ethical considerations for intervention studies include the following:
• the potential risks presented by an intervention and by study participation
(Rid et al., 2010)
• determination of ethically permissible comparator arms, including the decision of whether a placebo arm is ethically defensible (Street & Luoma, 2002)
• whether medication tapering or withdrawal is reasonable (DuVal, 2004)
• under what circumstances studies should be stopped early (Mueller et al.,
2007)
• what level of participant safety monitoring is necessary for a study and,
relatedly, what the guidelines are for early withdrawal of a study participant
(National Institute of Mental Health [NIMH], n.d.-a)
• what posttrial access to care study teams should provide to participants
(Cho et al., 2018)
• the intersections between ethical and cultural considerations in research,
such as the implications of disparities in power and access to treatment and
resources between funders or sponsors, investigators, and study participants
(Millum, 2012)
• special issues in mental health intervention research with children, especially in relation to consent and assent and decision-making issues (Drotar,
2008; March, Silva, et al., 2004)
Notably, one of the cases we describe, the Treatment for Adolescents With
Depression Study (TADS), touches on a number of these issues as a study with
an adolescent population, numerous treatment comparator arms (including a
placebo arm), and the need for clear safety monitoring and early withdrawal
guidelines.
REGULATORY AND ADMINISTRATIVE CONSIDERATIONS
Ethical principles guide our laws, policies, and regulations and thus provide
the framework for the administrative and regulatory activities intervention
studies are expected to complete to meet the requirements of funders, regulatory and oversight bodies, and federal guidelines. Thus, ethical intervention
study conduct goes beyond activities that directly impact an individual’s
experience of study participation to include complete and timely management of administrative aspects of a study. Study regulatory and administrative considerations are sometimes considered burdensome and of secondary
importance as they take staff time and study resources. These activities, however, are key to ethical implementation of mental health intervention research
by working to maintain the ethical principles identified in the Belmont Report
(National Commission for the Protection of Human Subjects of Biomedical
and Behavioral Research, 1979), which established that research practices
should do no harm, respect individual autonomy, and promote just access to
study participation and to the benefits of the study data.
Trial Registration and Data Sharing
Prior to study initiation, study teams are expected to complete trial registration
and develop a plan for data sharing and timely dissemination of results. Clinical trial registration helps investigators meet their ethical obligations not only to
participants but also to the public and the research community at large by
supporting the transparency of the clinical trial endeavor in sharing key aspects
of the research protocol, whom to contact about trial participation, and timely
dissemination of study outcomes. To enhance research transparency and
reproducibility, a National Institutes of Health (NIH; 2016a) policy, in effect
since 2017, states that all clinical trials receiving full or partial funding from
NIH must register and disseminate results through ClinicalTrials.gov. ClinicalTrials.gov is a publicly available resource that discloses information about
clinical trials and their results (U.S. National Library of Medicine, 2018).
In addition to trial registration, federal funding agencies such as NIH require
data sharing beyond the publication of study outcomes. An investigator must
include clear language in the informed consent documents about the extent of
data sharing planned for a study.
Regulatory bodies overseeing a clinical trial and the trial funder may
mandate registration on ClinicalTrials.gov. When deciding whether an intervention study requires registration on ClinicalTrials.gov, researchers should
consider the policies of the funder but also of the prospective journals where
the results may be published. Many journals require clinical trials to be registered on a public trials registry as a condition for consideration and, later,
publication. Therefore, it is important for intervention researchers to consider
registration prior to beginning trial recruitment.
Data and Safety Monitoring
Another important ethical consideration in the planning of intervention
research is the level of data and safety monitoring that a study should have.
Data and safety monitoring plans ensure that
• appropriate oversight and documentation are in place to make sure assessments of participant safety are conducted on a schedule that reflects the
risks presented by the participants and by the study intervention;
• data quality and validity are routinely checked;
• integrity of the protocol is maintained throughout study implementation; and
• applicable rules are followed for stopping the study for safety, efficacy, or
futility based on an interim analysis.
The level of data and safety monitoring an intervention study requires (e.g.,
whether a principal investigator [PI] can provide adequate monitoring for a
study, whether independent review is needed) is assessed by determining the
risks participants may incur based on trial participation, as well as how complex
a trial is to run according to the standards of good clinical practice (GCP). In general, the lower the risk to human participants and the less complex the study is
to execute (e.g., number of participants, single site vs. multisite, complexity of
the intervention and study design, concealment of intervention allocation), the
more likely that PI and institutional review board (IRB) monitoring of data
and safety will be sufficient. Independent safety monitors are often used in
single-site intervention studies that involve concealed allocation or riskier
interventions, and data and safety monitoring boards (DSMBs) are recommended for multisite clinical trials and for trials in which there is thought to
be a significant risk to human subjects (NIH, 1998). DSMBs are independent
oversight committees that review data from across study sites to assess study-wide protocol and data integrity as well as participant safety. Members typically include ethicists, persons with lived experience, statisticians, and experts
in both the content and design of the study. Because the three case studies
discussed in this chapter were multisite clinical trials, they all received oversight
from DSMBs.
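The general mapping described above, from a study's risk and complexity to its level of independent oversight, can be summarized as a simple decision rule. The sketch below only illustrates that logic with assumed inputs; it is not NIMH's actual determination procedure, which is governed by the cited guidance (NIH, 1998; NIMH, n.d.-b).

```python
def suggested_monitoring(multisite: bool, significant_risk: bool,
                         single_site_concealed_or_risky: bool) -> str:
    """Rough illustration of risk-based monitoring logic; not official guidance."""
    if multisite or significant_risk:
        return "independent data and safety monitoring board (DSMB)"
    if single_site_concealed_or_risky:
        return "independent safety monitor"
    return "monitoring by the principal investigator and IRB"

# All three case studies in this chapter were multisite clinical trials.
print(suggested_monitoring(multisite=True, significant_risk=False,
                           single_site_concealed_or_risky=False))
```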
NIMH (n.d.-b) uses a risk-based monitoring approach to determine monitoring levels for funded studies. NIMH guidance provides examples of specific
mental health interventional research and the suggested level of monitoring.
Ultimately, it is the responsibility of the PI to protect the safety of participants
and to guarantee that the monitoring plan is appropriately followed.
Training
In addition to IRB mandates to review and ensure that requirements for training
on human subjects research protections are met prior to study approval to
proceed, further training is often required for federally funded research. For
example, as of 2017, NIH (2016b) requires all individuals involved in the implementation of clinical research with human participants to complete GCP training.
GCP training complements human subjects training by providing guidance on
how to conduct administrative aspects of research in a consistent, careful, and
transparent way.
Special Populations
In the TADS case we describe in the next section, adolescents were enrolled,
and the researchers thus had to include proper consideration of conducting
research with children with respect to both appropriate consent and assent procedures and more fundamental ethical questions related to decision making
and power differentials between adults and children. For example, what does a
researcher do when a parent has consented for a child to participate in a study
and the child has initially assented, but the child is hesitant to complete the
interventions? At what point does the PI have an ethical obligation to end the
child’s participation? It is helpful for study teams to discuss stopping thresholds
prior to study initiation to determine when a child should no longer remain in
a study regardless of parental consent or influence.
Mental health intervention research may include participants who are decisionally impaired. Decisionally impaired individuals are those with psychiatric,
developmental, or other disorders that cause diminished capacity to understand the study they are asked to participate in, including the risks and benefits
of the intervention (Gupta & Kharawala, 2012). Although research with these
vulnerable populations is important for the field, assessing capacity is a key
part of ethically acceptable research. Safeguards for these persons include, but
are not limited to, use of standardized assessments to measure decisional
capacity, proxy consent, a witness for the consent process, and waiting periods
to allow participants additional time to consider risks and benefits. A participant’s capacity can change throughout the duration of a study; to meet acceptable ethical standards, capacity should be considered throughout the duration
of the individual’s participation. Another important ethical consideration is
that decisionally impaired individuals may have a legally authorized representative (LAR) to consent on their behalf. Researchers should ensure that any
study using an LAR adheres to local and state laws, along with local institutional policies.
Mental health intervention research that includes populations requiring
additional regulatory and administrative requirements, such as prisoners, can
create confusion for study teams. For example, federally funded research
with prisoners, defined in the U.S. Department of Health and Human Services
regulations as research with any individual involuntarily confined or detained
in a penal institution, requires additional protections and IRB review because
of the potential participants’ reduced capacity to make autonomous decisions
(Protection of Human Subjects, 2018, Subpart C). An example of the importance of these additional protections is that participants who are prisoners may
not have access to proper mental health care. Therefore, a prisoner may feel more inclined, or even compelled, to sign up for a study to receive what they perceive as additional care. This topic is not relevant only to research projects
recruiting prisoners from the outset; study teams also need to consider what to
do if a participant becomes a prisoner while participating in the study.
CASE STUDIES
As in other health care fields, mental health interventions include a range of
intervention approaches, including psychosocial and behavioral interventions,
medication and device-based treatments, and approaches that combine these
modalities. The case studies presented here highlight ethical dilemmas that arise
related to both psychosocial and pharmacological treatments (see Table 10.1).
They describe contemporary mental health research studies that touch on
a wide range of ethical issues, including research with children and adults,
medication and psychosocial interventions, in-person interventions and interventions delivered remotely, primary and adjunctive treatments, efficacy and
effectiveness research, and pragmatic and explanatory clinical trials.
Case Study 1: MoodSwings 2.0
MoodSwings 2.0, an experimental online intervention for individuals with
bipolar disorder (BD), includes self-guided educational materials, various interactive tools, and discussion forums (Lauder et al., 2017). A randomized controlled trial (RCT) was designed to determine whether MoodSwings 2.0 could
improve depressive and manic symptoms in adults diagnosed with BD (Gliddon
et al., 2019). The MoodSwings 2.0 RCT used a parallel group design with a
stepped care approach to identify the “active ingredients” of the three-pronged MoodSwings 2.0 intervention: Arm 1, the control condition, included the discussion forum only; Arm 2 included the discussion forum plus psychoeducational modules; and Arm 3 included these two interventions plus a cognitive
behavior therapy (CBT)–based interactive tool. The study was administered out
of two sites, one in the United States and the other in Australia, although participation in study activities took place entirely online or via telephone contacts.
Adjunctive Online Treatment
MoodSwings 2.0 was developed as an adjunctive treatment. Thus, for the
RCT, participants must have received care for BD at least twice within the
12 months prior to the study (Lauder et al., 2017); this was a central component of the risk–benefit assessment of the MoodSwings 2.0 RCT. The risk level
may have outweighed the benefit in the RCT if MoodSwings 2.0 was the only
source of treatment for the participants, who were from a clinical population
(BD) for whom a remote and limited treatment may not be considered sufficient. The clinical limitations of online interventions must be considered when
primary communication with participants takes place electronically through
TABLE 10.1. Overview of Case Studies

Study characteristic | MoodSwings 2.0 | Suicide Prevention Outreach Trial (SPOT) | Treatment for Adolescents With Depression Study (TADS)
Problem area | Bipolar disorder | Suicide | Adolescent depression
Population | Adults ages 21–65 | Adults ages 18 or over with elevated PHQ-9 scores | Youth ages 12–17
Setting and trial size | Conducted online and via telephone; 304 participants (international) | 4 health care systems; 19,000 participants | 13 academic and community clinics; 439 participants
Research design | Randomized clinical trial | Pragmatic effectiveness trial | Randomized clinical trial
Interventions | All web-based: moderated peer discussion forum; discussion-format psychoeducational modules; interactive CBT-based psychosocial tools | Skills training; care management | Medication alone; placebo + clinical management; psychosocial treatment (CBT) alone; medication + psychosocial treatment (CBT)
Ethical dilemmas | Adjunctive treatment; security and privacy of online interventions; online intervention delivery—protecting participants; monitoring levels; multisite international trial | Informed consent; safety monitoring and potential harm reporting; privacy and data management; usual care | Use of a placebo control group; risks of antidepressant medication in young people; risks of CBT as monotherapy treatment

Note. CBT = cognitive behavior therapy; PHQ-9 = Patient Health Questionnaire–9.
discussion boards, emails, and phone-based clinical interviews; mental health
internet interventions, although they have shown some success in participants with BD when used adjunctively, have not consistently proved useful
for these participants (Cosgrove et al., 2017). The ethical principle of beneficence maintains that researchers are responsible for protecting the overall
well-being of participants while reducing the overall risks. Thus, participants
in the MoodSwings 2.0 RCT were continually reminded over the course of
the study that MoodSwings 2.0 was not a replacement for face-to-face care from
a health care provider (Cosgrove et al., 2017). Because the MoodSwings 2.0
RCT did not attempt to replace in-person psychotherapy or ongoing care but
was designed to be used as a supplemental tool, the risk–benefit assessment
of the study was more balanced.
Security and Privacy of Online Mental Health Interventions
When an online intervention is used, the risks to participants from internet
use must be considered because they are sharing private information about
themselves. The internet is a global enterprise with multifaceted access, so
how does a PI maintain the security and privacy of participants and their information? Study teams can begin to address privacy risks by ensuring that
the study platform and website contain effective security mechanisms. The MoodSwings 2.0 RCT used enterprise-grade database encryption together with Transport Layer Security/Secure Sockets Layer (TLS/SSL) protocols, which helped protect the security and privacy of participant data (Cosgrove et al., 2017).
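How a study platform enforces encryption depends on its hosting stack, but at minimum, connections carrying participant data can be restricted to modern TLS versions. The snippet below is a generic illustration using Python's standard library; it is not a description of the MoodSwings 2.0 implementation.

```python
import ssl

# Generic illustration: require TLS 1.2 or newer for any study data connection.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy SSL/early TLS
context.check_hostname = True                     # verify the server certificate
context.verify_mode = ssl.CERT_REQUIRED

print(context.minimum_version)
```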
In this trial, an added layer of protection was implemented when participants registered online to indicate interest in study participation: The username could not be their own name or any other name they used on social
media. Further, to enhance participant privacy, all study communication used
an internal messaging system rather than a commonly used personal email
system. This messaging approach is frequently used for online interventions
and should be considered when possible (Cosgrove et al., 2017). Furthermore, study discussion forum moderators provided additional assurance of
participant privacy and safety by reviewing each post before it was viewable
on the board to check whether participants posted potentially identifying
information such as name or location.
Online Intervention Delivery—Protection of Participants and
Limitations
Ethical management of participant safety in any trial is of utmost importance
but becomes even more pressing for a study intervention delivered remotely.
As noted earlier, clinical limitations are present for online interventions when
participants are interacting with the treatment intervention primarily electronically and there is limited contact with trained providers. How does a research
study ensure that no harm is done when the intervention is conducted remotely?
The red flag system and discussion forum moderation used in the MoodSwings 2.0 RCT are two examples.
As noted earlier, discussion forum moderators were responsible for reviewing posts before they became viewable on the forum, giving them an opportunity to note any significant mood changes among posting participants. Any
significant mood change received a red flag at the discretion of the moderator
and required further action. Participants were contacted via phone if suicidal
ideation, intent, or plan was referenced in a post. Participants could also receive
a red flag based on their responses to online monitoring of monthly mood
assessments (Montgomery–Åsberg Depression Rating Scale–Self Report
[Svanborg & Asberg, 1994], Altman Self-Rating Mania Scale [Altman et al.,
1997], and Hamilton Rating Scale for Depression [Hamilton, 1960]) or phone
assessments. On the basis of assessment results, an algorithm developed to
pick up clinical distress or deterioration triggered automated messages and
online follow-up assessments a week later to reassess mood, and in cases of
positive responses to a suicide item, participants received an immediate safety
check call (Lauder et al., 2017). If the participant did not answer, a call was
made to the emergency contact.
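The decision logic just described, in which a positive suicide item triggers an immediate call and other signals trigger automated messages plus a follow-up assessment, can be pictured as a small triage function. The sketch below is a loose illustration with invented field names and cutoffs; the actual rules are those specified in the published protocol (Lauder et al., 2017).

```python
# Loose illustration of "red flag" triage; field names and cutoffs are invented.
def red_flag_action(assessment: dict) -> str:
    if assessment.get("suicide_item_positive"):
        return "immediate safety check call; emergency contact if unreachable"
    if assessment.get("depression_score", 0) >= 30 or assessment.get("mania_score", 0) >= 10:
        return "automated message now; online follow-up assessment in one week"
    return "no action"

print(red_flag_action({"suicide_item_positive": True}))
print(red_flag_action({"depression_score": 32, "mania_score": 2}))
```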
Although the red flag system protects participant safety, limitations should
be considered and planned for. What happens if a participant receives a red
flag for a positive response to a suicide item but cannot be contacted? How
should the study team follow up with this participant? How many contact
attempts are ethically sufficient? As there are no established ethical guidelines
to address online follow-up of this nature, these are questions study teams
must discuss among themselves and with regulatory oversight bodies during
the protocol development phase of the research. Tracking how well safety
procedures are working in practice and participant adverse event data over
the course of the study will also help ensure that potential limitations to
safety monitoring of online participants are considered as part of the study’s
ongoing risk–benefit assessment.
Oversight of Multisite and International Research
The MoodSwings 2.0 RCT included administrative sites in the United States
and in Australia. International multisite intervention research presents unique
ethical and regulatory challenges that should be planned for by reviewing the
regulatory requirements at each location prior to study initiation and thinking
through how the study will manage differences in regulatory expectations
across sites. For the MoodSwings 2.0 RCT, an IRB for the U.S. site and a human
research ethics committee (HREC) for the Australian site were needed (Cosgrove
et al., 2017); each oversight body required different ethical and regulatory
review. The site PIs familiar with each country’s review committee were
responsible for making sure the study was implemented following the national
requirements.
Another consideration for the MoodSwings 2.0 RCT was the level of data and safety monitoring appropriate for the trial, specifically whether an independent DSMB should be convened or whether oversight by the PIs and the IRB or HREC would suffice. Risk-based monitoring guidelines suggest that a DSMB would be the appropriate level of data and safety monitoring for this study because it was both multisite
and a clinical trial (NIMH, n.d.-b). Thus, a DSMB was convened that included
ethicists along with content-expert researchers to review and discuss privacy
and security issues and participant safety data (Cosgrove et al., 2017).
Case Study 2: Suicide Prevention Outreach Trial
In 2016, the journal Trials published an article discussing the protocol for the
Suicide Prevention Outreach Trial, a pragmatic effectiveness suicide prevention trial conducted as part of the Health Care Systems Research Collaboratory supported by the National Institutes of Health (Simon et al., 2016). SPOT
was designed to compare the effectiveness of usual care in three health systems
with that of two population-based outreach interventions designed to reduce
suicide risk. The interventions were developed to reduce suicide and suicide
attempts in individuals at heightened risk for self-harm. All eligible individuals
with elevated scores on Item 9 (presence and frequency of thoughts of death
and self-harm) of the Patient Health Questionnaire–9 (PHQ-9; Kroenke et al.,
2010) collected as part of primary care or mental health care evaluations were
enrolled in the trial; those randomized to an intervention condition received
invitations to participate in the trial through health system online secure
messaging. The two interventions, skills training and care management, were
both based on effective treatments and delivered via online secure messaging
systems. The skills training intervention was based on dialectical behavior
therapy (DBT) and provided online training on specific DBT skills (Neacsiu
et al., 2010). The care management intervention built on a depression prevention program implemented at a large HMO (Hampton, 2010) and used participant responses to periodically administered Columbia–Suicide Severity Rating Scale (CSSRS; Posner et al., 2011) screenings to identify a recommended time
frame for follow-up with health system care providers and to encourage participants to see their providers in this time frame.
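Because enrollment was driven by routinely collected PHQ-9 responses rather than recruitment visits, the screening step amounts to a simple rule applied to clinical records. The sketch below is illustrative only: the record format and cutoff are hypothetical, and the actual algorithms ran against each health system's electronic records (Simon et al., 2016).

```python
# Hypothetical visit records; Item 9 of the PHQ-9 is scored 0-3.
visits = [
    {"patient_id": "p1", "phq9_item9": 0},
    {"patient_id": "p2", "phq9_item9": 2},
    {"patient_id": "p3", "phq9_item9": 3},
]

ITEM9_CUTOFF = 2  # illustrative threshold for an "elevated" response

eligible = {v["patient_id"] for v in visits if v["phq9_item9"] >= ITEM9_CUTOFF}
print(sorted(eligible))  # ['p2', 'p3']
```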
Informed Consent
The SPOT trial brings attention to a central ethical dilemma raised in research
that involves human participants: Under what circumstances is it permissible
to involve people in research interventions without any informed consent
process, or with a modified informed consent process? Respect for persons
requires that individuals participate in an informed consent process prior to
participating in research, except in cases when a waiver or modification of
consent is granted by an IRB (McKinney et al., 2015). For the SPOT trial, the
study team made the case that a waiver of consent for randomization into the
study arms and consent modification for participation in the experimental
intervention were justifiable ethically and were necessary to allow the research
to go forward (Simon et al., 2016).
The conditions for obtaining a waiver or modification require that the
research involves no more than minimal risk to participants (U.S. Department of Health and Human Services, n.d.). The SPOT trial was considered to
be minimal risk by the study IRBs because the interventions provided to
participants were determined to be of minimal risk and participation in the
study was not considered likely to increase suicide risk because participants
would continue to receive the usual care provided by their health care system
(Simon et al., 2016). As recently argued in a discussion of the inclusion of at-risk
individuals in suicide prevention research, “as long as usual treatments are
not withheld and joining the trial does not predictably increase suicide risk,
participants are not disadvantaged” (Sisti & Joffe, 2018). The research interventions were delivered in addition to participants’ usual care, and providers
were informed of their patients’ trial participation and updated via secure
messaging as needed. In addition, the specific interventions being tested,
although experimental in the sense that they were adapted for this trial, were
developed on the basis of empirically tested interventions that have been
shown to be effective in reducing suicide risk in other settings and with other
populations. Thus, the trial was deemed to meet a second condition for a
waiver or alteration: An altered consent process would not affect the rights
and welfare of participants.
The study team also had to demonstrate that the research question they
hoped to answer could not be addressed without a waiver of consent for
enrollment and randomization and a modified consent process for intervention participation. The SPOT trial was attempting to address whether either
of two interventions was effective at reducing suicidal behavior in health
systems when the interventions were offered via secure messaging to the care
system population at heightened risk. An effort to assess whether these
outreach and intervention strategies worked to reduce suicide risk relied on
being able to include all those identified as being at risk in the study sample,
not only those who were at risk and consented to study participation, because
learning about people’s willingness to participate in an intervention was a
central part of the research question.
With the minimal risk determination and the waiver and modification of
consent approvals obtained, a modified Zelen design was permissible, and
individuals were randomized to treatment and usual care without providing
consent (Zelen, 1990). Once randomized, those who were assigned to one of
the treatment arms and opted to participate in an experimental intervention
went through a modified consent process. Outcome data would be gathered
on all those randomized, as the trial was taking place in health care systems
that inform consumers that their data may be used for research unless they
actively opt out. Thus, the collection of outcome data from health system
records was permissible without informed consent within certain limits determined by the health system IRBs.
Safety Monitoring and Potential Harm Reporting
Participant safety monitoring and adverse event reporting are standard procedures in intervention research to ensure that participation in experimental
treatments is not increasing participants’ risk of harm. The scientific goal of
answering the research question at hand—in this case, whether the SPOT
outreach and treatment interventions reduced risk of suicidal behavior in the
high-risk health system population—needs to be addressed while maximizing
the benefit to the participants in the study and minimizing potential harms.
For research with individuals at heightened risk of suicide, this ethical
standard often means using study visits and assessments to check on participants in the active intervention and control arms of a trial, and then having
protocols in place for appropriate clinical management as needed. Traditional
safety monitoring and adverse events reporting were not feasible with the
SPOT trial’s design because only one of the treatment arms received additional safety screenings. Because data reflecting increased risk for suicide from
participants in this treatment group could not be compared with data from
other treatment arms, it was impossible to know whether there were significant differences between study arms that signaled potential harm. Other
safety-relevant data in the trial, the suicide-related outcome data, were ascertained from medical and administrative records at periodic intervals during
the study and were available to the study team only with a 6- to 18-month
delay from the time of event. Any signal indicating change in suicide-related
outcomes would be identified after considerable time had elapsed and thus
would not likely identify an early signal of harm. An additional issue was that
the study team, while gathering clinical data and providing an adjunctive treatment to participants in the intervention arms, were not providing clinical care
to the participants. Any situation they learned about, such as an elevated score
on the CSSRS, would get routed to mental health or primary care providers.
Keeping these design parameters in mind, the team had to think through
what safety monitoring and potential harm reporting made sense for their
study based on the data they would have available, when they would have
access to the data, and the limits to the clinical care they could provide. The
study team developed a protocol for contacting participants and providers if
care management participants had an elevated score on the CSSRS. The team
tracked fidelity to this protocol; thus, although the study team was not providing patient clinical management in response to heightened suicide risk, it was monitoring whether the agreed-upon timely communications to patients and providers were being made. Also, in lieu of traditional
adverse event reporting models used in clinical trials, the study team developed a report to compare the number of suicide-related events in each treatment arm of the trial collected up to that point in the study. Because this
report included study outcome data presented by treatment arm, it was reviewed
only by the study’s unblinded statistician and the NIMH DSMB.
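As a purely illustrative sketch, not the SPOT trial's actual reporting code, the kind of cumulative by-arm event summary described above could be produced along the following lines; the record fields, arm labels, and dates are hypothetical.

from collections import Counter
from datetime import date

# Hypothetical suicide-related event records: (participant_id, treatment_arm, event_date)
events = [
    ("p001", "care_management", date(2017, 3, 2)),
    ("p002", "skills_training", date(2017, 4, 11)),
    ("p003", "usual_care", date(2017, 4, 30)),
]

def events_by_arm(records, as_of):
    """Count suicide-related events per treatment arm up to a cutoff date."""
    return Counter(arm for _, arm, when in records if when <= as_of)

# Because counts are broken out by treatment arm, a report like this would be
# reviewed only by the unblinded statistician and the DSMB, not the blinded team.
print(events_by_arm(events, as_of=date(2017, 6, 30)))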
Privacy and Data Management
The SPOT trial, from enrollment to intervention delivery through outcome data collection, relied on automated systems to carry out study procedures. Participants were identified as eligible for the trial using electronic algorithms applied to data from electronic medical records; back-end programming
activities also determined the timing and content of some of the invitations to
participate and of the intervention delivery, as well as data collection from health
system medical and administrative records (Simon et al., 2016). Patient contact
with the trial study team took place via online secure messaging (phone contact
was an option, as needed). Skills training coaches and care managers made
clinical determinations related to participants in the treatment arms; the
clinical data they were responding to was obtained through electronic systems
(e.g., the administration and scoring of the CSSRS).
The trial’s heavy use of complex informatics and automated systems required
that the research team anticipate potential data management and privacy
problems so that procedures could be developed and put in place to protect
participants. The ethical issues presented by a study involving so many people’s data, some of it sensitive protected health information transmitted within and between health systems, required significant advance planning to minimize
the chance of a privacy breach. Additionally, as described previously, the SPOT
trial had numerous automated systems in place to run different aspects of the
trial. Those systems relied on algorithms being correct and also running correctly
in a larger informatics system. To add to the complexity, some of those algorithms were developed and maintained by the study team, whereas others were
developed and maintained by the health systems. Potential harms included the
possibility that problems with back-end enrollment or assessment scoring algorithms would lead to some people being offered services or receiving communications they did not need and others not receiving services or communications
they did need. Ethical conduct of this study involved having informatics checks
and stress tests in place so that when problems did occur in the back-end systems,
whether study specific or health system wide, these issues were identified early
and corrective action plans put in place in a timely manner.
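To make the notion of back-end informatics checks concrete, the following is a minimal, hypothetical sketch of an automated safeguard of the kind described above; the eligibility rule, field names, and threshold are illustrative assumptions, not the SPOT trial's actual algorithm.

# Hypothetical eligibility flag based on a questionnaire item score pulled from
# electronic records, plus a small self-check ("stress test") intended to surface
# obvious scoring problems before any outreach messages are sent.

def eligible_for_outreach(record, threshold=2):
    """Flag a record for study outreach if the item score meets the assumed
    threshold; raise on malformed data rather than failing silently."""
    score = record["item_score"]
    if not 0 <= score <= 3:
        raise ValueError(f"Score out of range for {record['patient_id']}: {score}")
    return score >= threshold

def run_sanity_checks():
    """Known-eligible, known-ineligible, and malformed inputs must all behave
    as expected, or the outreach pipeline halts for human review."""
    assert eligible_for_outreach({"patient_id": "x1", "item_score": 3})
    assert not eligible_for_outreach({"patient_id": "x2", "item_score": 0})
    try:
        eligible_for_outreach({"patient_id": "x3", "item_score": 9})
    except ValueError:
        pass  # malformed data is caught instead of producing a wrong flag
    else:
        raise AssertionError("Out-of-range score was not detected")

if __name__ == "__main__":
    run_sanity_checks()
    print("Eligibility-check sanity tests passed.")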
As discussed above, the SPOT trial’s design involved enrolling individuals in
the trial without their consent. Those randomized to one of the experimental
treatments received online outreach inviting them to participate in an intervention on the basis of their elevated score on the PHQ-9. The study team was
aware that one of the risks to participants presented by this intervention trial
was sending online invitations to participants based on sensitive mental health
information when they had not consented to trial participation and had not
been informed by their providers (who had administered the PHQ-9) that this
invitation might be forthcoming. This design was considered ethically permissible, yet it presented the risk that participants (and providers) might consider
this an unacceptable intrusion on patient privacy (Simon et al., 2016). To
address this risk, in addition to obtaining the needed regulatory approvals for
the design, the study team developed protocols for responding to complaints
for staff to follow when issues of this nature came up from a study participant
or a health system provider. The team also maintained a tracking system for
these complaints to be able to assess whether the frequency and nature of the
complaints ever suggested that the original risk–benefit assessment needed to
be reconsidered in light of participant experiences during the trial.
Case Study 3: Treatment for Adolescents With Depression Study
Despite data showing that as many as 5% of teens in the United States experience moderate to severe depression with a serious impact on functioning and
risk of suicide, there were few evidence-based treatments for adolescent depression when the TADS was launched 2 decades ago. This study investigated the
short- and long-term effectiveness of an antidepressant medication and a
manualized psychotherapy, alone and in combination, in young people with
depression. The 439 participants represented real-world adolescents with
moderate to moderately severe major depression (March, Silva, et al., 2004).
For the acute phase of TADS, participants were randomly assigned to a
12-week trial of one of four interventions: (a) fluoxetine (Prozac) monotherapy,
the first and, at the time, only selective serotonin reuptake inhibitor (SSRI)
antidepressant approved by the U.S. Food and Drug Administration (FDA) for
the treatment of depression in young people; (b) placebo medication monotherapy; (c) a 15-session course of CBT monotherapy; or (d) a combination of
fluoxetine and CBT. The medication-only and placebo groups were treated in a
double-blind fashion, with outcomes evaluated by a blinded independent
rater; the other treatment groups were open label. At the end of 12 weeks, the
blind was broken, and participants who were judged responders or partial
responders to any of the three active treatment groups entered a 6-week
continuation phase, followed, for those who remained improved, by an
18-week maintenance phase, for a total of up to 36 weeks of treatment.
The results of TADS suggest that combination treatment with the SSRI antidepressant fluoxetine and CBT is the safest and most effective intervention
overall for adolescents with moderate to severe major depression (March &
Vitiello, 2009). Fluoxetine alone or in combination with CBT accelerates
recovery from major depression compared with CBT alone. Although the time
lag to response must be considered for each individual adolescent, strengths
of CBT were evident in TADS: Response to psychotherapy without medication
caught up to the response rate of fluoxetine alone several weeks later and
to combination treatment several months later. Additionally, providing CBT
along with pharmacotherapy appeared to lessen the risk of suicidal thinking
and behavior and helped the young patients develop new skills to contend
with difficult negative emotions.
Use of a Placebo Control Group
One of the four treatment groups in TADS received placebo plus “clinical
management,” without either active medication or formal psychotherapy.
Although active medication was delivered blindly to the fluoxetine monotherapy patients, no participants in the three active treatment groups (CBT or
fluoxetine singly or combined) received placebo. Bearing in mind the principles
of balancing potential benefits and risks and providing informed consent, the
use of a placebo control group in clinical research offers a number of methodological advantages, especially the ability to determine the efficacy and safety of
an intervention by factoring out the contribution of the nonspecific components of the treatment. But at the same time, the use of placebo carries with it
a set of ethical challenges, ranging from the need for deception on the part of
the investigators to the more serious risks resulting from the intentional withholding, often for months, of potentially efficacious treatment from ill participants who instead are receiving an intervention known to be biologically inert.
The TADS investigators intentionally limited placebo to the single control
group, even though this “design choice” (March et al., 2007) to avoid a placebo
medication plus CBT arm (or a control psychosocial condition combined with
active fluoxetine) limited the interpretation of the mechanisms of action (i.e.,
experimental therapeutics) at work during the trial. But even this limited
use of placebo in a clinical trial presents ethical challenges for investigators
adhering to the binding precepts of appropriate clinical research methodology
accepted by the field.
The Declaration of Helsinki, amended most recently at the 2013 meeting
of the World Medical Association General Assembly, includes “Ethical Principles for Medical Research Involving Human Subjects” and specifies conditions
under which the use of placebo in clinical trials is considered appropriate.
Most relevant to TADS is the acceptability of the use of placebo “where for
compelling and scientifically sound methodological reasons” the placebo “is
necessary to determine the efficacy or safety of an intervention” (Principle 33,
World Medical Association, 2013). There is one essential caveat: Patients who
receive placebo “will not be subject to additional risks of serious or irreversible
harm as a result of not receiving the best proven intervention” (Principle 33).
Does the use of a placebo control group in the initial acute phase of TADS
meet these ethical requirements, given that those participants presumably
were exposed to the potential harms of nontreatment (or at least delayed
treatment)? Millum and Grady (2013) of the NIH Department of Bioethics
discussed the possible justifications for placebo controls, citing as one example
treatment trials for “conditions whose response to both established treatments and placebo is highly variable” (p. 511). Without referencing TADS specifically, Millum and Grady used depression as an example of a disorder with
“fluctuating symptoms and a high placebo response rate” (p. 511). According
to these authors, the fact that “it is not uncommon to have inconsistent evidence of the efficacy of approved anti-depressants” (p. 511) could justify the
use of placebo rather than active control to better understand the efficacy and
safety of the experimental intervention. They cautioned, however, that
the fact that a placebo control is necessary to demonstrate efficacy is not sufficient
to justify it. Sometimes the risks of forgoing treatment—for example, for a
life-threatening condition—are so high that it would not be ethical to ask participants to accept them. (p. 511)
Ultimately, they concluded that “there is no consensus regarding the level of
risk to which participants may be exposed by forgoing treatment. For example,
is exposing participants to the risk of a depression relapse too great a risk?”
(p. 512).
Millum and Grady’s (2013) admonition that “there are limits to the level
of risk to which participants may be exposed, risks must be minimized, and
risks must be justified by the value of the expected knowledge” (p. 511)
endorses the guiding principles the TADS investigators adhered to in designing
and conducting their multisite study in the treatment of adolescent depression, consistent with earlier principles on the appropriate use of placebo
control in pediatric psychopharmacology issued by the American Academy
of Child and Adolescent Psychiatry (March, Kratochvil, et al., 2004). Importantly, too, the placebo arm was used solely in the initial 12-week acute
treatment stage and was dropped altogether in the subsequent, open active-treatment stages of the trial. Close follow-up of participants, with available “rescue procedures” throughout the trial, ensured that any worsening or
complications of depression in a patient on placebo would be recognized and
dealt with promptly.
The experience of TADS participants who had been randomly assigned to
12 weeks of placebo bore out the investigators’ careful approach. Whereas
response rates were lower in the placebo group than in the active treatment groups at 12 weeks, response rates (> 80%) were similar at Week 36, with
slightly lower rates of remission in the initial-placebo participants (48% vs.
59%) following open active treatment for all participants after the 12-week
mark (Kennard et al., 2009). There were no differences between the groups
in rates of suicidal events, study retention, or symptom worsening. Although
they cautioned that delaying the onset of active treatment of depression in
young people in usual clinical settings is not ethical or clinically appropriate, the TADS investigators concluded that their findings argued against the view that harm was inflicted on participants receiving an acute trial of placebo and
that placebo is an acceptable intervention in randomized controlled trials of
moderate to severe adolescent depression (Kennard et al., 2009).
Risks of Antidepressant Medication in Young People
In the case of young people, the use of medication for depression poses special
challenges. The most robust data supporting the efficacy and safety of antidepressant medication for children and adolescents exist for fluoxetine, which
was the only such FDA-approved pharmacotherapy on the market when
TADS was designed in the 1990s; escitalopram has subsequently gained an
indication for adolescent depression, although the validity of the underlying
data has been questioned (Garland et al., 2016).
Although ethical dilemmas presented in prescribing “experimental” medications were obviated by the use of FDA-approved fluoxetine in TADS, many
remaining questions about its potential benefits and risks made it more challenging for the investigators to predict possible adverse effects and in turn
attempt to prevent them and fully inform patients, their families, and the
IRBs and DSMB providing oversight to ensure subject safety. In the early
2000s, during the course of TADS, particular concern was raised regarding the
risk of suicidal ideation and behavior seemingly provoked by the use of antidepressant medication in children and adolescents. The FDA-mandated black
box warning of increased suicidality associated with antidepressant medication use by people younger than age 24 was not enacted until after the final
TADS participants were randomized in the summer of 2003. Nevertheless,
ethically, the TADS investigators were responsible for taking the risk of suicide
seriously and including appropriate levels of symptom monitoring and rescue
procedures in the protocol, prioritizing the safety of all study participants.
In addition to protecting participant safety, the assessments of suicide risk
conducted in the trial provided data that would benefit the field by contributing to the understanding of suicide in the context of adolescent depression.
Vitiello and the key TADS investigators (2009) reported that 10% of participants experienced a suicidal event during the trial, approximately evenly
divided between suicidal ideation and attempts, with no suicide deaths;
slightly more than half of these patients required emergency department or
inpatient treatment.
Despite the many advances in the treatment of depression across the lifespan
over the past 2 decades, TADS is still often cited as the most definitive RCT of
pharmacotherapy for adolescent depression. Nevertheless, even today concern
has been raised that published results from TADS provide insufficient information about potential or observed adverse effects of fluoxetine, especially
following the acute phase of treatment, beyond the primary focus—then as
now—on suicidality (Westergren et al., 2019). Ongoing interest in the adverse
event reporting in this study underscores one of the central messages of this
chapter: Ethical conduct of intervention research in mental health requires
that informed consent is based on a complete understanding by potential
participants of the possible benefits and risks—both short- and long-term—of
study treatments (conventional and experimental, including placebo or “sham”
approaches), as well as options for alternative available treatments should
participation in the study be declined.
Risks of CBT as Monotherapy Treatment
In the absence of a medication with commonly associated side effects, psychotherapy is generally regarded as a safe intervention, with little in the way of
risk. Although that is usually true, it is not absolute. Individuals with depression may feel worse if they assess their participation in therapy as inadequate (for example, not completing homework assignments integral to CBT) and the treatment consequently does not feel successful or hopeful. Psychosocial therapies
also run risks associated with deferral of biologically based pharmacotherapy
or somatic treatments; should the psychotherapy alone prove insufficient for
addressing a given disorder, the resulting extended time with an ineffectively
treated mental disorder can lead to serious damage in relationships, school or
job performance, or self-care or even to the development or worsening of
existing suicidal ideation or behavior (Klerman, 1990). Prolonged periods of
ineffectively treated serious mental illness might also be associated with less
robust eventual response to treatment. Thus, the informed consent process, with
appropriate discussion of potential benefits and risks to the subject, is as
relevant to the psychotherapy investigator as to those conducting medication
trials (Appelbaum, 1997).
The specific ethical question confronting the TADS investigators was whether
the CBT monotherapy group might be at risk for clinical worsening in the
event of poor therapeutic response. In addition to the then-prevalent concern
that SSRI antidepressant medications might induce suicidality in young
people, the investigators also had to consider whether any treatment of
depression, including CBT alone, would pose a similar risk of clinical worsening
for the young people in TADS. Particularly because psychosocial treatment
might be considered less risky to trial participants and their families, providing
a clear informed consent process regarding the risks of the various treatment
alternatives was essential.
Also, the TADS team developed a comprehensive risk mitigation strategy
that is generalizable to many studies of psychosocial interventions in which
medication is withheld from some or all participants. This strategy includes
consideration of inclusion and exclusion criteria to avoid enrolling individuals,
such as those with a history of bipolar disorder or psychosis, who are most
likely to require pharmacological or somatic treatments for sustained response, as well as frequent monitoring of clinical status to permit early identification
and intervention in the event of serious worsening. As discussed elsewhere in
this chapter and throughout this volume, thoughtful planning of study design
in collaboration with relevant DSMB and IRB oversight can obviate many of
the potential problems and pitfalls of mental health intervention trials.
CONCLUSION
As the three case studies discussed in this chapter highlight, ethical conduct
of mental health intervention research requires consideration of a range of
potential harms to individuals that may result from their participation in the
study, balancing those against potential benefits and thinking through what
protections need to be put in place to minimize risks to participants and what
information must be conveyed to enable potential participants to provide
informed consent for their participation. Unique ethical issues need to be
considered on the basis of the risks and benefits that come from receiving, not
receiving, or delaying a mental health treatment. In TADS, questions about
the use of placebo had to be considered, as well as the ethics of using antidepressant medication treatment with adolescents given the mixed evidence
for a positive risk–benefit ratio in this population. The MoodSwings 2.0 and
SPOT intervention trials raised different ethical issues because these were
both testing adjunctive treatments that were neither the participant’s primary
source of care nor expected to present elevated safety risks to participants.
In both these studies, though, the interventions were delivered remotely, so a
central question was how to assess participant safety and provide appropriate
follow-up when the direct patient contact was both minimal and remote yet
the participants had significant mental health symptomatology. As an intervention study with a modified informed consent process, the SPOT trial had the
added ethical challenge of working through whether it was justifiable to
randomize individuals into a clinical trial without consent and include research
interventions with modified informed consent procedures.
The two online trials also raised significant privacy questions that arise when conducting a clinical trial remotely and, in the case of SPOT, when using electronic medical record data: in particular, how to protect participant data from breaches that would expose protected health information.
As noted in the discussion of regulatory and administrative aspects of clinical
trial management, data and safety monitoring plans are an essential protection because they put procedures in place to support ethical research conduct.
These plans include identifying who will be responsible for conducting regular
reviews of study data and procedures to assess whether patients are being
well cared for and whether data quality and protocol integrity are being maintained. The growing use of large electronic databases, so-called big data, with
their wealth of demographic and clinical information, as well as the increasing
adoption of smartphone apps and other remote sensors to collect real-time
data on treatment use and adherence, clinical response, and physiological and
social functioning, offer opportunities for the enrichment of psychological
intervention research going forward; at the same time, these new unobtrusive
yet intrusive sources of intimate information about research participants raise
a new host of confidentiality challenges (Rudorfer, 2017).
Finally, with adherence to the appropriate guidelines described in this
chapter, clinical trials in the field of mental health are ethically justifiable and
meaningful because they contribute to the knowledge available on how to
help people and communities live healthier and richer lives. Hence, the importance of the administrative activities that support timely data sharing and
distribution of information about clinical trials is paramount. Letting stakeholders know what research has taken place and is ongoing, as well as the
results of completed studies, which are both now required on ClinicalTrials.gov, is an ethical mandate intended to allow people to benefit optimally from
the efforts of individuals and families who participate in mental health intervention research.
REFERENCES
Altman, E. G., Hedeker, D., Peterson, J. L., & Davis, J. M. (1997). The Altman Self-Rating
Mania Scale. Biological Psychiatry, 42(10), 948–955. https://doi.org/10.1016/S0006-3223(96)00548-3
Appelbaum, P. S. (1997). Informed consent to psychotherapy: Recent developments.
Psychiatric Services, 48(4), 445–446. https://doi.org/10.1176/ps.48.4.445
Blum, B. (2018, June 7). The lifespan of a lie. Medium. https://gen.medium.com/
the-lifespan-of-a-lie-d869212b1f62
Cho, H. L., Danis, M., & Grady, C. (2018). Post-trial responsibilities beyond post-trial access. Lancet, 391(10129), 1478–1479. https://doi.org/10.1016/S0140-6736(18)30761-X
Cosgrove, V., Gliddon, E., Berk, L., Grimm, D., Lauder, S., Dodd, S., Berk, M., &
Suppes, T. (2017). Online ethics: Where will the interface of mental health and the
internet lead us? International Journal of Bipolar Disorders, 5, Article 26. https://
doi.org/10.1186/s40345-017-0095-3
Drotar, D. (2008). Ethics of treatment and intervention research with children and
adolescents with behavioral and mental disorders: Recommendations for a future
research agenda. Ethics & Behavior, 18(2–3), 307–313. https://doi.org/10.1080/
10508420802067450
DuVal, G. (2004). Ethics in psychiatric research: Study design issues. Canadian Journal
of Psychiatry, 49(1), 55–59. https://doi.org/10.1177/070674370404900109
Emanuel, E. J., Wendler, D., & Grady, C. (2000). What makes clinical research ethical?
JAMA, 283(20), 2701–2711. https://doi.org/10.1001/jama.283.20.2701
Faden, R. R., Kass, N. E., Goodman, S. N., Pronovost, P., Tunis, S., & Beauchamp, T. L.
(2013). An ethics framework for a learning health care system: A departure from
traditional research ethics and clinical ethics. The Hastings Center Report, 43(Suppl. 1),
S16–S27. https://doi.org/10.1002/hast.134
Garland, E. J., Kutcher, S., Virani, A., & Elbe, D. (2016). Update on the use of SSRIs
and SNRIs with children and adolescents in clinical practice. Journal of the Canadian
Academy of Child and Adolescent Psychiatry, 25(1), 4–10.
Gliddon, E., Cosgrove, V., Berk, L., Lauder, S., Mohebbi, M., Grimm, D., Dodd, S.,
Coulson, C., Raju, K., Suppes, T., & Berk, M. (2019). A randomized controlled trial
of Moodswings 2.0: An internet-based self-management program for bipolar disorder. Bipolar Disorders, 21(1), 28–39. https://doi.org/10.1111/bdi.12669
Grady, C. (2018). Ethical principles in clinical research. In J. I. Gallin, F. P. Ognibene, &
L. L. Johnson (Eds.), Principles and practice of clinical research (4th ed., pp. 19–31).
Academic Press. https://doi.org/10.1016/B978-0-12-849905-4.00002-2
Gupta, U. C., & Kharawala, S. (2012). Informed consent in psychiatry clinical research:
A conceptual review of issues, challenges, and recommendations. Perspectives in
Clinical Research, 3(1), 8–15. https://doi.org/10.4103/2229-3485.92301
Hamilton, M. (1960). A rating scale for depression. Journal of Neurology, Neurosurgery &
Psychiatry, 23(1), 56–61. https://doi.org/10.1136/jnnp.23.1.56
Hampton, T. (2010). Depression care effort brings dramatic drop in large HMO population’s suicide rate. JAMA, 303(19), 1903–1905. https://doi.org/10.1001/jama.2010.595
Harriman, S., & Patel, J. (2014). The ethics and editorial challenges of internet-based
research. BMC Medicine, 12, Article 124. https://doi.org/10.1186/s12916-014-0124-3
Kennard, B. D., Silva, S. G., Tonev, S., Rohde, P., Hughes, J. L., Vitiello, B., Kratochvil,
C. J., Curry, J. F., Emslie, G. J., Reinecke, M., & March, J. (2009). Remission and
recovery in the Treatment for Adolescents With Depression Study (TADS): Acute and
long-term outcomes. Journal of the American Academy of Child & Adolescent Psychiatry,
48(2), 186–195. https://doi.org/10.1097/CHI.0b013e31819176f9
Klerman, G. L. (1990, April). The psychiatric patient’s right to effective treatment:
Implications of Osheroff v. Chestnut Lodge. The American Journal of Psychiatry, 147(4),
409–418. https://doi.org/10.1176/ajp.147.4.409
Kroenke, K., Spitzer, R. L., Williams, J. B. W., & Löwe, B. (2010). The Patient Health
Questionnaire somatic, anxiety, and depressive symptom scales: A systematic review.
General Hospital Psychiatry, 32(4), 345–359. https://doi.org/10.1016/j.genhosppsych.
2010.03.006
Lauder, S., Cosgrove, V. E., Gliddon, E., Grimm, D., Dodd, S., Berk, L., Castle, D.,
Suppes, T. S., & Berk, M. (2017). Progressing MoodSwings: The upgrade and evaluation of MoodSwings 2.0: An online intervention for bipolar disorder. Contemporary Clinical Trials, 56, 18–24. https://doi.org/10.1016/j.cct.2017.02.008
March, J., Kratochvil, C., Clarke, G., Beardslee, W., Derivan, A., Emslie, G., Green, E. P.,
Heiligenstein, J., Hinshaw, S., Hoagwood, K., Jensen, P., Lavori, P., Leonard, H.,
McNulty, J., Michaels, M. A., Mossholder, A., Osher, T., Petti, T., Prentice, E., . . .
Wells, K. (2004). AACAP 2002 research forum: Placebo and alternatives to placebo
in randomized controlled trials in pediatric psychopharmacology. Journal of the
American Academy of Child & Adolescent Psychiatry, 43(8), 1046–1056. https://doi.org/
10.1097/01.chi.0000129606.83206.77
March, J., Silva, S., Petrycki, S., Curry, J., Wells, K., Fairbank, J., Burns, B., Domino, M.,
McNulty, S., Vitiello, B., Severe, J., & the Treatment for Adolescents With Depression Study (TADS) Team. (2004). Fluoxetine, cognitive–behavioral therapy, and
their combination for adolescents with depression: Treatment for Adolescents With
Depression Study (TADS) randomized controlled trial. JAMA, 292(7), 807–820.
https://doi.org/10.1001/jama.292.7.807
March, J. S., Silva, S., Petrycki, S., Curry, J., Wells, K., Fairbank, J., Burns, B., Domino, M.,
McNulty, S., Vitiello, B., & Severe, J. (2007). The Treatment for Adolescents With
Depression Study (TADS): Long-term effectiveness and safety outcomes. Archives of
General Psychiatry, 64(10), 1132–1143. https://doi.org/10.1001/archpsyc.64.10.1132
March, J. S., & Vitiello, B. (2009). Clinical messages from the Treatment for Adolescents
With Depression Study (TADS). The American Journal of Psychiatry, 166(10), 1118–1123.
https://doi.org/10.1176/appi.ajp.2009.08101606
McKinney, R. E., Jr., Beskow, L. M., Ford, D. E., Lantos, J. D., McCall, J., Patrick-Lake, B.,
Pletcher, M. J., Rath, B., Schmidt, H., & Weinfurt, K. (2015). Use of altered informed
consent in pragmatic clinical research. Clinical Trials, 12(5), 494–502. https://doi.org/
10.1177/1740774515597688
Milgram, S. (1963). Behavioral study of obedience. The Journal of Abnormal and Social
Psychology, 67(4), 371–378. https://doi.org/10.1037/h0040525
Millum, J. (2012). Introduction: Case studies in the ethics of mental health research.
The Journal of Nervous and Mental Disease, 200(3), 230–235. https://doi.org/10.1097/
NMD.0b013e318247cb5b
Millum, J., & Grady, C. (2013). The ethics of placebo-controlled trials: Methodological
justifications. Contemporary Clinical Trials, 36(2), 510–514. https://doi.org/10.1016/
j.cct.2013.09.003
Mueller, P. S., Montori, V. M., Bassler, D., Koenig, B. A., & Guyatt, G. H. (2007).
Ethical issues in stopping randomized trials early because of apparent benefit. Annals
of Internal Medicine, 146(12), 878–881. https://doi.org/10.7326/0003-4819-146-12-200706190-00009
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. https://www.hhs.gov/ohrp/regulations-and-policy/
belmont-report/read-the-belmont-report/index.html
National Institute of Mental Health. (n.d.-a). Conducting research with participants at elevated risk for suicide: Considerations for researchers. https://www.nimh.nih.gov/funding/
clinical-research/conducting-research-with-participants-at-elevated-risk-for-suicide-considerations-for-researchers.shtml#guidance
National Institute of Mental Health. (n.d.-b). NIMH guidance on risk-based monitoring.
https://www.nimh.nih.gov/funding/clinical-research/nimh-guidance-on-risk-based-monitoring.shtml
National Institutes of Health. (1998, June 10). NIH policy for data and safety monitoring.
https://grants.nih.gov/grants/guide/notice-files/not98-084.html
National Institutes of Health. (2016a, September 16). NIH policy on the dissemination of
NIH-funded clinical trial information. https://grants.nih.gov/grants/guide/notice-files/
NOT-OD-16-149.html
National Institutes of Health. (2016b, September 16). Policy on good clinical practice training
for NIH awardees involved in NIH-funded clinical trials. https://grants.nih.gov/grants/
guide/notice-files/not-od-16-148.html
Neacsiu, A. D., Rizvi, S. L., & Linehan, M. M. (2010). Dialectical behavior therapy
skills use as a mediator and outcome of treatment for borderline personality disorder. Behaviour Research and Therapy, 48(9), 832–839. https://doi.org/10.1016/
j.brat.2010.05.017
Posner, K., Brown, G. K., Stanley, B., Brent, D. A., Yershova, K. V., Oquendo, M. A.,
Currier, G. W., Melvin, G. A., Greenhill, L., Shen, S., & Mann, J. J. (2011). The
Columbia–Suicide Severity Rating Scale: Initial validity and internal consistency
findings from three multisite studies with adolescents and adults. The American Journal
of Psychiatry, 168(12), 1266–1277. https://doi.org/10.1176/appi.ajp.2011.10111704
Protection of Human Subjects, 45 C.F.R. Part 46 (2018). https://www.hhs.gov/ohrp/
sites/default/files/ohrp/policy/ohrpregulations.pdf
Rid, A., Emanuel, E. J., & Wendler, D. (2010). Evaluating the risks of clinical research.
JAMA, 304(13), 1472–1479. https://doi.org/10.1001/jama.2010.1414
Romm, C. (2015, January 28). Rethinking one of psychology’s most infamous experiments. The Atlantic. https://www.theatlantic.com/health/archive/2015/01/rethinking-one-of-psychologys-most-infamous-experiments/384913/
Rudorfer, M. V. (2017). Psychopharmacology in the age of “big data”: The promises and
limitations of electronic prescription records. CNS Drugs, 31(5), 417–419. https://
doi.org/10.1007/s40263-017-0419-y
Simon, G. E., Beck, A., Rossom, R., Richards, J., Kirlin, B., King, D., Shulman, L.,
Ludman, E. J., Penfold, R., Shortreed, S. M., & Whiteside, U. (2016). Population-based
outreach versus care as usual to prevent suicide attempt: Study protocol for a
randomized controlled trial. Trials, 17, Article 452. https://doi.org/10.1186/s13063-016-1566-z
Sisti, D. A., & Joffe, S. (2018, October). Implications of zero suicide for suicide prevention
research. JAMA, 320(16), 1633–1634. https://doi.org/10.1001/jama.2018.13083
Street, L. L., & Luoma, J. B. (2002). Control groups in psychosocial intervention
research: Ethical and methodological issues. Ethics & Behavior, 12(1), 1–30. https://
doi.org/10.1207/S15327019EB1201_1
Sugarman, J., & Califf, R. M. (2014). Ethics and regulatory complexities for pragmatic
clinical trials. JAMA, 311(23), 2381–2382. https://doi.org/10.1001/jama.2014.4164
Svanborg, P., & Asberg, M. (1994). A new self-rating scale for depression and anxiety
states based on the Comprehensive Psychopathological Rating Scale. Acta Psychiatrica
Scandinavica, 89(1), 21–28. https://doi.org/10.1111/j.1600-0447.1994.tb01480.x
Thong, I. S., Foo, M. Y., Sum, M. Y., Capps, B., Lee, T.-S., Ho, C., & Sim, K. (2016). Therapeutic misconception in psychiatry research: A systematic review. Clinical Psychopharmacology and Neuroscience, 14(1), 17–25. https://doi.org/10.9758/cpn.2016.14.1.17
U.S. Department of Health and Human Services. (n.d.). Informed consent FAQs.
https://www.hhs.gov/ohrp/regulations-and-policy/guidance/faq/informed-consent/
index.html
U.S. National Library of Medicine. (2018). ClinicalTrials.gov background. https://clinicaltrials.gov/ct2/about-site/background
Vitiello, B., Silva, S. G., Rohde, P., Kratochvil, C. J., Kennard, B. D., Reinecke, M. A.,
Mayes, T. L., Posner, K., May, D. E., & March, J. S. (2009). Suicidal events in the
Treatment for Adolescents With Depression Study (TADS). The Journal of Clinical
Psychiatry, 70(5), 741–747. https://doi.org/10.4088/JCP.08m04607
Westergren, T., Narum, S., & Klemp, M. (2019). Critical appraisal of adverse effects
reporting in the ‘Treatment for Adolescents With Depression Study (TADS).’ BMJ
Open, 9(3), Article e026089. https://doi.org/10.1136/bmjopen-2018-026089
World Medical Association. (2013, October). WMA Declaration of Helsinki—Ethical principles
for medical research involving human subjects. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/
Zelen, M. (1990). Randomized consent designs for clinical trials: An update. Statistics in
Medicine, 9(6), 645–656. https://doi.org/10.1002/sim.4780090611
Zimbardo, P. G. (1973). On the ethics of intervention in human psychological research:
With special reference to the Stanford Prison Experiment. Cognition, 2(2), 243–256.
https://doi.org/10.1016/0010-0277(72)90014-5
11
Ethical Issues in Neurobiological Research
Young Cho and Barbara Stanley

Neurobiological research has become a key area of investigation for psychologists interested in understanding human behavior. The U.S. BRAIN
Initiative, a national initiative to advance neuroscience research, was launched
in 2013, and in the 5 years following its conception, the National Institutes of
Health provided significant funding for neurobiological research projects
(Ramos et al., 2019). The term neurobiological research is broad and encompasses
neurotechnologies, pharmacology, basic neuroscience research, biobehavioral
studies, and epigenetics. Much of this research lies at the intersection of bench
and bedside, where laboratory research and clinical medicine overlap in ways
that, at times, can be ethically challenging.
Despite these ethical challenges, neurobiological research is also of great interest, importance, and relevance (Illes et al., 2010), and it is therefore crucial to address and resolve the ethical concerns it raises. The U.S. BRAIN Initiative
and other international neuroscience research initiatives have recognized both
the promise and the challenges and have worked to ensure that the rapid
growth in the field is accompanied by ethical awareness and practice (Ramos
et al., 2019; Rommelfanger et al., 2018, 2019). Key ethical issues in the field
include challenges to participant autonomy and agency, potential harmful
uses of neurotechnology, and the need for engagement with the public to
increase knowledge of neurobiological research.
Neurobiological research presents all of the ethical issues that are germane
to psychological research in general, including balancing of risks and potential benefits, conflicts of interests, and privacy concerns. This chapter focuses
on three issues of particular relevance to neurobiological research: (a) issues
related to informed consent and potential vulnerability of research participants,
(b) incidental findings obtained during neurotechnology research, and (c) issues
related to so-called noninvasive brain stimulation.
INFORMED CONSENT AND POTENTIAL VULNERABILITY OF RESEARCH PARTICIPANTS
Although issues related to informed consent must be addressed in all psychological research, some consent-related concerns are more complex and nuanced in neurobiological research. First, explaining the
goals, methods, risks of harm, and potential benefits can be difficult because of
the complexity of the research. Neurobiological research can be particularly
difficult to describe in a comprehensible way to nonexperts, and thus it can
present challenges to obtaining informed consent. The following is an example
from a consent form describing the procedures for an imaging study:
If you agree to be in this study, images of your brain, called “scans,” will be taken
using a magnetic resonance imaging (MRI) scanner. The MRI scanner is a machine
that enables us to acquire images of the brain by changing magnetic fields inside
your body. Anatomical images (images that show us the structures in your
brain) will be obtained during the first 10 minutes or the last 10 minutes of
the scan. In addition, we will collect functional, diffusion, or perfusion scans.
Functional and perfusion images are scans that show us how the brain works by
illustrating what the brain is actively doing at a particular time. Diffusion scans
show us how different parts of the brain are connected and do not require you
to perform any tasks inside the scanner.
Although this form is written in plain language, the content is still fairly complex and may not be easily understood. If the participant population includes individuals with psychiatric symptoms such as delusions, the description may cause significant distress to participants who fear that magnetic fields could alter their bodies.
This brings us to the second issue with consent to neurobiological research.
Although neurobiological research is often conducted with healthy volunteers,
vulnerable populations with potentially diminished cognitive capacity, such as
individuals with dementia, schizophrenia, or posttraumatic stress disorder, are
also of interest. Similarly, neurobiological research with suicidal individuals
poses unique challenges that can be addressed with careful planning and safety
procedures (Ballard et al., 2020).
Researchers working with vulnerable populations must take care to consider whether risks differentially affect a certain population and whether some
populations that are suffering greatly may disregard risks for even a small
chance of benefit. Such possibilities do not mean that the proposed research should not be conducted. Instead, they put the onus on the investigator to ensure
that these factors have been given due consideration when planning the
research. Two important examples of vulnerable populations are children and
treatment-resistant participants.
Researchers conducting clinical brain stimulation studies with children
should weigh the benefits of a potentially effective treatment against the presently
unknown risks to the developing brain, possible long-term side effects, and the
inherent breach of the individual child’s autonomy. There is evidence that
results found in adult participants do not translate to children (Cohen Kadosh
et al., 2012), and although some positive translational evidence has been found,
few longitudinal studies that clearly delineate the long-term effects of brain
stimulation have been conducted (Krishnan et al., 2015).
When studying treatment-resistant participants using noninvasive brain
stimulation, the issue can be more complex. If no treatment has worked in the
past, participants may be more willing to enroll in any experimental procedure
out of a sense of desperation. Therefore, it is important to consider the impact
of their treatment resistance on their willingness to participate in intervention
trials even if the benefit is completely unknown. Ensuring that consent is truly
informed and that the risks and potential for benefit are understood is critical
under these circumstances.
INCIDENTAL FINDINGS
An incidental finding (IF) is a potentially pathological finding that is identified
during a research procedure. Many investigators using brain imaging technologies encounter IFs. This section focuses on IFs in brain imaging, but other
research procedures, such as diagnostic neurocognitive tests or routine screening
blood tests, can also produce them. Brain imaging IFs are potentially significant
clinical findings discovered in the individual scans of research participants that
are unrelated to the original purpose of the study. For example, change in gray
matter volume found in a scan from a longitudinal study of brain morphological
differences in Alzheimer’s disease is not an IF, but evidence of a pineal cyst in
the same scan would be.
Prevalence reports of brain imaging IFs vary from 9.5% to 40% in healthy
control participants (Bos et al., 2016; Seki et al., 2010), with 2% to 3% requiring
follow-up, although figures as high as 19.5% have been reported (Shoemaker
et al., 2011). Two factors associated with higher IF prevalence are scan resolution and participant age (Morris et al., 2009). Differences in the criteria defining
IFs may also account for variability in these estimations (Presidential Commission for the Study of Bioethical Issues, 2013). Although clinically significant IFs
are relatively rare, their discovery in the course of imaging research is common;
in a survey of brain imaging researchers, 82% reported encountering IFs (Illes
et al., 2004).
Most participants view the prospect of IF disclosure favorably; one study
found that more than 90% of research participants surveyed wanted IFs to be
disclosed if discovered (Kirschen et al., 2006). When IFs were discovered and
reported in another study, over 90% of participants were grateful to have
received the IF report (Shoemaker et al., 2011).
There is a wide consensus among institutional review boards and investigators that disclosing IFs to participants is ethically required (Cole et al.,
2015). There is also a consensus that findings with no established clinical
validity need not be disclosed. Examples of IFs that have no clinical value
include nonmorphological data from functional imaging (Illes et al., 2006)
and unexpected variation in brain structures the clinical implications of which
are unknown (Wolf et al., 2008).
Although there is an ethical imperative to actively acknowledge and disclose
IFs, there is also broad consensus that researchers do not have a duty to conduct follow-up scans in response to discovering IFs, nor do they have a clinical
obligation after referral unless the researcher is also the participant’s clinician
(Illes et al., 2006; Phillips et al., 2015; Presidential Commission for the Study
of Bioethical Issues, 2013; Seki et al., 2010). And although investigators are
responsible for anticipating, planning for, and communicating IFs in a clinically sensitive manner, their institutions can help the process by developing
standardized practices for doing so. Institution-wide policies and programs can
reduce this burden and facilitate consistent treatment across studies.
Disclosure of brain imaging IFs has not always been standard practice or seen
as an ethical imperative. In the early 2000s, surveys of imaging researchers
uncovered highly variable protocols for handling IFs and often no set procedure
at all. Large variability existed among research groups in the personnel operating
scanners and interpreting scans and in which, if any, IFs were being communicated to participants (Illes, 2006); one survey found that 53% of researchers had
a formal protocol for managing IFs (Illes et al., 2004). This uneven history invites
a critical look at the variables involved in the ethical discussion around IF
disclosure: the prevalence of false-positive IFs, the psychological and financial
burdens of follow-up care, and the problem of therapeutic misconception.
One of the main arguments for limiting IF disclosure is the high rate of false
positives. False positives are findings that initially appear to be significant but,
upon follow-up, require no further medical intervention. One study found
IFs in 36.4% of participants; however, only 2.7% of those participants were
referred for clinical follow-up (Seki et al., 2010). Another study found that
when participants were referred for follow-up, the number requiring further
care was even smaller, and of those referred, the majority either underwent
wait-and-see policies or were immediately informed that no further follow-up
was necessary (Bos et al., 2016). There are also reports of participants who
were told that follow-up was unnecessary but who sought further evaluation
regardless (Phillips et al., 2015).
The high rate of false-positive IFs has been used to argue that informing
participants of IFs may lead to more harm than good; for example, if participants do not understand the difference between clinically relevant findings
and false positives, they may seek unnecessary follow-up that results in financial burden and needless worry. Nevertheless, disclosing IFs has the potential
to be lifesaving and offers the opportunity to educate participants about how
the clinical significance of IFs is determined and the importance of disclosure
even if the findings upon follow-up are negative.
Disclosure of IFs is related to the ethical principle of autonomy and is often
viewed by research participants as a basic right. Because participant autonomy
is dependent on a relationship of reciprocity between participant and researcher,
disclosure of clinically relevant findings is warranted. In order to accomplish
this, all research scans are typically reviewed by a qualified neuroradiologist
because relying on a system that allows optional radiology consultations or
scan evaluations by nonspecialists invites uneven treatment and can result in
missed IFs. Protocols that do not require qualified staff to interpret brain scans
can also give participants a false sense of investigator abilities (Cole et al., 2015).
In order to account for this, investigators should develop a plan for managing
IFs and include it in the study protocol and consent forms (Illes, 2006; Presidential Commission for the Study of Bioethical Issues, 2013). The protocol
should comprehensively cover all steps, including scan review, IF identification,
referral guidelines, and IF communication.
Additionally, in order to respect participants’ wishes and their autonomy,
research participants may be given the option to opt out of receiving information on IFs. When this decision is presented at the time of consent, participants
should have a clear and realistic understanding of what IFs are, the risks and
benefits of notification, and the protocol for identification and referral. However,
the danger with this option is the possibility that important IFs may not be conveyed to participants who have opted out of receiving results. Some have advocated for superseding patient wishes in these cases (Illes, 2006), and others have
suggested reconfirming with the participant their decision to decline disclosure
of IFs (Wolf et al., 2008). Regardless of the solution, this procedure should not
be ad hoc and should be disclosed during the consent process.
In addition to communicating the plan for IF disclosure, the consent discussion should address IF risks (financial burden, insurance eligibility, and anxiety)
and benefits (current and future health implications). One risk of IF disclosure
is that it can induce anxiety in participants. Because of the high prevalence of
false-positive IFs, anxiety is a risk that frequently comes without accompanying health benefits. Many investigators are concerned about causing needless
worry (Cole et al., 2015; Wolf et al., 2008). However, participants who have
received IFs report low anxiety, and high-anxiety outliers often are more anxious
about health issues in general (Phillips et al., 2015). Thus, the issue of inducing
anxiety by disclosing IFs may not be very significant. There is evidence that
lower participant anxiety is associated with greater health literacy (Phillips et al.,
2015). This issue underscores the importance of clear and effective communication at the time of consent and during IF disclosure. Communicating the
nature of the IF, prevalence, implications, and the rationale for referral status
can help improve health literacy, and in turn, may reduce anxiety.
The financial cost of follow-up is another potential risk of IF disclosure,
although for many participants, the risk may be strongly outweighed by the
benefits of potentially effective medical intervention. When research participants are asked to weigh the risks and benefits of IF disclosure, they place
significantly more value on decisional autonomy and the potential health
benefits of acting on IFs than on the burden of financial cost (Cole et al.,
2015). However, when working with populations for whom monetary factors
have a greater effect on quality of life, such as low-income participants and
those who are uninsured (Berchick et al., 2019), special care should be taken
to consider the burden of follow-up. Participants may require assistance with
locating affordable follow-up care and navigating insurance difficulties (Illes
et al., 2006). Understanding and accounting for the differing financial situations
of participants during disclosure and referral relates to the principle of justice;
failure to consider the monetary impact of an IF on participants of different
socioeconomic situations contributes to health inequality.
The concept of therapeutic misconception is, in part, relevant here. Therapeutic misconception occurs when participants believe that the intervention
or procedure they are receiving is of proven therapeutic value and do not
recognize that, in fact, the research is designed to evaluate the efficacy of the
intervention or procedure. Misconceptions about the therapeutic value of the
scans themselves may be a critical point of tension between participant expectations and the realities of research. Participants may not be aware of the
differences between research and clinical scans. There is evidence that many
participants expect medical issues to be identified in a scan if they exist, even
when they are informed that their scans will not be reviewed by a specialist
(Kirschen et al., 2006). The review of scans not only is expected by many
but may be viewed by most participants as a main benefit of research participation (Phillips et al., 2015). This issue is especially relevant to potentially
vulnerable individuals whose condition is being studied. For example, an
individual with depression may think that the research MRI may provide
individualized information about the nature of their depression.
Two potential communication gaps between researcher and participant are
illustrated by these examples: Macroscopically, participant expectations of the
abilities of imaging technology may be overly high, and the roles of researcher
and clinician may be conflated; on the level of the individual participant, cursory
explanations of the technology involved may be inadequate to correct distorted
expectations. Investigators should attempt to identify therapeutic misconceptions that the participant has and work to manage their expectations. An
important part of this effort is clarifying the distinctions between research and
medical scans and explaining that a scan with no IFs does not guarantee the
absence of medical problems. Failing to account for participant expectations—
even when unrealistic—can lead to miscommunications with consequences
for participant well-being and trust. Inversely, establishing open dialogue
about expectations early on is an opportunity to strengthen the participant–
researcher alliance.
NONINVASIVE BRAIN STIMULATION
Brain stimulation research provides a second example demonstrating ethical
issues in neurobiological research. External, or “noninvasive,” brain stimulation
is increasingly used in clinical and healthy populations as a novel treatment
and potential cognitive enhancement. Two of the most popular methods are
transcranial magnetic stimulation (TMS) and transcranial electrical stimulation (TES). TMS uses magnetic fields to modulate cortical excitability, whereas
TES uses a weak electrical current. Both techniques are applied externally,
through the scalp and skull, as opposed to deep-brain stimulation, which
requires surgical implantation of the stimulating device.
Two subtypes of TMS and TES have shown higher potential for clinical use:
rTMS, or repetitive TMS, which has been shown to produce longer lasting
effects than TMS (Polanía et al., 2018), and tDCS, or transcranial direct current
stimulation, a relatively inexpensive and portable type of TES that is unregulated
by the U.S. Food and Drug Administration and is commercially available. rTMS
has been found to have some efficacy in ameliorating symptoms of Parkinson’s
disease (Elahi et al., 2009), tinnitus (Soleimani et al., 2016), depression (Martin
et al., 2017), and posttraumatic stress disorder (Yan et al., 2017), whereas
tDCS has been found to improve dysphagia (Yang et al., 2015), age-related
cognitive decline (Summers et al., 2016), and depression (Meron et al., 2015;
Nitsche et al., 2009; Shiozawa et al., 2014) and to enhance working memory
in healthy populations (Hill et al., 2016; Mancuso et al., 2016). Use of tDCS in
research raises ethical issues similar to those of other external brain stimulation techniques and, in addition, touches on issues specific to enhancement research and lay do-it-yourself (DIY) use. tDCS therefore provides a useful case for examining the ethical challenges of brain stimulation research. Specific ethical issues in tDCS research
include informed consent, research with vulnerable populations, enhancement
research, and the relationship between the scientific establishment and the
DIY tDCS community.
Methodological problems in the field of tDCS research may inadvertently
compromise participants’ understanding of the research in which they are
being asked to participate. For participants to give truly informed consent, the
risks and benefits should be contextualized by the strength of the evidence. This
contextualization can be challenging when an intervention is in the early stages
of development. Although there is evidence that tDCS and similar brain stimulation techniques are effective in treating some neurological and psychiatric
conditions, researchers should make efforts to describe the limits of the current
state of knowledge.
Another methodological issue with tDCS is that its neuromodulatory
effects have shown wide variability across participants (Horvath et al., 2014;
Polanía et al., 2018). One explanation is that neuroanatomical variability
can lead to inaccurate electrode placements, resulting in the stimulation of
unintended brain regions. MRI-guided neuronavigation can be used to inform
electrode placements, but many published tDCS studies have not used this
technique (Horvath et al., 2014). Another explanation, and one with more
complicated implications for the future of tDCS, is that individual differences
in physiology and psychology may affect the efficacy of the treatment (Horvath
et al., 2014). For example, it has been shown that skull thickness can affect the
current delivered (Opitz et al., 2015, as cited by Thair et al., 2017). Additionally,
motor and cognitive interference may significantly impact tDCS efficacy (Horvath
et al., 2014; Thair et al., 2017; Wurzman et al., 2016). Because the specific tasks
completed and stimulation environment will likely vary depending on the
purpose of the study, all aspects of the stimulation procedure should be
thoroughly documented and reported.
Because of the lack of longitudinal studies of tDCS, long-term side effects
of the treatment are unknown. The documented short-term side effects are
generally mild and transient and include tingling, itchiness, and a burning
sensation (Bikson et al., 2016). Although the acute side effects of tDCS have
been found to be mild, this finding does not exclude the possibility of more
insidious effects. As tDCS has been shown to alter targeted functions, it follows
that the stimulation can affect other functions if applied improperly. In addition,
external stimulation methods have poorer spatial resolution than techniques
such as deep-brain stimulation (Polanía et al., 2018); therefore, the areas
surrounding the electrode site may also receive stimulation.
Together, the current and historical methodological problems with tDCS
and other external brain stimulation techniques qualify the evidence of their
potential benefits. In order for participants to give informed consent, the
known effects should be contextualized by the many unknowns.
An area related to noninvasive brain stimulation research is enhancement
research, which investigates the effectiveness of interventions that use medical
and biological technology to improve performance, appearance, or capability
beyond “normal” (Mehlman & Berg, 2008). This type of research stands in
contrast to health improvement research, which has as its goal the prevention,
treatment, or mitigation of the effects of a disorder or condition. A single
intervention can be both treatment for a condition and enhancement. For
example, medications that improve attention in individuals with attention-deficit disorder (ADD) can also be used as enhancements to increase focus
in individuals without ADD while studying or doing a task that requires very
concentrated attention.
The distinction between enhancement and health improvement research is
not a bright line. Many of the ethical issues in clinical tDCS research are also
prominent in enhancement research. For example, the methodological problems in existing tDCS research have bearing on informed consent in both
clinical and enhancement research; however, the balance of risks and potential
benefits is inherently different. From a regulatory perspective, the justifiable
risk–benefit ratio in clinical research is significantly more lenient (Hyman,
2011). Currently, any harm incurred in tDCS enhancement research can be
disproportionate to the possible benefit because the documented enhancements
have been very modest. More broadly, the prospect of more effective future
enhancement techniques should raise the following questions: For what
purpose is this enhancement intended, and for whose benefit?
In 2018, the International BRAIN Initiative, a global organization of neuroscience researchers, released a set of “neuroethics questions to guide ethical
research,” among which was the issue of enhancement neurotechnologies.
They asked “what constitutes public benefit and public harm” in the matter of
translational neurotech (Rommelfanger et al., 2018, p. 32). Although not
an immediate reality, should enhancement technology eventually improve
cognition in a lasting and significant way, it will be necessary to consider the
prospect of commodification of this technology (direct-to-consumer tDCS is
already available) and the resulting potential for disparities in access.
Data on current attitudes about enhancement are limited, but researchers
and the wider public alike show reservations. A survey of tDCS researchers
found that they had greater ethical concerns for enhancement use than for
clinical treatment, with the most agreement around ethical concerns of safety,
inaccurate or distorted communication of results, and weak methodology
(Riggall et al., 2015). As for use with children, there is evidence of public
disapproval, and among proponents, the majority became opposed when
informed of potential side effects (Wagner et al., 2018).
Although support of tDCS for enhancement appears limited, there exists a
small population of DIY tDCS users—people who use homemade or commercially available tDCS devices. For DIY tDCS users, the popular media is the
most common intermediary between user and researcher, often resulting
in sensationalized or inaccurate depictions of risks, benefits, and the device
itself—for example, the dangerously simplistic description of tDCS as little more
than a 9-volt battery (Fox, 2011; Murphy, 2013). Although researchers cannot
control journalistic misrepresentation, failing to thoroughly discuss methodological problems, inflating the external validity of positive findings, and minimizing null results give substance to these inaccurate accounts and may be
the foundation they rest upon (Walsh, 2013).
An important example of the power that investigators hold over public
perceptions of tDCS is its designation as a noninvasive form of stimulation.
Because tDCS and other forms of noninvasive stimulation must pass through
the skull and tissue and their effects are likely not confined to just the brain
region of interest, some have argued that the term “noninvasive” is inaccurate
and diminishes public perceptions of the technique’s risks (Cabrera et al.,
2014; Davis & van Koningsbruggen, 2013; Fitz & Reiner, 2015; Walsh, 2013).
CONCLUSION
This chapter has addressed some of the particular ethical issues faced when
conducting neurobiological research. Ethical issues in neurobiological research
include all those faced in other areas of psychological research. However,
additional ethical concerns must be addressed because of the complex nature
of this research. The dearth of information on the immediate and long-term
effects of novel neurobiological interventions in particular has serious ramifications for assessing the risks of harm in research participation and, consequently, for obtaining informed consent from research participants.
REFERENCES
Ballard, E. D., Waldman, L., Yarrington, J. S., Gerlus, N., Newman, L. E., Lee, L.,
Sparks, M., Liberty, V., Pao, M., Park, L., & Zarate, C. A., Jr. (2020). Neurobiological
research with suicidal participants: A framework for investigators. General Hospital
Psychiatry, 62, 43–48. https://doi.org/10.1016/j.genhosppsych.2019.11.007
Berchick, E. R., Barnett, J. C., & Upton, R. D. (2019). Health insurance coverage in the
United States: 2018 (Report No. P60-267RV). Current Population Reports. https://
www.census.gov/content/dam/Census/library/publications/2019/demo/p60-267.pdf
Bikson, M., Grossman, P., Thomas, C., Zannou, A. L., Jiang, J., Adnan, T., Mourdoukoutas,
A. P., Kronberg, G., Truong, D., Boggio, P., Brunoni, A. R., Charvet, L., Fregni, F.,
Fritsch, B., Gillick, B., Hamilton, R. H., Hampstead, B. M., Jankord, R., Kirton, A., . . .
Woods, A. J. (2016). Safety of transcranial direct current stimulation: Evidence based
update 2016. Brain Stimulation, 9(5), 641–661. https://doi.org/10.1016/j.brs.2016.06.004
Bos, D., Poels, M. M., Adams, H. H., Akoudad, S., Cremers, L. G., Zonneveld, H. I.,
Hoogendam, Y. Y., Verhaaren, B. F., Verlinden, V. J., Verbruggen, J. G., Peymani, A.,
Hofman, A., Krestin, G. P., Vincent, A. J., Feelders, R. A., Koudstaal, P. J., van der
Lugt, A., Ikram, M. A., & Vernooij, M. W. (2016). Prevalence, clinical management,
and natural course of incidental findings on brain MR images: The population-based Rotterdam Scan Study. Radiology, 281(2), 507–515. https://doi.org/10.1148/
radiol.2016160218
Cabrera, L. Y., Evans, E. L., & Hamilton, R. H. (2014). Ethics of the electrified mind:
Defining issues and perspectives on the principled use of brain stimulation in medical
research and clinical care. Brain Topography, 27(1), 33–45. https://doi.org/10.1007/
s10548-013-0296-8
Cohen Kadosh, R., Levy, N., O’Shea, J., Shea, N., & Savulescu, J. (2012). The neuroethics of non-invasive brain stimulation. Current Biology, 22(4), R108–R111. https://
doi.org/10.1016/j.cub.2012.01.013
Cole, C., Petree, L. E., Phillips, J. P., Shoemaker, J. M., Holdsworth, M., & Helitzer, D. L.
(2015). ‘Ethical responsibility’ or ‘a whole can of worms’: Differences in opinion on
incidental finding review and disclosure in neuroimaging research from focus group
discussions with participants, parents, IRB members, investigators, physicians and
community members. Journal of Medical Ethics, 41(10), 841–847. https://doi.org/
10.1136/medethics-2014-102552
Davis, N. J., & van Koningsbruggen, M. G. (2013). “Non-invasive” brain stimulation
is not non-invasive. Frontiers in Systems Neuroscience, 7, Article 76. https://doi.org/
10.3389/fnsys.2013.00076
Elahi, B., Elahi, B., & Chen, R. (2009). Effect of transcranial magnetic stimulation on
Parkinson motor function—Systematic review of controlled clinical trials. Movement
Disorders, 24(3), 357–363. https://doi.org/10.1002/mds.22364
Fitz, N. S., & Reiner, P. B. (2015). The challenge of crafting policy for do-it-yourself
brain stimulation. Journal of Medical Ethics, 41(5), 410–412. https://doi.org/10.1136/
medethics-2013-101458
Fox, D. (2011). Neuroscience: Brain buzz. Nature, 472, 156–159. https://doi.org/
10.1038/472156a
Hill, A. T., Fitzgerald, P. B., & Hoy, K. E. (2016). Effects of anodal transcranial direct
current stimulation on working memory: A systematic review and meta-analysis of
findings from healthy and neuropsychiatric populations. Brain Stimulation, 9(2),
197–208. https://doi.org/10.1016/j.brs.2015.10.006
Horvath, J. C., Carter, O., & Forte, J. D. (2014). Transcranial direct current stimulation:
Five important issues we aren’t discussing (but probably should be). Frontiers in
Systems Neuroscience, 8, Article 2. https://doi.org/10.3389/fnsys.2014.00002
Hyman, S. E. (2011). Cognitive enhancement: Promises and perils. Neuron, 69(4),
595–598. https://doi.org/10.1016/j.neuron.2011.02.012
Illes, J. (2006). ‘Pandora’s box’ of incidental findings in brain imaging research. Nature
Clinical Practice Neurology, 2, 60–61. https://doi.org/10.1038/ncpneuro0119
Illes, J., Kirschen, M. P., Edwards, E., Stanford, L. R., Bandettini, P., Cho, M. K., Ford,
P. J., Glover, G. H., Kulynych, J., Macklin, R., Michael, D. B., Wolf, S. M., & the
Working Group on Incidental Findings in Brain Imaging Research. (2006). Ethics:
Incidental findings in brain imaging research. Science, 311(5762), 783–784. https://
doi.org/10.1126/science.1124665
Illes, J., Kirschen, M. P., Karetsky, K., Kelly, M., Saha, A., Desmond, J. E., Raffin, T. A.,
Glover, G. H., & Atlas, S. W. (2004). Discovery and disclosure of incidental findings
in neuroimaging research. Journal of Magnetic Resonance Imaging, 20(5), 743–747.
https://doi.org/10.1002/jmri.20180
Illes, J., Moser, M. A., McCormick, J. B., Racine, E., Blakeslee, S., Caplan, A., Hayden,
E. C., Ingram, J., Lohwater, T., McKnight, P., Nicholson, C., Phillips, A., Sauvé, K. D.,
Snell, E., & Weiss, S. (2010). Neurotalk: Improving the communication of neuroscience
research. Nature Reviews Neuroscience, 11(1), 61–69. https://doi.org/10.1038/nrn2773
Kirschen, M. P., Jaworska, A., & Illes, J. (2006). Subjects’ expectations in neuroimaging
research. Journal of Magnetic Resonance Imaging, 23(2), 205–209. https://doi.org/
10.1002/jmri.20499
Krishnan, C., Santos, L., Peterson, M. D., & Ehinger, M. (2015). Safety of noninvasive
brain stimulation in children and adolescents. Brain Stimulation, 8(1), 76–87. https://
doi.org/10.1016/j.brs.2014.10.012
Mancuso, L. E., Ilieva, I. P., Hamilton, R. H., & Farah, M. J. (2016). Does transcranial
direct current stimulation improve healthy working memory? A meta-analytic
review. Journal of Cognitive Neuroscience, 28(8), 1063–1089. https://doi.org/10.1162/
jocn_a_00956
Martin, D. M., McClintock, S. M., Forster, J. J., Lo, T. Y., & Loo, C. K. (2017). Cognitive
enhancing effects of rTMS administered to the prefrontal cortex in patients with
depression: A systematic review and meta-analysis of individual task effects. Depression and Anxiety, 34(11), 1029–1039. https://doi.org/10.1002/da.22658
Mehlman, M. J., & Berg, J. W. (2008). Human subjects protections in biomedical
enhancement research: Assessing risk and benefit and obtaining informed consent.
The Journal of Law, Medicine & Ethics, 36(3), 546–549. https://doi.org/10.1111/j.1748-720x.2008.303.x
Meron, D., Hedger, N., Garner, M., & Baldwin, D. S. (2015). Transcranial direct current
stimulation (tDCS) in the treatment of depression: Systematic review and meta-analysis of efficacy and tolerability. Neuroscience and Biobehavioral Reviews, 57, 46–62.
https://doi.org/10.1016/j.neubiorev.2015.07.012
Morris, Z., Whiteley, W. N., Longstreth, W. T., Jr., Weber, F., Lee, Y.-C., Tsushima, Y.,
Alphs, H., Ladd, S. C., Warlow, C., Wardlaw, J. M., & Al-Shahi Salman, R. (2009).
Incidental findings on brain magnetic resonance imaging: Systematic review and
meta-analysis. BMJ, 339, Article b3016. https://doi.org/10.1136/bmj.b3016
Murphy, K. (2013, October 28). Jump-starter kits for the mind. The New York Times.
https://www.nytimes.com/2013/10/29/science/jump-starter-kits-for-the-mind.html
Nitsche, M. A., Boggio, P. S., Fregni, F., & Pascual-Leone, A. (2009). Treatment of
depression with transcranial direct current stimulation (tDCS): A review. Experimental
Neurology, 219(1), 14–19. https://doi.org/10.1016/j.expneurol.2009.03.038
Phillips, K. A., Ladabaum, U., Pletcher, M. J., Marshall, D. A., & Douglas, M. P. (2015).
Key emerging themes for assessing the cost-effectiveness of reporting incidental
findings. Genetics in Medicine, 17(4), 314–315. https://doi.org/10.1038/gim.2015.13
Polanía, R., Nitsche, M. A., & Ruff, C. C. (2018). Studying and modifying brain function
with non-invasive brain stimulation. Nature Neuroscience, 21(2), 174–187. https://
doi.org/10.1038/s41593-017-0054-4
Presidential Commission for the Study of Bioethical Issues. (2013). Anticipate and communicate: Ethical management of incidental and secondary findings in the clinical, research and
direct-to-consumer contexts. https://bioethicsarchive.georgetown.edu/pcsbi/sites/default/
files/FINALAnticipateCommunicate_PCSBI_0.pdf
Ramos, K. M., Grady, C., Greely, H. T., Chiong, W., Eberwine, J., Farahany, N. A.,
Johnson, L. S. M., Hyman, B. T., Hyman, S. E., Rommelfanger, K. S., Serrano, E. E.,
Churchill, J. D., Gordon, J. A., & Koroshetz, W. J. (2019). The NIH BRAIN Initiative: Integrating neuroethics and neuroscience. Neuron, 101(3), 394–398. https://
doi.org/10.1016/j.neuron.2019.01.024
Riggall, K., Forlini, C., Carter, A., Hall, W., Weier, M., Partridge, B., & Meinzer, M.
(2015). Researchers’ perspectives on scientific and ethical issues with transcranial
direct current stimulation: An international survey. Scientific Reports, 5(1), Article
10618. https://doi.org/10.1038/srep10618
Rommelfanger, K. S., Jeong, S. J., Ema, A., Fukushi, T., Kasai, K., Ramos, K. M., Salles, A.,
Singh, I., Amadio, J., Bi, G.-Q., Boshears, P. F., Carter, A., Devor, A., Doya, K.,
Garden, H., Illes, J., Johnson, L. S. M., Jorgenson, L., Jun, B.-O., . . . Global Neuroethics Summit Delegates. (2018). Neuroethics questions to guide ethical research in
the International BRAIN Initiatives. Neuron, 100(1), 19–36. https://doi.org/10.1016/
j.neuron.2018.09.021
Rommelfanger, K. S., Jeong, S. J., Montojo, C., & Zirlinger, M. (2019). Neuroethics: Think
global. Neuron, 101(3), 363–364. https://doi.org/10.1016/j.neuron.2019.01.041
Seki, A., Uchiyama, H., Fukushi, T., Sakura, O., & Tatsuya, K.; Japan Children’s Study
Group. (2010). Incidental findings of brain magnetic resonance imaging study
in a pediatric cohort in Japan and recommendation for a model management
protocol. Journal of Epidemiology, 20(Suppl. 2), S498–S504. https://doi.org/10.2188/
jea.je20090196
Shiozawa, P., Fregni, F., Benseñor, I. M., Lotufo, P. A., Berlim, M. T., Daskalakis, J. Z.,
Cordeiro, Q., & Brunoni, A. R. (2014). Transcranial direct current stimulation for
major depression: An updated systematic review and meta-analysis. The International Journal of Neuropsychopharmacology, 17(9), 1443–1452. https://doi.org/10.1017/
S1461145714000418
Shoemaker, J. M., Holdsworth, M. T., Aine, C., Calhoun, V. D., de la Garza, R., Feldstein
Ewing, S. W., Hayek, R., Mayer, A. R., Kiehl, K. A., Petree, L. E., Sanjuan, P., Scott, A.,
Stephen, J., & Phillips, J. P. (2011). A practical approach to incidental findings
in neuroimaging research. Neurology, 77(24), 2123–2127. https://doi.org/10.1212/
WNL.0b013e31823d7687
Soleimani, R., Jalali, M. M., & Hasandokht, T. (2016). Therapeutic impact of repetitive
transcranial magnetic stimulation (rTMS) on tinnitus: A systematic review and
meta-analysis. European Archives of Oto-Rhino-Laryngology, 273(7), 1663–1675. https://
doi.org/10.1007/s00405-015-3642-5
Summers, J. J., Kang, N., & Cauraugh, J. H. (2016). Does transcranial direct current
stimulation enhance cognitive and motor functions in the ageing brain? A systematic
review and meta-analysis. Ageing Research Reviews, 25, 42–54. https://doi.org/10.1016/
j.arr.2015.11.004
Thair, H., Holloway, A. L., Newport, R., & Smith, A. D. (2017). Transcranial direct current
stimulation (tDCS): A beginner’s guide for design and implementation. Frontiers in
Neuroscience, 11, Article 641. https://doi.org/10.3389/fnins.2017.00641
Wagner, K., Maslen, H., Oakley, J., & Savulescu, J. (2018). Would you be willing to zap
your child’s brain? Public perspectives on parental responsibilities and the ethics of
enhancing children with transcranial direct current stimulation. AJOB Empirical Bioethics, 9(1), 29–38. https://doi.org/10.1080/23294515.2018.1424268
Walsh, V. Q. (2013). Ethics and social risks in brain stimulation. Brain Stimulation, 6(5),
715–717. https://doi.org/10.1016/j.brs.2013.08.001
Wolf, S. M., Lawrenz, F. P., Nelson, C. A., Kahn, J. P., Cho, M. K., Clayton, E. W.,
Fletcher, J. G., Georgieff, M. K., Hammerschmidt, D., Hudson, K., Illes, J., Kapur, V.,
Keane, M. A., Koenig, B. A., LeRoy, B. S., McFarland, E. G., Paradise, J., Parker,
L. S., Terry, S. F., . . . Wilfond, B. S. (2008). Managing incidental findings in human
subjects research: Analysis and recommendations. The Journal of Law, Medicine &
Ethics, 36(2), 219–248. https://doi.org/10.1111/j.1748-720X.2008.00266.x
Wurzman, R., Hamilton, R. H., Pascual-Leone, A., & Fox, M. D. (2016). An open letter
concerning do-it-yourself users of transcranial direct current stimulation. Annals
of Neurology, 80(1), 1–4. https://doi.org/10.1002/ana.24689
Yan, T., Xie, Q., Zheng, Z., Zou, K., & Wang, L. (2017). Different frequency repetitive
transcranial magnetic stimulation (rTMS) for posttraumatic stress disorder (PTSD):
A systematic review and meta-analysis. Journal of Psychiatric Research, 89, 125–135.
https://doi.org/10.1016/j.jpsychires.2017.02.021
Yang, S. N., Pyun, S.-B., Kim, H. J., Ahn, H. S., & Rhyu, B. J. (2015). Effectiveness of
non-invasive brain stimulation in dysphagia subsequent to stroke: A systemic
review and meta-analysis. Dysphagia, 30(4), 383–391. https://doi.org/10.1007/
s00455-015-9619-0
12
Research Using the Internet
and Mobile Technologies
Timothy J. Trull, Ashley C. Helle, and Sarah A. Griffin
Advances in the internet and mobile technologies have provided
rich and rapidly evolving platforms by which researchers can collect data
in more intensive ways to investigate complex questions. For example, one
major benefit of internet research is the ability to reach a large sample of
participants from a larger pool than previously possible (e.g., recruitment on
the internet, collection of survey data through crowdsourcing techniques).
Further, improvements in technologies and devices allow researchers to assess
constructs through multiple methodologies (e.g., assessing blood alcohol
concentration through transdermal monitoring, breath analysis, or self-report
of drinks) and more intensively over time (e.g., multiple assessments per day
using ambulatory assessment [AA]; Trull & Ebner-Priemer, 2013). This ever-widening avenue of possibilities has a number of clear benefits yet also raises
important ethical issues (Carpenter et al., 2016).
PRIMARY ETHICAL CONSIDERATIONS
Research studies using electronic technologies often are associated with many
of the same ethical challenges as traditional studies conducted in the lab,
including ensuring the privacy and security of participants’ responses, maintaining confidentiality, minimizing harm, and respecting autonomy. Beyond
these, additional considerations arise when internet- or electronically mediated
data collection modifies components of the study process, such as the traditional informed consent process. Furthermore, limiting (or even removing) the
in-person interaction of the participant with the researcher during periods of
data collection influences the experimental control and may impact the validity
of the data in some cases. These issues are discussed and practical recommendations outlined later in this chapter.
At the outset, it is important to note that there are a number of relevant
and more comprehensive reviews of the ethical considerations of and recommendations for internet-mediated research (e.g., Hoerger & Currell, 2012;
Kraut et al., 2004). Guidelines exist in many instances and should be consulted
(e.g., the American Psychological Association [APA; 2017] Ethical Principles of
Psychologists and Code of Conduct [hereinafter referred to as “the APA Ethics
Code”). Although the overarching ethical principles and representative codes
are applicable to internet-based and ambulatory assessment research, many
codes do not directly speak to implementation of these principles for internet-based or AA research studies. For instance, the APA Ethics Code does not
include a section specific to internet-mediated research. There have been
reports on applications of the APA Ethics Code to online research, such as a
2002 report by the Board of Scientific Affairs (APA, 2002), which includes
an overview and recommendations for online research. A similar application,
although not an explicit code, is a published report, Ethics Guidelines for
Internet-Mediated Research, by the British Psychological Society (2017), highlighting key ethical principles and recommendations, and applications of the
principles within the context of Canadian law have been outlined as well
(Gupta, 2017). Finally, it is important to note that the published codes and
guidelines may vary by discipline (e.g., psychological research).
Many of the existing reviews and recommendations focus on internet-based
research and give less attention to ambulatory assessment methodology,
a burgeoning area that is related yet distinct in many regards. This chapter
broadly covers the ethical considerations specific to internet-based research
(e.g., survey research administered online) and AA methodology with recruited
and consented participants. This type of research stands in contrast to internet-based research that relies on publicly available information (e.g., public social
media posts) or observational data collection (e.g., posts in chat rooms);
discussion of the ethical considerations of these types of data collection can
be found elsewhere (see Hoerger & Currell, 2012; Nosek et al., 2002). This
chapter reviews the ethical principles and considerations specific to conducting research using the internet and broader advancements in technology,
specifically under the umbrella of established ethical guidelines, the Belmont
Report (National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research [hereinafter referred to as the “National
Commission”], 1979), and the federal policy Protection of Human Subjects
(2018, hereinafter referred to as the “Common Rule”). We discuss these in the
context of protecting human subjects and maintaining the principles of scientific integrity (i.e., validity of the data).
PROTECTION OF HUMAN SUBJECTS: THE BELMONT REPORT
The Belmont Report describes three principles regarding the protection of
human research subjects: respect for persons, beneficence, and justice.
Respect for Persons
As described in Chapter 1, the principle of respect for persons includes the tenet
of autonomy, which encompasses informed consent, an issue central to internet-based research. Essential components of the informed consent process
include providing sufficient information, in a manner that is easy to comprehend, with adequate time to review. In-person studies allow the researcher to administer a consent process that invites participants to ask questions about the study and to attend to nuances that internet-based studies cannot capture. For
instance, the face-to-face nature of traditional lab studies allows the researcher
to be alerted to indications of participant misunderstanding of information and
increases researcher confidence regarding study inclusion criteria, namely
restricting participation to those who can provide consent (e.g., protecting
children and those with diminished autonomy). This ability is clearly limited in
studies conducted solely online, prompting questions about informed consent
unique to these sorts of studies.
Voluntariness, another aspect of autonomy, requires that participants have
the option of withdrawing from participation at any time (Mathy et al., 2003).
This component should be included within informed consent and, in traditional
research studies, is often reviewed verbally. Internet-based studies may present
unique challenges in informing participants (at the time of informed consent) about their right to withdraw at any time and about the logistics of the
withdrawal process. For example, is exiting the browser enough to indicate
withdrawal? Will compensation still be provided when warranted? How will
the debriefing process occur (Nosek et al., 2002)? Furthermore, the option
to decline to respond on any individual item should be incorporated where
appropriate (e.g., allowing the participant to advance in the survey despite
declining to answer).
In ambulatory assessment studies, the tenet of autonomy may more closely
resemble that in traditional research studies; often the participant attends an
initial laboratory session for enrollment and training, which includes the
consent process. However, withdrawing from the study may present challenges if the participant wishes to withdraw midstudy, at a time that they are
still “in the field.” Further, withdrawing from specific aspects of AA data collection may be even more challenging for participants, specifically if the study
involves passive data monitoring (e.g., psychophysiological data collection,
GPS monitoring). The responsibility lies with the researcher to discuss this
process at the outset of the study and ensure the participant and researchers
know how to fully discontinue data collection if the participant indicates they
wish to cease participation. An example of this was highlighted in the Roy
(2017) study, in which participants’ GPS monitoring might have implicated
them in potential criminal activity; therefore, participants were not penalized
for turning off their phone for a period of time.
Beneficence
Central to the principle of beneficence are the maximization of benefits (i.e., beneficence) and the minimization of harm (i.e., nonmaleficence), achieved through careful assessment of the nature and scope of the risks and
benefits within each study. A systematic review of these factors is critical to
any study. Additional considerations are encountered with internet-based
research and studies involving mobile technology. Confidentiality of both
participants’ inclusion in the study and their individual responses is central
to this principle. With internet-based studies, this principle may be applied
by examining the Health Insurance Portability and Accountability Act of 1996
(HIPAA) compliance and encryption techniques provided by the platforms
and servers used to collect the data and anonymize responses (Jennings, 2014).
Furthermore, for AA research, although the same careful considerations need
to be made, questions regarding the devices themselves (e.g., does the smartphone itself identify the participant?) must be recognized and addressed at the
study design level. This is a particularly relevant consideration for clinical
intervention studies.
Regarding nonmaleficence, study procedures and items posed to participants may introduce risk. Are there questions about illegal behaviors that could
be connected to direct identifiers? Is there risk of this information being
compromised or intercepted by another party? Additionally, researchers asking
about topics such as suicidal thoughts or behaviors need to carefully consider
their study protocol and contingency plan to minimize participant harm. Certificates of Confidentiality (issued by the U.S. Department of Health and Human
Services and the National Institutes of Health) protect against compelled disclosure of participant-identifying information in legal proceedings and may minimize risk to participants when data collected have the potential to implicate
participants or be misused (Roy, 2017); however, such certificates are not
universally available or applicable. These challenges are largely similar to
those for traditional in-lab studies; however, additional risk is introduced with
digital data collection. The researcher must mitigate the risk of accidental or
unintended data transmission, as well as participant reactivity and safety if
adverse or unanticipated events occur.
Justice
Internet-based research and advances in technology present both benefits and
unique challenges to the principle of justice. Central is the selection of participants, specifically ensuring that the burdens and benefits of research are shared fairly across individuals and groups.
Internet-based research may allow for a greater reach by garnering data from a
wider and more diverse group of participants at a lower overall cost compared
with traditional studies and ultimately by reaching people who may otherwise
not have access to participation. However, requiring access to a computer and
the internet for participation may restrict the sample to communities or persons
who have easy access to and the financial means to acquire certain technologies, which can thereby systematically select the sample for reasons other than
what is being assessed in the study. Ethical considerations vary on the basis of
the nature of the sample for an internet-based study (e.g., college sample, general public; Hoerger & Currell, 2012). For example, within a college sample, the
students who are participating and the researchers who are administering the
study may reside in the same department; therefore, confidentiality may be a
particularly salient ethical concern. However, in the case of a general, public,
online sample, participants’ knowledge of psychological research may be much
more limited than that of college students who are immersed in it, thus requiring
greater attention to detail in the study description and informed consent process
(Hoerger & Currell, 2012).
Careful consideration of the justice principle is also necessary for research
that includes vulnerable populations. As stated in the Common Rule (Protection of Human Subjects, 2018),
Selection of subjects is equitable. In making this assessment the [institutional
review board (IRB)] should take into account the purposes of the research and
the setting in which the research will be conducted. The IRB should be particularly cognizant of the special problems of research that involves a category of
subjects who are vulnerable to coercion or undue influence, such as children,
prisoners, individuals with impaired decision-making capacity, or economically
or educationally disadvantaged persons. (§ 46.111(a)(3))
Applying these principles to internet-based modalities can be challenging as
there are no in vivo interactions with participants in most cases, making it
difficult to verify participants’ identities and vulnerabilities.
Specific recommendations have been outlined to detect and prevent
“fraudsters” (Teitcher et al., 2015), which may assist in this area; however, a
separate consideration is ensuring that participants actually belong to the population or sample the researcher has set out to study (e.g., adults with anxiety disorders). Recommendations related to sample selection and the identity of participants are covered later in the chapter. These considerations are especially important when persons for whom the study was not intended (e.g., children) could take part in the research.
SCIENTIFIC INTEGRITY
Scientific integrity is a fundamental responsibility of researchers. The expansion of data collection methods greatly broadens researchers’ ability to assess
phenomena while also requiring the evolution of protocols and safeguards
for ensuring the integrity of data. The validity of data has been the topic of
recent discourse as online crowdsourcing studies have been compromised by
fraud (Teitcher et al., 2015). As with all human subjects research, random and
invalid responses contaminate data sets. Researchers should conduct careful
and thorough checks to identify these responses, manage them in the data
cleaning and analysis process, and report them transparently when disseminating findings. This responsibility lies with the researcher and is a primary
ethical consideration in the integrity of the data and conclusions that are
released to the scientific and public community. Nosek and colleagues (2002)
described a number of ways in which the validity of data from online studies may be compromised by deceptive respondents, as well as the broader implications of, and recommendations for managing, the reduced experimental control that researchers have over internet-based protocols.
The Common Rule, originally published in 1991 and updated in 2018, is
largely based on the Belmont Report (National Commission, 1979). The
updated version of the Common Rule now includes specific sections relevant
to aspects of advancing technological research environments. Relevant to this
chapter, the Common Rule specifically addresses requirements for informed
consent and exempt research status. Electronically signed informed consent
(e.g., presenting the form and requiring the participant to “click” to agree) is
now acceptable (§ 46.117). The absence of personal or identifiable information
on a consent form can be of significant advantage in limiting connections
between identity and individual responses, particularly in internet-based
research when no other contact with the person will be made. In some cases,
a waiver of documentation of consent is appropriate (e.g., presenting a consent
form and not requiring a signature, rather allowing a check box only), as the
participant’s signature would be the sole link to their identity and data if it were
required. These changes, including the update to the Common Rule and the
ability to use a waiver of documentation of consent, can bolster the security
of anonymity when appropriate, limiting the chance of participants’ data
being compromised from online studies. However, increasing anonymity
in the consent process needs to be balanced with ensuring, as much as possible, that the participant is presented with consent documents that include
key information about the study and are “organized in a way that facilitates comprehension” (§ 46.116(a)(5)(i)). There may be modifications to
the informed consent process administered online to facilitate comprehension
(e.g., interactive components), though this may present additional ethical
considerations as it may “require outsourcing of the construction and/or hosting
of the site” (Emery, 2014, p. 300), which can introduce new and separate
confidentiality considerations.
Secondary uses of identifiable private information, if publicly available, are
now considered exempt research in the updated Common Rule (§ 46.104,
Exempt research). Such information may be obtained from public networking
sites, social media profile information, and so on (§ 46.104(d)(4)(i)). Distinct
from ecological momentary assessment, in which data are collected from
knowing and consenting participants, social environment sampling involves
observing and collecting data about bystanders (e.g., from an electronically activated recorder or social media) and may be mediated by the internet or
technology (Robbins, 2017). Guidelines specific to legal and ethical aspects
of social environment sampling can be found in Robbins (2017).
PREVENTIVE STEPS TO LIMIT ETHICAL CONUNDRUMS
In preparing to design and launch a new study, the first critical factor to
consider is whether the researcher and the assembled research team are adequately prepared and willing to manage the risks and burdens associated with
the proposed project. In the case of internet-based studies, this preparation may
require some modifications, which could be as simple as excluding the assessment of a high-risk behavior or issue the team is inadequately equipped to
manage (e.g., suicidality) or as complex as seeking out additional institutional
resources or adding personnel to the research team. In the sections that follow,
we discuss steps to safeguard against ethical concerns specific to internet-based
research based on the principles mentioned already in this chapter.
Respect for Persons
The largest consideration in relation to respect for persons in internet-based
research is informed consent. Because of the nuances of internet-based research,
the consent process is uniquely important and may be more complex than
traditional lab-based protocols. Although the in-person informed consent process can be thought of as a dynamic one, centering on the interaction between
researcher and participant, this type of interaction is often precluded in
internet-based research. Minimally, participants are asked to check a box
indicating they have read and understood the consent documents. However,
given that informed consent documents are long and participants may be
inclined to skim them, researchers conducting internet-based studies are obligated to take additional steps to ensure informed consent. These steps should
aim to increase participant engagement with and understanding of the study
and might include quizzing participants about the form or aspects of the
study, highlighting or bolding important words or phrases in the consent
document, or supplementing the consent form with audiovisual materials
(Kraut et al., 2004; Varnhagen et al., 2005). Researchers should also allow a
means for participants to obtain additional information, perhaps through a list
of common questions and answers related to the study (i.e., FAQs) or contact
information for the research team.
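To make the quizzing recommendation concrete, the following minimal sketch (in Python, with hypothetical item wording and section names) illustrates one way a brief consent comprehension check might gate entry to an online survey, returning the consent sections a participant should review before proceeding. It is an illustration of the idea, not a prescribed implementation.

```python
# Minimal sketch (hypothetical items) of a consent comprehension check that
# gates entry to an online survey: participants who miss an item are pointed
# back to the relevant consent section rather than being silently enrolled.

CONSENT_QUIZ = [
    {
        "question": "Will your individual answers be shared outside the research team?",
        "correct": "No",
        "review": "Confidentiality section of the consent form",
    },
    {
        "question": "Can you stop participating at any time without penalty?",
        "correct": "Yes",
        "review": "Voluntary Participation section of the consent form",
    },
]


def score_consent_quiz(responses: dict) -> tuple:
    """Return (passed, sections_to_review) for a participant's quiz responses."""
    to_review = [
        item["review"]
        for item in CONSENT_QUIZ
        if responses.get(item["question"]) != item["correct"]
    ]
    return (len(to_review) == 0, to_review)


if __name__ == "__main__":
    answers = {CONSENT_QUIZ[0]["question"]: "No", CONSENT_QUIZ[1]["question"]: "No"}
    passed, review = score_consent_quiz(answers)
    # Prints: False ['Voluntary Participation section of the consent form']
    print(passed, review)
```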
Sufficient consent documents disclose possible risks and costs to participants. For internet-based studies, participants are entitled to know both the
reality of data security and the measures in place to maintain that security.
Therefore, consent documents must include within the disclosure of potential
risk the possibility of breach of confidentiality during electronic data collection and a description of the safeguards in place to prevent this (discussed
later in this chapter). Costs unique to internet-based or ambulatory assessment studies may include internet or cellular data use. For studies in which
participants can use their own cellular devices to participate, use of cellular
data or phone minutes or texting constitutes a direct cost of participation and,
therefore, must be disclosed.
The consent documents are also the primary opportunity to describe the
study and intended participants; this ensures that the participants who enroll
in the study fit the intended sample. In this way, explicit and thorough consent
documents can provide a wall of defense against accidental or unintended
recruitment consistent with protecting the voluntariness of the study and the
overarching justice principle. As mentioned previously, participants’ right to
withdraw from the study needs to be protected; this is achieved through placing
an option to withdraw from the study at the bottom of every online study
page, or at least intermittently throughout the study protocol, in addition to
providing participants with explicit instructions on how to withdraw. If participants do opt to withdraw or discontinue participation, they should be provided
with debriefing and contact information for the research team.
Beneficence
The primary concern of the principle of beneficence is to avoid harming participants. The chief risk to participants associated with internet-based research
is breach of participant confidentiality. First and foremost, anyone intending
to conduct internet-based research must obtain a secure data collection
platform. Without a secure platform, ethical collection of data from human
research participants is impossible.
When assessing the security of a data collection platform, several factors
should be considered. First, additional security is provided if the platform holds
a Hypertext Transfer Protocol Secure (HTTPS) certificate (indicated by the
website path starting with “https://” instead of “http://”). This certificate indicates and verifies that the information transferred between the participant and
the website is encrypted and thus is more secure against electronic eavesdropping during data transmission. Second, the data should be stored by the
platform on secure servers. Many online data collection platforms now make
publicly available their security standards for review by researchers and/or will
provide documentation of their security upon request. This is common practice, and it could be considered a red flag if an online platform does not have
or provide such documentation. Depending on the nature of the planned
research, and in consultation with institutional requirements and the IRB,
researchers need to determine the level of security required of their data
collection platform. Some projects may require that data collection be done
through a HIPAA-compliant data collection platform (for a more complete
discussion of HIPAA and internet-based research, see Mathy et al., 2003).
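As a concrete illustration of the HTTPS point, the following minimal sketch (in Python, using only the standard library; the survey URL is hypothetical) shows two basic checks a researcher might run before launching a study: that the survey link uses HTTPS and that the host presents a certificate that validates against standard trust stores. Such checks supplement, rather than replace, review of the platform's published security and HIPAA-compliance documentation.

```python
# Minimal sketch: confirm a survey link uses HTTPS and that the host's TLS
# certificate validates. The URL below is hypothetical.
import socket
import ssl
from urllib.parse import urlparse


def check_survey_url(url: str) -> dict:
    parsed = urlparse(url)
    result = {"uses_https": parsed.scheme == "https", "certificate_valid": False}
    if not result["uses_https"]:
        return result
    context = ssl.create_default_context()  # verifies hostname and chain by default
    try:
        with socket.create_connection((parsed.hostname, 443), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=parsed.hostname) as tls:
                result["certificate_valid"] = True
                result["certificate_expires"] = tls.getpeercert().get("notAfter")
    except (ssl.SSLError, OSError):
        pass  # connection or verification failed; certificate_valid stays False
    return result


if __name__ == "__main__":
    print(check_survey_url("https://surveys.example-platform.org/study123"))
```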
An additional layer of complexity is added when considering the use of
smartphones and internet-enabled devices (e.g., smartwatches, tablets, smart
boards, televisions), given that the researcher cannot necessarily restrict on
what devices or in what locations a participant completes an internet-based
protocol. This is a concern for data security in three ways: (a) public or poorly
secured wireless networks can jeopardize the confidentiality and security of
participant data; (b) mobile technologies allow participants to access internet
services via their cellular network, and the security of mobile phone service
providers is variable and relatively unknown; and (c) participants are able to
complete protocols in nonprivate or nonsecure settings, which could infringe
on the privacy and security of their data.
If participants can participate in the study using a mobile device, researchers
should protect participant data and confidentiality by selecting a data collection platform whose secure servers are compatible with mobile data collection, encouraging participants to secure their internet-enabled devices with
passwords and anti-virus/spyware/malware software, and providing participants with explicit recommendations to maintain the security and privacy of
their information. Some researchers prefer developing a native phone application through which to collect data directly, depending on the study design
and focus, as this allows researchers to code the application themselves and
maintain the data on their own servers, thus providing more control over
the security of the data. These concerns become central to study design and
implementation within AA methodology in terms of both self-report and other
avenues of data collection (e.g., psychophysiological assessment; Carpenter
et al., 2016; Trull & Ebner-Priemer, 2013).
One confidentiality consideration that has plagued internet-based research
is whether Internet Protocol (IP) addresses constitute identifying information.
Although IP addresses can be unique to an individual’s computer, the technical
realities of IP addressing can dilute both their uniqueness and their
identifiability. The complexity of this issue and the evidence on all sides of the
argument warrant more thoughtful and careful discussion than can be given
here, so readers are referred to detailed and systematic reviews reported elsewhere (e.g., Hoerger & Currell, 2012; Nosek et al., 2002).
Justice
The principle of justice in relation to internet-based research primarily concerns
the recruitment and selection of participants, as was discussed in previous
sections of this chapter. To preserve this principle in action, researchers should
use best practices to recruit participants fairly and transparently. Screening
questions can help researchers to identify participants who are interested in
being recruited for a given study and also prevent accidental recruitment of
vulnerable populations. By placing these targeted questions at the start of the
online study, participants can be ruled in or out without placing unnecessary
demands on participant time and resources or contaminating a data set with
unusable data. Screening is particularly important when the entire study,
including recruitment, is conducted online without any in vivo contact with
the research team. This is also an important consideration when recruiting a
particular subset of individuals from a larger or more diverse online source.
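A minimal sketch of this kind of front-loaded screening logic (in Python, with hypothetical criteria) is shown below; ineligible respondents are routed out before any study data are collected.

```python
# Minimal sketch (hypothetical criteria): rule respondents in or out at the
# start of an online study, before any study data are collected.

def screen_participant(age: int, meets_study_criteria: bool) -> bool:
    """Return True only if the respondent may proceed to the full protocol."""
    if age < 18:
        return False  # route minors out immediately; collect no further data
    return meets_study_criteria


if __name__ == "__main__":
    print(screen_participant(age=17, meets_study_criteria=True))   # False
    print(screen_participant(age=24, meets_study_criteria=True))   # True
```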
Scientific Integrity
The validity and integrity of scientific endeavors are fundamental responsibilities of researchers. This responsibility requires researchers to update their
protocols and practices in tandem with new and changing technology. The
rise of internet-based research has sparked concerns about the validity of
data collected from participants entirely remotely when the researcher has
no individual or direct contact with participants. Although the anonymity
of internet-based research may feel novel and disconcerting, in many ways it
is not dissimilar to the lack of certainty researchers have in traditional research
settings about the accuracy or truth of participant responses. For issues
or questions that bear particular importance to the integrity of the research
(e.g., identity of participants, construct of interest in the study), it is certainly
important that internet-based studies make the best possible efforts to verify
or validate participant information or responses. Such efforts could include
adding duplicate or similar questions to use as validity checks, repeating
important screening questions at multiple points in the study protocol, or
asking for collateral (non–personally identifying) information for additional
assurance of identity (e.g., asking what year they were first eligible to vote in
a presidential election to verify age). For more specific suggestions on detecting
and counteracting participant misinformation, see the recommendations
provided by Teitcher and colleagues (2015).
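The following minimal sketch (in Python, with hypothetical item names) illustrates two such checks: comparing a repeated item across points in the protocol and comparing reported age against a collateral item, here simplified to the year the respondent turned 18.

```python
# Minimal sketch (hypothetical item names) of simple response-consistency checks.

def flag_inconsistencies(resp: dict, survey_year: int = 2021) -> list:
    flags = []
    # Repeated item answered differently at two points in the protocol.
    if "age_screen" in resp and "age_repeat" in resp and resp["age_screen"] != resp["age_repeat"]:
        flags.append("repeated age item answered inconsistently")
    # Reported age conflicts with a collateral item (simplified: year turned 18).
    if "age_screen" in resp and "year_turned_18" in resp:
        implied_age = survey_year - resp["year_turned_18"] + 18
        if abs(implied_age - resp["age_screen"]) > 1:
            flags.append("reported age conflicts with collateral year-turned-18 item")
    return flags


if __name__ == "__main__":
    consistent = {"age_screen": 22, "age_repeat": 22, "year_turned_18": 2017}
    suspect = {"age_screen": 22, "age_repeat": 19, "year_turned_18": 2010}
    print(flag_inconsistencies(consistent))  # []
    print(flag_inconsistencies(suspect))     # two flags
```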
Ensuring and assessing validity within ambulatory assessment studies must
go beyond the validity of participant responses; additional factors must be
considered, including the timeliness of response and accuracy or precision of
ambulatory recording devices (e.g., electrodermal activity). Many of these concerns can be addressed at the study design level. For example, ensuring that
the time stamp of responses is recorded can alleviate some concerns and assist
with data cleaning later, bolstering data integrity. Additionally, identifying
appropriate and reliable devices and pilot testing devices before using them in
the protocol are recommended so that potential problems with hardware and
software can be detected and resolved prior to data collection.
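As one illustration of handling timeliness at the design level, the minimal sketch below (in Python; field names and the response window are hypothetical) records a device-generated UTC time stamp with each response and flags whether the response arrived within the allowed window after the prompt.

```python
# Minimal sketch: store a UTC time stamp with each ambulatory assessment
# response and flag whether it arrived within the allowed window.
from datetime import datetime, timedelta, timezone


def record_response(subject_id: str, prompt_time: datetime, item: str, value: int,
                    window_minutes: int = 30) -> dict:
    """Return a stored record with a UTC time stamp and a timeliness flag."""
    answered_at = datetime.now(timezone.utc)
    return {
        "subject_id": subject_id,
        "item": item,
        "value": value,
        "prompt_time": prompt_time.isoformat(),
        "answered_at": answered_at.isoformat(),
        "within_window": answered_at - prompt_time <= timedelta(minutes=window_minutes),
    }


if __name__ == "__main__":
    prompt = datetime.now(timezone.utc) - timedelta(minutes=12)
    print(record_response("S001", prompt, item="momentary_anxiety", value=3))
```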
The other primary recommendation to maintain scientific integrity in
internet-based research and studies using new technologies is to be transparent in reporting protocols and results in manuscripts and publications.
The complexity of internet-based research comes from the myriad choices
researchers must make in constructing and executing study design and
in analyzing and reporting findings. The replication difficulty that already
exists within psychology is likely to worsen for internet-based research unless
researchers make concerted efforts to improve their reporting of the specifics
of their study protocols and data analyses.
ETHICAL RESPONSES
Sometimes, in spite of researchers’ best efforts, ethical violations occur or
come to light. In this section, we discuss recommendations for what to
do when this occurs. These recommendations are not meant to replace or
preclude individualized recommendations or feedback such as can be obtained
from consulting peers, the local IRB, or the APA Research Ethics Office (https://
www.apa.org/science/programs/research). In fact, obtaining such consultation is our primary recommendation. Although it is established practice to
notify the IRB in cases of adverse events or unanticipated problems, the lack
of formalized policy and guidelines regarding internet-based research means
that the criteria for what constitutes a reportable concern may be ambiguous.
One way to prevent ethical violations (and to attempt to establish whether
one has occurred) is to have relevant licensed professionals and consultants
involved in every stage of the research process. These professionals and consultants should be selected with reference to the unique characteristics or highest
risk aspects of the study. They can serve both to help researchers traverse ethically ambiguous situations and to protect participants’ interests and safety. It is
best practice, for example, to have a licensed clinical psychologist involved in
any project that involves participants actively in mental health treatment or
projects that assess for high-risk behaviors (e.g., suicidality, homicidality, abuse),
which may require participant safety checks or meet criteria for mandated
reporting laws. Similarly, any studies involving medication administration or
monitoring should have a licensed physician on the research team and available
for consultation or for direct participant interaction during specified participation times. These professionals can then also be available for consultation
regarding ethical behavior and as informed experts in case of ethical violation.
As a general rule of thumb, given the ambiguity surrounding internet-based
research, we recommend seeking consultation from the local IRB if any concern
or question arises regarding ethical behavior within a specific study. If ambiguity
remains after consulting the IRB, or if the question does not pertain to a specific
study protocol, researchers should contact their own licensing or professional
association for guidance. Psychologists, regardless of membership, can contact
the APA Research Ethics Office directly and request consultation. Communication between the researcher, the relevant ethics board or office, and the IRB thus
becomes a critical tool for managing ethical breaches or concerns and for
preventing similar issues in the future.
Our second recommendation is to maintain open and transparent communication with everyone involved in the project, in addition to institutional
authorities. Disclosure of ethical violations and feedback given by consultants
to the entire research team allows for repair of the violating procedure or
process. Similar to protocols for in vivo studies, ethical violations in internet-based studies may warrant notification of participants. A scenario unique
to internet-based research is a breach of the data collection platform or servers
used by either the collection platform or the researcher. If there is reason
to believe a breach or unauthorized access of data has occurred, it may be
grounds to notify participants. This determination would need to be made in
conjunction with appropriate consultation, as discussed above. Transparency
regarding ethical concerns or violations may, in some circumstances, need to
extend to the reporting of the issues in manuscripts and publications. Again,
this reporting would depend on the scenario, and the decision to take this
step needs to be made in conjunction with consultants.
CASE EXAMPLE
A brief case example and commentary bring to life some of the information
we have provided in this chapter. Professor F is conducting a National Institutes of Health–funded study of young adults’ use of cannabis to reduce
anxiety and depression. She has chosen to recruit from her local community
(using community-based internet message boards) and to distribute an online
survey to those who, based on a brief screening online, appear eligible for the
study. The inclusion criteria include age 18+, not currently receiving treatment for cannabis or other substance use, not currently suicidal, and use of
cannabis at least twice per week. The study involves taking two online
surveys 2 weeks apart. Questions on the survey target cannabis use in the
past 2 weeks, use of other substances, depression symptoms, anxiety symptoms, and motives for and consequences of cannabis use. What are some of
the ethical considerations for this study, as well as remedies or protections
that might be built into the study?
For a study that assesses illegal activity, mental health information, and
potential suicidality or self-harm, potential participants must be informed
during the informed consent process of the possible risks of participating in
the study, as well as the study’s protection against these risks. Possible risks
relevant to this case example include (a) the potential for breach of confidentiality, (b) the request to answer questions related to illegal behaviors, (c) the
possibility of subpoena of data for legal (i.e., civil or criminal) purposes, and
(d) potential embarrassment or distress (e.g., because of disclosing substance
use or mental health issues). The protections the researcher has put in place against each of these risks (a–d) should then also be explicitly outlined, as follows:
a(1). All tracking information and web-signed consent forms will be stored in
a separate secure server from study data.
a(2). The software platform used in this study is HIPAA compliant, and electronic data will be password protected; stored on secure, password-protected local computers and servers; and kept separate from all identifiers.
a(3). Once the participant has completed the study, data will be kept on a
password-protected server at a HIPAA-compliant data center that can be
accessed only by approved research personnel with current human subjects training.
a(4). Data are stored separately from participants’ identifying information and
are linked only to a unique subject identification number. The unique
subject identification numbers are linked back to participants’ identifying
information only in a separate password-protected file on a secure server.
b, c. Professor F applied for, and the study was issued, a Certificate of Confidentiality, which provides some protection of participant data from subpoena in local, state, and federal jurisdictions.
d. Participants can refuse to answer any questions in the survey and can
withdraw from the study at any time.
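For readers who build their own data pipelines, the following is a minimal, hypothetical sketch of the separation principle described in protection a(4): survey responses are keyed only by a random subject identification number, and the ID-to-identity linkage is written to a separate file that would be kept on an independently secured server. This is not Professor F's actual system or any particular platform's API; the function names, file paths, and field names are illustrative assumptions only.

```python
import csv
import secrets

# Hypothetical illustration of protection a(4): study data are keyed only by a
# random subject ID; the ID-to-identity linkage lives in a separate file that
# would be stored on a different, access-controlled server.

def new_subject_id() -> str:
    """Generate a random, non-identifying subject ID (e.g., 'a3f9c210')."""
    return secrets.token_hex(4)

def enroll(name: str, email: str, linkage_path: str) -> str:
    """Record the identity-to-ID linkage in the separately stored linkage file."""
    subject_id = new_subject_id()
    with open(linkage_path, "a", newline="") as f:
        csv.writer(f).writerow([subject_id, name, email])
    return subject_id

def record_survey(subject_id: str, responses: dict, data_path: str) -> None:
    """Store survey responses keyed only by the subject ID, with no identifiers."""
    with open(data_path, "a", newline="") as f:
        csv.writer(f).writerow([subject_id] + list(responses.values()))

# Example usage (file names and survey fields are placeholders):
# sid = enroll("Jane Doe", "jane@example.com", "linkage_on_secure_server.csv")
# record_survey(sid, {"phq9_total": 7, "gad7_total": 5}, "study_data.csv")
```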
Professor F should also include in the consent documents information
regarding participant compensation and study procedures. Documentation of
the remuneration schedule should be provided that reflects what happens if
a participant withdraws or fails to provide complete data at both time points.
Discussion of withdrawal should also include the procedure for participant
debriefing. Given that this protocol assesses mental health symptoms, participants must be informed of study procedures to monitor and ensure their
health and safety, including steps that will be taken (and by whom) if
participants reveal information indicating that they may be a danger to themselves or others and steps that will be taken if participants reveal information
suggesting extreme depression or distress that warrants clinical attention. When a study involves assessing current mental health or recruiting a sample of individuals at higher risk for mental health problems, it is also best practice to provide a list of mental health resources that participants can access if they feel it is necessary.
We hope this case example highlights the complexity and breadth of
informed consent in internet-based research. It reinforces our recommendation that the consent process involve a short quiz for participants that provides
the researcher with more confidence that the participants read the consent
form carefully and understood it (especially for studies in which there is no
in-person contact).
CONCLUSION
In this chapter, we have provided a brief overview of ethical challenges and
some guidelines for conducting research using the internet and mobile technologies. Use of these technologies presents some unique ethical considerations, especially pertaining to privacy, confidentiality, and potential risk.
There are certainly methods to mitigate these concerns and risks, but they require forethought and well-conceived design and implementation in order to protect participants and minimize the risks of participating in these studies.
REFERENCES
American Psychological Association. (2002). Psychological research online: Opportunities and challenges. https://www.apa.org/science/leadership/bsa/internet/internetreport.aspx
American Psychological Association. (2017). Ethical principles of psychologists and code
of conduct (2002, amended effective June 1, 2010, and January 1, 2017). http://
www.apa.org/ethics/code/ethics-code-2017.pdf
British Psychological Society. (2017). Ethics guidelines for internet-mediated research.
https://www.bps.org.uk/news-and-policy/ethics-guidelines-internet-mediatedresearch-2017
Carpenter, R. W., Wycoff, A. M., & Trull, T. J. (2016). Ambulatory assessment: New
adventures in characterizing dynamic processes. Assessment, 23(4), 414–424. https://
doi.org/10.1177/1073191116632341
Emery, K. (2014). So you want to do an online study: Ethics considerations and lessons
learned. Ethics & Behavior, 24(4), 293–303. https://doi.org/10.1080/10508422.2013.
860031
Gupta, S. (2017). Ethical issues in designing internet-based research: Recommendations for good practice. Journal of Research Practice, 13(2), Article D1.
Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, 110
Stat. 1936 (1996). https://www.govinfo.gov/content/pkg/PLAW-104publ191/pdf/
PLAW-104publ191.pdf
Hoerger, M., & Currell, C. (2012). Ethical issues in internet research. In S. J. Knapp,
M. C. Gottlieb, M. M. Handelsman, & L. D. VandeCreek (Eds.), APA handbook of
ethics in psychology: Vol. 2. Practice, teaching, and research (pp. 385–400). American
Psychological Association. https://doi.org/10.1037/13272-018
Jennings, J. M. (2014). Can I learn to stop worrying and love online research? Monitor
on Psychology, 45(5), 52. https://www.apa.org/monitor/2014/05/online-research
Kraut, R., Olson, J., Banaji, M., Bruckman, A., Cohen, J., & Couper, M. (2004). Psychological research online: Report of Board of Scientific Affairs’ Advisory Group on the
Conduct of Research on the Internet. American Psychologist, 59(2), 105–117. https://
doi.org/10.1037/0003-066X.59.2.105
Mathy, R. M., Kerr, D. L., & Haydin, B. M. (2003). Methodological rigor and ethical
considerations in internet-mediated research. Psychotherapy: Theory, Research, Practice,
Training, 40(1–2), 77–85. https://doi.org/10.1037/0033-3204.40.1-2.77
National Commission for the Protection of Human Subjects of Biomedical and Behavioral
Research. (1979). The Belmont report: Ethical principles and guidelines for the protection
of human subjects of research. https://www.hhs.gov/ohrp/regulations-and-policy/
belmont-report/read-the-belmont-report/index.html
Nosek, B. A., Banaji, M. R., & Greenwald, A. G. (2002). E-research: Ethics, security,
design, and control in psychological research on the internet. Journal of Social Issues,
58(1), 161–176. https://doi.org/10.1111/1540-4560.00254
Protection of Human Subjects, 45 C.F.R. Part 46 (2018). https://www.hhs.gov/ohrp/
sites/default/files/ohrp/policy/ohrpregulations.pdf
Robbins, M. L. (2017). Practical suggestions for legal and ethical concerns with social
environment sampling methods. Social Psychological & Personality Science, 8(5), 573–580.
https://doi.org/10.1177/1948550617699253
Roy, A. L. (2017). Innovation or violation? Leveraging mobile technology to conduct
socially responsible community research. American Journal of Community Psychology,
60(3–4), 385–390. https://doi.org/10.1002/ajcp.12187
Teitcher, J. E., Bockting, W. O., Bauermeister, J. A., Hoefer, C. J., Miner, M. H., &
Klitzman, R. L. (2015). Detecting, preventing, and responding to “fraudsters” in
Internet research: Ethics and tradeoffs. The Journal of Law, Medicine & Ethics, 43(1),
116–133. https://doi.org/10.1111/jlme.12200
Trull, T. J., & Ebner-Priemer, U. (2013). Ambulatory assessment. Annual Review of Clinical Psychology, 9(1), 151–176. https://doi.org/10.1146/annurev-clinpsy-050212-185510
Varnhagen, C. K., Gushta, M., Daniels, J., Peters, T. C., Parmar, N., Law, D., Hirsch, R.,
Takach, B. S., & Johnson, T. (2005). How informed is online informed consent? Ethics
& Behavior, 15(1), 37–48. https://doi.org/10.1207/s15327019eb1501_3
13
Research in and With Communities
Mary Cwik
https://doi.org/10.1037/0000258-013
Handbook of Research Ethics in Psychological Science, S. Panicker and B. Stanley (Editors)
Copyright © 2021 by the American Psychological Association. All rights reserved.
Increasing emphasis and effort have been placed on conducting studies in
community settings instead of university-driven research based in labs or
academic hospitals (Binns et al., 2007; Gilbert et al., 2008; Graham et al., 2007;
Mold & Peterson, 2005; Westfall et al., 2007). This shift has been emphasized
for many reasons, including the potential for having more representative or
diverse study samples, ensuring that results generalize to real-world settings,
and addressing intervention implementation and sustainability concerns as
part of the initial research. Accompanying the momentum for research in
community settings has been a shift in research methods to reflect community-based participatory research (CBPR), which represents research with communities (Minkler & Wallerstein, 2008; National Center for Advancing Translational
Sciences, 2021; National Institutes of Health, 2007). Research in and with
communities does not necessarily have more or fewer ethical dilemmas than
other types of research. However, several overarching themes emerge: Do the
community settings that investigators conduct research in and with represent
more vulnerable populations? Do human research participant protections that
were designed to protect individuals also safeguard the communities in which
and with whom the studies are conducted? Can research in and with communities become a determinant of health?
For research in and with communities, this chapter outlines the main ethical
challenges, highlights ethical guidelines and other relevant standards, recommends preventive steps, and presents examples from the author's own research with and in Indigenous communities. Many of the articles reviewed
are Native-focused but apply to all community-based research, including studies with other rural, disadvantaged, or marginalized communities.
ETHICAL CHALLENGES POSED BY RESEARCH IN AND WITH COMMUNITIES
Researchers face several ethical dilemmas when conducting studies with communities. Historical and cultural factors are related both to the specific research
project and more broadly to how a community perceives and engages in research.
The risks and benefits of the research also must be considered beyond the individual level, from the community perspective. In addition, ethical issues may
be encountered regarding who has the right and ability to approve research
and to own the data on behalf of a community and what the multiple layers of
approval are or should be in addition to an institutional review board (IRB).
History and Views of Research
One of the most important concerns related to ethics, which also largely determines whether researchers can even start a project and, if they start, whether
they will be able to recruit participants, is the history and views of research
generally by participants and the community. Historical events themselves can
also have an overarching impact; for example, the colonization of Indigenous
traditional practices might affect communities’ willingness to include these
domains in research or even to subject a community to an enterprise that
might be viewed as a Western way of knowing and potentially recolonizing
or retraumatizing (Caldwell et al., 2005). The opposite dynamic also exists—
historical events, lack of knowledge about history, personal experiences, and
societal norms shape the ways researchers interact with participants and
communities (Caldwell et al., 2005).
Many Indigenous community members and leaders have voiced concerns
that they are being exploited and overresearched in some instances, describing
feeling as if they were the “most researched people in the world” or “researched
to death." In a qualitative study to better understand how low-risk community research was perceived by community members, Williams et al. (2010) found that
several important risks were discussed. Participants described experiencing
labeling, stigmatization, discrimination, job losses, increased insurance rates or
denial of services because of high rates of nonresponse to a traditional treatment, lack of investment in a new treatment if participants were labeled as
nonresponders, and loss of income at the community level (Weijer & Miller,
2004; Williams et al., 2010). Thus, harms can be diffuse, going beyond the
individual and perceptions of others (Weijer & Miller, 2004). Threats to a
sense of community identity and belonging are also important to consider
because some researchers trying to understand the causes of psychological
(and other) types of problems may disagree with community knowledge about
the meaning, causes, and treatments of these issues (Weijer, 1999).
Approval of Research
Another ethical challenge relates to the concept and structure of ethics review
boards or IRBs. Researchers should be mindful of the differences between the
conduct of ethical research and the process of ethics review committees (Honan
et al., 2013). The standard model of ethics appears to be based on the assumption
that if individual rights are protected, then the community is protected; yet
individual interests are distinct from community interests, and people do not
view themselves individualistically in general (Weijer, 1999). For example, personal health information can be handled securely and then destroyed, yet harm can still occur if the disseminated results report that a certain community is at higher risk for a stigmatized mental health disorder.
Researchers need to think carefully about at what levels and by whom the
research needs to be approved and the resulting implications. The research process might be thought of as reversed because of differences in worldviews, with
community approval first and most important to community partners (rather
than the centrality of an academic IRB, which often is approached first and has
the final approval authority). Some communities already have authority to
make decisions on behalf of their members (Weijer & Miller, 2004). For example,
in the author’s work, community partners at the individual level, plus the Tribal
Health Board and Tribal Council, approve our research before we seek funding,
and if funded, we then seek university IRB approval. Often there is another
level of approval through the area office for the Indian Health Service. Implications include additional time for this process to be completed and the need
to integrate differing and sometimes contradictory feedback from the many
approving bodies. There are also power-based dynamics in play, including the
power of the IRB to decide what is ethical research for the community (Honan
et al., 2013), and tensions between macro ethics, which are often dictated by
academic Western contexts, and micro ethics, which are carried out by local
researchers in the community context. When IRBs have the power, they may
unknowingly perpetuate social exclusion, silencing individuals or communities
collectively. There have even been cases in which, despite scientific and community approvals, academic IRBs rejected studies (Flicker et al., 2007).
Failure to Understand the Cultural Context
A central ethical issue is researchers’ failure to understand the cultural context
of the communities in which and with whom they conduct research, which
can lead to misinterpreting the data and, in psychological research specifically, the causes and consequences of behavior (Caldwell et al., 2005). There
are differences in sociocultural norms, ways of knowing, and understandings
of ethics in different societies, cultures, and communities (Honan et al., 2013).
Researchers must realize, too, that it is practically impossible to be truly objective or independent—their perspectives influence the full spectrum of the
research process from specification of research questions, methods, and instruments to conclusions and recommendations (Segall et al., 1998). These “biases,”
or tendencies to think from one’s own cultural perspective (culture-centric
error), pervade both qualitative and quantitative methodologies. Some
surface-level cultural factors may seem obvious even to less experienced
researchers in this area, such as the need to translate materials and have
research staff who also speak participants’ primary languages (Caldwell et al.,
2005). Other factors affected by culture are much more complex, such as how
“illness” is viewed and expressed verbally and physically. Interventions have
also been based primarily on Western models, which may differ markedly
from the community’s values and worldviews. For example, early in the author’s
career, when Native American participants discussed “hearing voices,” she
was quick to ask whether they had been diagnosed with schizophrenia from
a Western lens, when instead it is a normal part of some Native cultures to
hear the voices of ancestors and deceased loved ones.
Risks and Benefits of Research
The benefits of research are in the eye of the beholder, and researchers often
fail to truly assess the risks and benefits of research, especially research of their
own; they also may weigh risks and benefits differently than do participants,
individually and communally. When it comes to research in and with communities, it is important to realize that the concept of benefits and what the
benefits might be can differ across individuals, communities, and cultures—
the benefits researchers identify, for example, can be very abstract, whereas
those identified by the individual and community might be more explicit and
concrete (Caldwell et al., 2005).
In Indian Country, one of the biggest ethical dilemmas related to risks and
benefits is what the control condition will be in an intervention study—
withholding treatment is mostly viewed as unethical by the communities. In
addition, it is not standard practice for researchers to identify community
harms that could expand to nonparticipants and that often persist after the
study is complete (Flicker et al., 2007). Many investigators who have focused
their research in communities would also argue that the work not only
has to do no harm but also has an obligation to build on and build up the
strengths in the communities in which they conduct studies. This reciprocity
could consist of providing resources, skills, employment, and training. Finally,
some participants have reported feeling that the informed consent process
did not guarantee complete understanding or confidentiality, that current
human research participant requirements were not adequate to protect or
assist their communities, and that the research did not seem to improve their
communities (Williams et al., 2010).
Data
Another set of challenges relates to data, including ownership, cultural appropriateness, interpretation, and dissemination (Caldwell et al., 2005; Mihesuah,
1993; Stubben, 2001; Weijer, 1999). First, researchers must think about who
“owns” the data. Many tribes, for instance, will not engage in research unless
they ultimately own the data. The implications are that researchers must seek
approval for publication and other uses of the data, even when a university
researcher is the principal investigator. This issue is becoming more complex
now that many federal funding agencies are requiring open access to data;
waivers have been granted for many tribes, but this may not always be the case.
Second, protecting participant privacy can pose a unique challenge for small
or interconnected tribes and communities. Some communities want their name
acknowledged in publications; however, when the research topic is sensitive,
the group may not want their identity listed (Caldwell et al., 2005). A related
challenge is that researchers must balance having one or several communities
represented. Research that focuses on several communities helps with both
ethical (e.g., privacy) and methodological issues (i.e., sample size), whereas
research with individual communities can be essential for local participation,
relevance, and planning and can buffer against overgeneralization and the
implication that similar communities are homogeneous (Caldwell et al., 2005).
Finally, a critical ethical dilemma is how to avoid stereotyping, stigmatization, and bias in the resulting data. Non-Native interpretation of research
outcomes can misrepresent people’s behaviors and may lack appropriate context.
In the study by Williams et al. (2010), participants felt that the researchers failed to communicate results adequately (or provided descriptions that were too technical or not practical), to continue programs, and to assist with follow-through.
TYPES OF ETHICAL VIOLATIONS
The ethical challenges identified above reveal some of the most common types
of ethical violations that occur when working in and with communities. The most
serious type of violation is inadequate protection from potential physical, emotional, financial, or social harm posed by research activities, including community stigmatization. Another serious violation, but one potentially less obvious
to inexperienced researchers in this area, is an actual or perceived lack of
respect for and understanding of cultural knowledge. A second common set of ethical violations relates to participants' understanding of the extent of their participation, whether because of participants' basic literacy and health literacy levels, their unfamiliarity with research terminology, or the researcher's failure to obtain consent in participants' primary language. In addition, participants with few financial resources may be coerced by incentives.
A third group of ethical violations relates to data and includes perceived or actual lack of respect for privacy and failure to protect individual and
group identity. Specifically, unauthorized public dissemination of research
can perpetuate negative stereotypes (group harm), and undisclosed uses of
information and samples collected from Native participants, for instance,
have led to data interpretation that conflicts with and disrespects traditional
tribal cultural knowledge.
A final group of ethical violations relates to the methods and the research enterprise. Research may be conducted to advance researchers' academic careers at
the expense of communities, may waste resources on inappropriate methodologies, or may be insensitive to community priorities (Flicker et al., 2007).
Ethics boards may also not be familiar with and accepting of nontraditional
forms of research (e.g., theater, digital storytelling) that communities prefer to
use (Flicker et al., 2007).
ETHICAL GUIDELINES AND OTHER STANDARDS FOR RESEARCH IN AND WITH COMMUNITIES
Several authors have been critical of the principles of the Belmont Report—
respect for persons, beneficence, and justice—which were designed to protect
individual research participants (National Commission for the Protection of
Human Subjects of Biomedical and Behavioral Research, 1979). These authors
have argued for an expansion of the Belmont Report principles to include a
fourth principle: respect for community (Weijer, 1999, 2000; Weijer & Anderson,
2002; Weijer et al., 1999). A number of national organizations have developed
guidelines to minimize risks and increase benefits to communities from research
(Canadian Institutes of Health Research, 2013; Hueston et al., 2006; National
Health and Medical Research Council, 2003; Quinn, 2004; Sharp & Foster, 2002;
Weijer, 1999; Weijer & Anderson, 2002), and several authors have proposed
concrete steps for consideration of communities in review processes (Flicker
et al., 2007; Hueston et al., 2006; Sahota, n.d.; Weijer, 2000; Weijer et al., 1999;
Weijer & Miller, 2004; Williams et al., 2010). Respect for communities is
particularly important when those communities are easily identifiable or have a
history of discrimination against them (Gbadegesin & Wendler, 2006; Sharp &
Foster, 2002; Weijer & Anderson, 2002).
Despite this attention, a study of IRB processes at 30 research institutions
found that many elements of respect for communities were missing (Flicker
et al., 2007; McGrath et al., 2009). For example, none of the institutions asked
about community involvement in defining the problem or communal consent
processes, only four institutions asked about broader community risks and
benefits (often societal), and only five asked about data dissemination. Under
federal regulations, IRBs are required to include at least one community representative who can identify local risks that might go unnoticed, but there are
times when this requirement might not suffice (Sharp & Foster, 2002). One of
the most difficult aspects of IRB processes is developing grievance procedures
(Mihesuah, 1993; Williams et al., 2010). Using a community lens in addition to
the individual lens has several potential benefits: increased confidence of individuals and communities in the research, increased diversity and retention of
participants, and demonstration of respect for diversity (Sharp & Foster, 2002).
Generally, researchers operate from a stance, consciously or unconsciously,
that generating new knowledge is the sole domain of researchers who are traditionally in an academic setting (Flicker et al., 2007). Ethical guidelines for
research in communities, however, should reflect shared leadership, power, and
decision making (Macaulay et al., 1998; Sharp & Foster, 2002). Protection of
nonparticipants and socially identifiable communities is challenging to capture
in regulatory language because it can be difficult to define “community” and for
researchers to interpret and apply principles such as respect for communities
(Sharp & Foster, 2002). Community consent and community consultation have
been suggested as ways to protect communities that are the subject of or
engaging in research (Weijer & Miller, 2004), and accompanying protections
have been proposed from a pharmacogenetic lens that can be applied to
psychological research (see Weijer & Miller, 2004, for a description).
Research agreements and practical tools and checklists have been and can
be developed by researchers, community representatives, and IRBs (Macaulay
et al., 1998). Specific proposed deliverables include training IRB members
in CBPR, requiring local approval and memoranda of understanding, and
documenting how communities were consulted in making key decisions,
with changes in protocol forms that investigators submit to the IRB to include
these and other items related to the community (Flicker et al., 2007). IRBs
also might be missing a chance to educate researchers (Flicker et al., 2007).
Other authors have recommended working directly with the community on
study design to minimize group harms, discuss potential group harms for inclusion in the consent form, and consider reporting group harms in the dissemination of research results (Sharp & Foster, 2002). In terms of data, some of
the suggested guidelines have specifically emphasized that communities should
have the opportunity to comment before publication, to have ways to stop
the research, and to have opportunities to publish dissenting opinions if they
exist (Macaulay et al., 1998). Macaulay et al. (1998) provided an example of
a code of research ethics that must be accepted and adhered to by all partners (see also Macaulay et al., 1999).
Tribal communities are increasingly forming and requiring approval by
their own review boards and establishing codes for research that require discussion of how the research study would affect the community (Cornwall &
Jewkes, 1995; Manson et al., 2004). Policies governing tribal IRBs have several
distinctive characteristics: They often do not have an expedited review process;
may require proof of community support; may require support or sponsorship
from a specific local department, association, or other political or cultural entity;
often require researchers to change research protocols and consent and assent
documents; may require regular in-person progress reports; and may require
approval of press releases, presentations, and publications. Some have argued
that only elected political and religious leaders should review and approve
research activities (vs. one or two individuals; Mihesuah, 1993).
PREVENTIVE STEPS FOR RESEARCH IN AND WITH COMMUNITIES
Making judgments about ethics can be challenging to researchers because of
potentially conflicting roles and circumstances, but these challenges can be
proactively addressed when partnerships are built on trust relationships and
adhere to community-based principles (Williams et al., 2010). For example,
challenges include how to weigh individual against community risk–benefit
ratios, what to do when different stakeholders or parts of a community do not
agree, how to best identify representative members when a community is
diffuse, and what to do when a community does not want research findings
to be published. The sponsor of the research may also have agendas, rules,
and expectations that are different from or in conflict with those of the community participating in the study. Furthermore, issues such as researchers’ lack
of cultural competence, relatively high rates of poverty and illness among
participants, and prevalent infrastructure deficits can exacerbate ethical problems in some communities.
Trust
The community, community-based researchers, and academic researchers
should all be equal partners (Caldwell et al., 2005; Macaulay et al., 1999).
Participatory and collaborative research balances the researcher’s agenda with
the needs and wants of the community, building a foundation of trust among
all involved—without trust, significant problems are likely. Genuine and equal
partnerships call for both researchers and communities to initiate issues to be
researched and to share equally in any resulting benefits. Researchers who show
authentic concern, as well as a willingness to learn and be a part of the community, can cultivate trust and efficacy, ideally as a precursor to research in the
community. This reciprocity, at times, must balance the tension between process and outcomes (Caldwell et al., 2005; Flicker et al., 2007). The ultimate
result of a trust-based relationship between researcher and community is
community empowerment and self-determination—the communities assume
ownership of the research process and use the results to improve their quality
of life.
Community-Based Participatory Research Process
Community-based participatory research is a process that by its very essence
deals proactively with ethical challenges that traditionally arise within a biomedical framework, even though most would not consider CBPR to have
been developed as an ethical framework. Community-based, collaborative,
and participatory research engages community members as full partners,
benefits the communities whose data are being gathered, and empowers
people to set the research agenda on issues that affect them (Caldwell et al.,
2005; Cornwall & Jewkes, 1995; Israel et al., 1998; Minkler & Wallerstein,
2008). However, there are specific dilemmas associated with CBPR that often
are not addressed in standard ethics reviews and that involve the research
agenda, timing, benefits to the community, collaboration on methodology,
dissemination of findings and benefits, and sustainability (Flicker et al., 2007).
Research Agenda
In CBPR, the first question researchers need to ask themselves is who is
driving the research agenda—is it the researchers, the academic institution,
state or federal policy makers, community leaders, or grassroots activists? The
research agenda should always be based on what is known, what is not known,
and identified gaps between the two, as it is in other types of research. Different
communities and, in the author’s work, tribes also have varying interests in
or current capacity for research—it is important to meet communities at their
level of readiness.
Another critical consideration is that the research process often focuses on
deficits and problems (traditionally how researchers and funders drive the
agenda), not strengths, assets, or resilience (typically how communities strive
to view the same research topic). The author's research team does not endorse a single standard approach; who is considered the driver depends on the individual circumstances, but ideally a project should involve all relevant partners. For example, researchers might have passion, expertise, and an idea that they mold to fit an existing grant announcement; they then work in partnership with community stakeholders to obtain buy-in and shape the idea to meet the researchers' own needs. Alternatively, a tribal community might want to address a certain health disparity in their community and seek collaboration with local or Native American researchers; these researchers then search for funding that fits the community's needs and do not require changes to the community's original ideas.
Timing
Timing is important, with CBPR processes needing to occur from the outset.
A model approach would be to have a tribe or community reach out to a
researcher about a topic members want to learn more about. In other scenarios, a researcher would form an advisory board or gather feedback from key individuals in the community while developing the research question or identifying a funding announcement of interest. For example, the author was working with a community on a proposal about nonsuicidal self-injury (NSSI) and included the community early and often while developing
the proposal. The community expressed that although NSSI was a priority, they
felt that binge drinking was a more pressing problem, and the grant proposal
evolved to address this topic instead.
An important aspect of CBPR is ensuring authentic community representation. Representation will vary on the basis of the community, the research question, and the researcher. Through its representatives, the community needs to be actively involved at all levels of the research process. For
example, community advisory boards are often formed to ensure representation and could include individuals from local leadership, state or county
agencies, health and human services departments, and potential participants.
In the beginning of relationships and new studies, these boards might meet
as often as monthly. A caveat is that researchers may inadvertently cause
conflict among community members, and individuals may disagree on the
importance of a research topic (Flicker et al., 2007). Researchers can consider
using a consensus process to resolve disagreements about research topics and/or
qualitative methods such as free listing and ranking of community concerns to
help set priorities.
Benefits to the Community
An often-downplayed step is conducting a risk–benefit analysis and evaluating
the perceptions of related factors among community members. Benefits can
and should be reciprocal, with capacity-building opportunities to be discussed
prior to initiation of research; such benefits might include technology, program
implementation and improvement, health care (e.g., diagnostics, treatment),
information, knowledge, and education. Some authors would even say that
benefits to the community and individuals need to be maximized (Macaulay
et al., 1999). For example, the author was conducting a case-control study of
youth engaging in binge drinking in a Native American community. There were societal benefits from the study, and warm referrals and handoffs to services were made for the youth, in addition to gift cards to reimburse them
for their time. However, local research assistants emphasized that these benefits
were not enough for the participants; they felt that even though the study did
not provide an intervention, the researchers could provide information that
would benefit the participants (while not affecting the research findings). Therefore, as a more meaningful benefit, psychoeducational materials on binge
drinking were handed out to participants at the end of the study.
Collaboration on Methodology
The importance of collaboration with the community about the researcher’s
choice of methodology cannot be overstated. Research methods must be
(a) acceptable to the community and any local oversight bodies, including
ethics boards; (b) adequate to answer the research question; and (c) a good fit
for the political, social, and cultural environment in which potential research
participants and their families live. For example, one of the tribes that the
author collaborates with had an interest in adapting and evaluating an existing
evidence-based intervention for suicidal youth. The original intervention was
delivered in an emergency department (ED) setting. This method of delivery was not acceptable to our community partners, especially to the ED, because of staffing and space issues in that setting. The author and community partners
revised the intervention to be delivered at home or in a study office shortly
after youth were discharged from the ED.
Dissemination of Findings and Benefits
There needs to be joint knowledge translation with dissemination of research
findings and benefits. Researchers have specified several domains related to
data that need to be decided up front: ownership, control, access, and possession (Schnarch, 2004). The gold standard is that all data collected from the
community belong to the community—researchers need to be cognizant of
what is rightfully considered to be the community’s own intellectual and
cultural property (e.g., traditional knowledge, traditional medicines). Certain
biological samples must be returned to research participants or their family
for proper disposition, and not just destroyed after the research is done.
Authors have highlighted that research data should be returned to the community at the end of the project, that the community should have full control
over the data, and that community members should be fully involved at the
time of publication (Macaulay et al., 1998). Data stewardship is one solution
that has worked in this area. For example, even on projects in which the
author is the PI, the data are owned by the tribes that are part of the specific
study. The author seeks approval for any presentations and papers and often
is not able to release those data for data sharing purposes.
Another facet is that the data from the research need to lead to knowledge
translation and to have local applicability and generalizability. Research results
often are not applied, are not “put into action,” do not directly benefit the
people, or are not even reported back to the tribe. For example, the Center for
American Indian Health, where the author works, developed a set of guidelines
(available from the author) to ensure better local dissemination of findings
through (a) seven or eight slides on the results, (b) a one-page lay summary
of the results, (c) letters or emails thanking participants, (d) presentations at
community meetings with time for questions and answers, (e) distribution of
the primary journal article, and (f) national outreach as appropriate.
Sustainability
Lastly, the research process should be a continuous circle, with sustainability
built in as a strong foundation. The information and understanding achieved
by research projects can help the researcher and the community identify
priorities, engage in community planning, and direct next research steps. What
issues affect the most people? What issues have significant adverse effects on
those impacted? What issues are people ready to address? What solutions
would or do work best?
For example, the suicide prevention research that the author has conducted
in partnership with one community has evolved in this circular pattern over
10 years. Early formative work with the community identified parenting as a
priority for suicide prevention through strengthening families. Then, the burden of youth suicide attempts was identified through surveillance, which also pointed to future research directions focused on binge drinking and social network analysis. More recently, protective factor data revealed the importance of
staying in school and connection with tribal identity and traditions; this
research has informed youth entrepreneur training and development of an
elder’s curriculum. Community approval and involvement necessitate certain
community characteristics, such as having an approving body of some sort to
provide community consent, representative individuals to serve on an advisory
board, and networks to disseminate results (Weijer & Miller, 2004).
CONCLUSION
The participatory process described in this chapter as a way to ethically conduct
community-based research takes months to a year, and some long-term
research partnerships are iterative over years and decades. CBPR can contradict notions of academic freedom and scientific independence and can be at
odds with traditional benchmarks used to measure a researcher’s success
(e.g., number of publications, number of grants as PI; Manson et al., 2004).
However, using a participatory process is vital to research generally and to the field of psychology, and it has been one of the author's most rewarding professional endeavors. More literature describing researcher–community relationships from stakeholder perspectives will be critical for helping ethics review committees and other psychologists who are new to engaging in this type of research (Sharp & Foster, 2002).
REFERENCES
Binns, H. J., Lanier, D., Pace, W. D., Galliher, J. M., Ganiats, T. G., Grey, M., Ariza, A. J.,
Williams, R., & the Primary Care Network Survey (PRINS) Participants. (2007).
Describing primary care encounters: The primary care network survey and the
National Ambulatory Medical Care Survey. Annals of Family Medicine, 5(1), 39–47.
https://doi.org/10.1370/afm.620
Caldwell, J. Y., Davis, J. D., Du Bois, B., Echo-Hawk, H., Erickson, J. S., Goins, R. T.,
Hill, C., Hillabrant, W., Johnson, S. R., Kendall, E., Keemer, K., Manson, S. M.,
Marshall, C. A., Running Wolf, P., Santiago, R. L., Schacht, R., & Stone, J. B. (2005).
Culturally competent research with American Indians and Alaska Natives: Findings
and recommendations of the First Symposium of the Work Group on American
Indian Research and Program Evaluation Methodology. American Indian and Alaska
Native Mental Health Research, 12(1), 1–21. https://doi.org/10.5820/aian.1201.2005.1
Canadian Institutes of Health Research. (2013). CIHR guidelines for health research involving
Aboriginal people. https://cihr-irsc.gc.ca/e/29134.html
Cornwall, A., & Jewkes, R. (1995). What is participatory research? Social Science &
Medicine, 41(12), 1667–1676. https://doi.org/10.1016/0277-9536(95)00127-S
Flicker, S., Travers, R., Guta, A., McDonald, S., & Meagher, A. (2007). Ethical dilemmas
in community-based participatory research: Recommendations for institutional
review boards. Journal of Urban Health, 84(4), 478–493. https://doi.org/10.1007/
s11524-007-9165-7
Gbadegesin, S., & Wendler, D. (2006). Protecting communities in health research from
exploitation. Bioethics, 20(5), 248–253. https://doi.org/10.1111/j.1467-8519.2006.
00501.x
Gilbert, G. H., Williams, O. D., Rindal, D. B., Pihlstrom, D. J., Benjamin, P. L., Wallace,
M. C., & the DPBRN Collaborative Group. (2008). The creation and development of
the Dental Practice-Based Research Network. The Journal of the American Dental Association, 139(1), 74–81. https://doi.org/10.14219/jada.archive.2008.0024
Graham, D. G., Spano, M. S., Stewart, T. V., Staton, E. W., Meers, A., & Pace, W. D.
(2007). Strategies for planning and launching PBRN research studies: A project of
the Academy of Family Physicians National Research Network (AAFP NRN). Journal of the American Board of Family Medicine, 20(2), 220–228. https://doi.org/10.3122/
jabfm.2007.02.060103
Honan, E., Hamid, M. O., Alhamdan, B., Phommalangsy, P., & Lingard, B. (2013).
Ethical issues in cross-cultural research. International Journal of Research & Method
in Education, 36(4), 386–399. https://doi.org/10.1080/1743727X.2012.705275
Hueston, W. J., Mainous, A. G., III, Weiss, B. D., Macaulay, A. C., Hickner, J.,
Sherwood, R. A., the North American Primary Care Research Group, & the Society
of Teachers of Family Medicine. (2006). Protecting participants in family medicine
research: A consensus statement on improving research integrity and participants’
safety in educational research, community-based participatory research, and practice
network research. Family Medicine, 38(2), 116–120.
Israel, B. A., Schulz, A. J., Parker, E. A., & Becker, A. B. (1998). Review of community-based research: Assessing partnership approaches to improve public health. Annual
Review of Public Health, 19, 173–202. https://doi.org/10.1146/annurev.publhealth.
19.1.173
Macaulay, A. C., Commanda, L. E., Freeman, W. L., Gibson, N., McCabe, M. L.,
Robbins, C. M., Twohig, P. L., & the North American Primary Care Research Group.
(1999). Participatory research maximises community and lay involvement. British
Medical Journal, 319(7212), 774–778. https://doi.org/10.1136/bmj.319.7212.774
Macaulay, A. C., Delormier, T., McComber, A. M., Cross, E. J., Potvin, L. P., Paradis, G.,
Kirby, R. L., Saad-Haddad, C., & Desrosiers, S. (1998). Participatory research with
Native community of Kahnawake creates innovative code of research ethics. Canadian Journal of Public Health, 89(2), 105–108. https://doi.org/10.1007/BF03404399
Manson, S. M., Garroutte, E., Goins, R. T., & Henderson, P. N. (2004). Access, relevance, and control in the research process: Lessons from Indian Country. Journal of
Aging and Health, 16(5 Suppl.), 58S–77S. https://doi.org/10.1177/0898264304268149
McGrath, M. M. S., Fullilove, R. E., Kaufman, M. R., Wallace, R., & Fullilove, M. T.
(2009). The limits of collaboration: A qualitative study of community ethical review
of environmental health research. American Journal of Public Health, 99(8), 1510–1514.
https://doi.org/10.2105/AJPH.2008.149310
Mihesuah, D. A. (1993). Suggested guidelines for institutions with scholars who
conduct research on American Indians. American Indian Culture and Research Journal,
17(3), 131–139. https://doi.org/10.17953/aicr.17.3.630943325746p3x4
Minkler, M., & Wallerstein, M. (2008). Community-based participatory research for health:
From process to outcomes (2nd ed.). Jossey-Bass.
Mold, J. W., & Peterson, K. A. (2005). Primary care practice-based research networks:
Working at the interface between research and quality improvement. Annals of
Family Medicine, 3(Suppl. 1), S12–S20. https://doi.org/10.1370/afm.303
National Center for Advancing Translational Sciences. (2021). Clinical and Translational
Science Awards (CTSA) Program. https://ncats.nih.gov/ctsa
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research. https://www.hhs.gov/ohrp/regulations-and-policy/
belmont-report/read-the-belmont-report/index.html
National Health and Medical Research Council. (2003). Values and ethics—Guidelines for
ethical conduct in Aboriginal and Torres Strait Islander health research. https://www.
nhmrc.gov.au/about-us/publications/values-and-ethics-guidelines-ethical-conductaboriginal-and-torres-strait-islander-health-research
National Institutes of Health. (2007). NCMHD community-based participatory research (CBPR)
initiative in reducing and eliminating health disparities: Intervention research phase (R24).
http://grants.nih.gov/grants/guide/rfa-les/RFA-MD-07-003.html
Quinn, S. C. (2004). Ethics in public health research: Protecting human subjects: The
role of community advisory boards. American Journal of Public Health, 94(6), 918–922.
https://doi.org/10.2105/AJPH.94.6.918
Sahota, P. C. (n.d.). Research regulation in American Indian/Alaska Native communities: Policy
and practice considerations. https://www.ncai.org/policy-research-center/initiatives/
Research_Regulation_in_AI_AN_Communities_-_Policy_and_Practice.pdf
Schnarch, B. (2004). Ownership, control, access, and possession (OCAP) or self-determination applied to research: A critical analysis of contemporary First Nations
research and some options for First Nations communities. International Journal of
Indigenous Health, 1(1), 80–95.
Segall, M. H., Lonner, W. J., & Berry, J. W. (1998). Cross-cultural psychology as a
scholarly discipline: On the flowering of culture in behavioral research. American
Psychologist, 53(10), 1101–1110. https://doi.org/10.1037/0003-066X.53.10.1101
Sharp, R. R., & Foster, M. W. (2002). Community involvement in the ethical review of
genetic research: Lessons from American Indian and Alaska Native populations.
Environmental Health Perspectives, 110(Suppl. 2), 145–148. https://doi.org/10.1289/
ehp.02110s2145
Stubben, J. D. (2001). Working with and conducting research among American Indian
families. American Behavioral Scientist, 44(9), 1466–1481. https://doi.org/10.1177/
0002764201044009004
Weijer, C. (1999). Protecting communities in research: Philosophical and pragmatic
challenges. Cambridge Quarterly of Healthcare Ethics, 8(4), 501–513. https://doi.org/
10.1017/S0963180199004120
Weijer, C. (2000). Benefit-sharing and other protections for communities in genetic
research. Clinical Genetics, 58(5), 367–368. https://doi.org/10.1034/j.1399-0004.
2000.580506.x
Weijer, C., & Anderson, J. A. (2002). A critical appraisal of protections for Aboriginal
communities in biomedical research. Jurimetrics, 42(2), 187–198.
Weijer, C., Goldsand, G., & Emanuel, E. J. (1999). Protecting communities in research:
Current guidelines and limits of extrapolation. Nature Genetics, 23(3), 275–280.
https://doi.org/10.1038/15455
Weijer, C., & Miller, P. B. (2004). Protecting communities in pharmacogenetic and
pharmacogenomic research. The Pharmacogenomics Journal, 4(1), 9–16. https://
doi.org/10.1038/sj.tpj.6500219
Westfall, J. M., Mold, J., & Fagnan, L. (2007). Practice-based research—”Blue Highways”
on the NIH roadmap. JAMA, 297(4), 403–406. https://doi.org/10.1001/jama.297.4.403
Williams, R. L., Willging, C. E., Quintero, G., Kalishman, S., Sussman, A. L., Freeman,
W. L., & the RIOS Net Members. (2010). Ethics of health research in communities:
Perspectives from the southwestern United States. Annals of Family Medicine, 8(5),
433–439. https://doi.org/10.1370/afm.1138
14
Legal and Ethical Requirements for Research With Minors
Mary Ann McCabe and Maryland Pao
https://doi.org/10.1037/0000258-014
Handbook of Research Ethics in Psychological Science, S. Panicker and B. Stanley (Editors)
In the public domain.
Minors are defined in federal regulations as a "vulnerable population" in
the context of research. Yet there is an ethical imperative to conduct
research with minors1 in order to fully understand human development, education, and mental and physical health conditions that occur during infancy,
childhood, and adolescence and the corresponding preventive interventions
and treatments, as well as the social circumstances unique to this population.
Minors would be unduly harmed if, compared with adults, they were not able
to benefit from a robust science base or if scientific advances were of questionable applicability to them.
A seminal body of literature has elucidated the basic ethical, regulatory, and
developmental issues involved with research with minors (e.g., Hurley &
Underwood, 2002; Institute of Medicine [IOM], 2004; Kuther & Posada, 2004;
Miller et al., 2004; Rossi et al., 2003). This chapter briefly reviews these core
issues and also focuses on recent advances and innovation in terms of both
minors’ participation and the research in which they are invited to participate.
1 In this chapter, minors are defined as "children" in federal regulations, inclusive of infants, children, and adolescents who have not reached the legal age for consent (i.e., birth–17 years). When referring to a specific age period, that specific terminology is used in this chapter.
This chapter was coauthored by employees of the United States government as part of
official duty and is considered to be in the public domain. Any views expressed herein
do not necessarily represent the views of the United States government, and the
authors’ participation in the work is not meant to serve as an official endorsement.
UNIQUE ETHICAL ISSUES IN RESEARCH WITH MINORS
The fundamental ethical principles underlying research with human subjects
outlined in Chapter 1 of this volume (respect for persons, beneficence, and
justice) fully apply to minors, as do considerations of risk and benefit and
compliance issues addressed in other chapters. However, there are unique
issues involved in research with minors, primarily stemming from (a) their
legal status relative to consent processes and (b) their warranting additional
protections from harm in research contexts (IOM, 2004). Core aspects of Subpart A of the U.S. Department of Health and Human Services (HHS) regulations (45 C.F.R. Part 46), also called the “Common Rule” (HHS, 2018a), apply
to minors (i.e., definitions of minimal risk, criteria for exemption or expedited
review, confidentiality protections, and rules for waiver of consent or parental
permission), and the unique consent requirements and additional protections
are provided in Subpart D. (Subpart B applies to pregnant minors, and Subpart C applies to incarcerated minors.) The HHS Office of Human Research
Protections (OHRP) provides easy access to guidance for research with minors
(HHS, n.d.). Importantly, a task force of the Society for Research in Child Development developed recommendations that can be considered state of the art
regarding human subjects protections for minors (Fisher et al., 2013). The task
force authors cautioned that viewing all minors as “vulnerable” sometimes
triggers institutional review boards (IRBs) to be overly protective or to overestimate research risk. They argued that minors deserve an equitable share of
both “the burdens and benefits of research” (p. 4). They recommended that
IRBs should be directed to define vulnerable child populations in terms of special
needs or protections that may arise from medical illness; physical, emotional or
intellectual disability; or unsafe environments, including children who may be
abused, homeless, or living in war-torn countries. (p. 4)
Categories of Research
Federal regulations (Subpart A, § 46.102) define minimal risk as follows: the
probability and magnitude of harm or discomfort anticipated in the research
are not greater in and of themselves than those ordinarily encountered in
daily life or during the performance of routine physical or psychological
examinations or tests. Determination of the level of risk (e.g., physical,
emotional, social, legal) in relation to the prospect of direct benefit results in
three categories of research for IRB review for minors’ participation (Subpart D,
§§ 46.404a–46.406):
• Category 1: research that does not involve greater than minimal risk.2
• Category 2: research involving greater than minimal risk but presenting
the prospect of direct benefit to the research subject.
• Category 3: research involving greater than minimal risk and no prospect of direct benefit. In order to approve research in this category, an IRB must determine that the risk of the research represents no more than a minor increase over minimal risk; that the intervention or procedure presents experiences to the minor subjects that are reasonably similar to their actual (or expected) medical, dental, psychological, social, or educational situations; and that the intervention or procedure in the research is likely to yield generalizable knowledge about the subject's disorder or condition that is of vital importance.
A fourth category involves research that is not approvable under one of the categories above but that the IRB determines presents a reasonable opportunity to further the understanding, prevention, or alleviation of a serious problem affecting the health or welfare of children. In these cases, the secretary of HHS may approve the research after consultation with a panel of experts (Subpart D, § 46.407).
2 Perhaps the most common category in social and behavioral research.
Parental Permission and Minors’ Assent
As noted earlier, research with minors involves another unique requirement: In lieu of the informed consent obtained in research with adults, parental permission3 and the minor's assent are required (unless the conditions for waiving consent under Subpart A are met; see Chapter 4, this volume). Permission of one
parent is allowable for Categories 1 and 2 above, whereas permission of both
parents is required for Categories 3 and 4. Parental permission may be waived
in certain circumstances.4
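The parental-permission rule above is likewise a small lookup. The sketch below is a hypothetical helper under the same caveats as the category sketch, not a compliance tool; the waiver possibility noted in the text is reduced to a single flag.

def parents_required(category: int, permission_waived: bool = False) -> int:
    """Number of parents whose permission is sought under Subpart D (simplified).

    One parent suffices for Categories 1 and 2; both are required for
    Categories 3 and 4 unless an exception in sec. 46.408(b) applies (e.g.,
    one parent is deceased, unknown, incompetent, or not reasonably
    available). An IRB may also waive permission entirely, which the
    permission_waived flag stands in for here.
    """
    if permission_waived:
        return 0
    return 1 if category in (1, 2) else 2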
In keeping with the ethical principle of respect for persons, minors are
expected to be provided information about the purpose, procedures, risks, and
potential benefits (for themselves or others) of participation in research. Their
assent means affirmative agreement to participate in research; their failure to
object is not considered assent, as assent implies both understanding and voluntariness. Assent may be waived in special circumstances.5 The regulations
are silent about the age at which assent should be sought. An IRB must take
into account the “ages, maturity, and psychological state” (§ 46.408(a)) of
either individual children or the subject population as a whole, in relation
to the nature of the study, when reviewing the procedure for obtaining
assent. Generally speaking, because children are not held to the same standard for information comprehension as adolescents or adults, it has become customary for IRBs to seek assent from children older than 6 or 7 years (IOM, 2004; Roth-Cline & Nelson, 2013), or "school age" (Waligora et al., 2014). Although most adolescents are capable of understanding information about research similarly to adults (e.g., Bruzzese & Fisher, 2003), it may be more appropriate with children to focus on what the actual experience of participation would be like.

3 "Parent" in this chapter is inclusive of legal guardians.
4 An IRB may decide that a given research project involves conditions or a subject population (e.g., child abuse or neglect) for which parental permission is not reasonable, in which case a substitute mechanism would be chosen for protection of subjects (§ 46.408).
5 An IRB may determine that some or all of the minor subjects for a given study are not capable of providing assent. In addition, if an intervention or procedure in a study offers prospective direct benefit to minor subjects' health or well-being and is available only in the context of research, assent may be waived (§ 46.408).
Importantly, the growing science about adolescent development is pertinent
to assent to research participation. First, both social and behavioral science and
neuroscience regarding adolescent brain development have shown that changes
in the limbic system and prefrontal cortex carry implications for adolescent
decision making. The context, timing, and emotional significance of a decision
influence adolescents more than adults. Second, particularly during early to mid-adolescence (approximately ages 10–15 years), adolescents often
value immediate over long-term benefits and make different decisions in
the heat of the moment than they might otherwise make about the future
(Steinberg, 2012); “when faced with an immediate personal decision, adolescents will rely less on intellectual capabilities and more on feelings” (Casey
et al., 2008, p. 29). Third, social and behavioral science about decision making
has shown subtle differences between adolescents and adults in the nature of
the information they use. Briefly, adults tend to rely on “gist-based” decision
making—integrating new information about risks and benefits in relation to
experience and values—whereas adolescents engage in both gist-based and
verbatim processing for decisions. In addition, adolescents often overestimate
both risks and benefits, so perceived benefits can outweigh perceived risks
(Reyna & Farley, 2006).
Importantly, there is individual variability in the timing and behavioral
manifestations of these developmental changes, so much so that the likely
age range at which “adult levels of neurobiological maturity” are reached is
15 to 22 years (Steinberg, 2012, p. 76). Although these developmental differences have not been studied in relation to assent to research, early to mid-adolescence may well differ from late adolescence with respect to assent in some research contexts, particularly if the nature of the research is emotionally charged (e.g., a clinical trial for a serious condition or reproductive health research).
An IRB is able to determine whether and how assent is documented.
Scholars have recommended a flexible, “developmental” (e.g., Miller &
Nelson, 2006), “goodness-of-fit” (Masty & Fisher, 2008), or “personalized”
approach (Giesbertz et al., 2014). OHRP has suggested that written forms are
generally appropriate for most adolescents, and qualitative research suggests
that adolescents actually prefer signing a form for assent (Grady et al., 2014).
Fisher et al. (2013) discussed the benefits of oral assent for children and suggested that it may be less coercive because children are not accustomed to
signing forms. Arguably, an interactive assent process supports voluntariness better than a more formal written exchange. Importantly, assent
implies understanding of the risks and benefits of the research, as well as the
right to refuse participation. There is a growing body of literature about innovative means for ensuring children’s understanding for assent (e.g., picture
books—Pyle & Danniels, 2016; videos—O’Lonergan & Forster-Harwood, 2011;
modification of the MacArthur Competence Assessment Tool for Clinical
Research—Hein et al., 2014) and voluntariness (e.g., with or without parents
present—Annett et al., 2017; exposure to a Research Participants’ Bill of
Rights—Bruzzese & Fisher, 2003).
Mature and Emancipated Minors
Recall that the federal regulations define minors (or “children”) as those who
have not attained the legal age for consent. There are two important legal categories of adolescents who are afforded the right to provide consent for health
and mental health care without parental permission. Emancipated minors are
those who are financially independent, have married, or are parents themselves.
Mature minors are those who are deemed to be of sufficient intelligence and
maturity to understand and appreciate the consequences of a proposed treatment (often used in emergency situations). It is important for investigators to
know the legal age defining adulthood in their state. In addition, most states
have specific laws that grant rights to minors for decisions about certain medical
(usually reproductive health) and mental health care without parental consent;
see, for example, Center for Adolescent Health & the Law (2014), and for
policies governing confidentiality for adolescents in health care, see Center for
Adolescent Health & the Law (2005).
Unfortunately, Subpart D of the Common Rule is silent on these categories
of minors in the research context. However, OHRP guidance highlights that
the definition of “children” [in the Common Rule] . . . takes into account the
particular treatments or procedures involved in the proposed research; for
example, in some places individuals who are sixteen years of age may legally
consent to certain medical treatments, and so if the involvement of human subjects in a proposed research activity consists of these treatments, then they may
be considered as adults for that purpose. (HHS, n.d., How do the human subject
research regulations define “children”? section, para. 3)
Indeed, many experts advocate that minors who are afforded rights to
consent to health and mental health care without parental consent should
be allowed the right to consent to research in those same contexts (American
College of Obstetrics and Gynecology, 2016; American Psychological Association [APA], 2018; Fisher et al., 2013; IOM, 2004; Santelli et al., 2003).
Some scholars consider this right an ethical imperative in research contexts
that offer the advancement of understanding or prospective benefit for
vulnerable or stigmatized populations (e.g., Cheah & Parker, 2015). Conversely, some emphasize caution in expanding the mature minor doctrine
in light of the growing science on adolescent brain development and decision making (Partridge, 2013).
Privacy and Confidentiality
The Common Rule provisions for the privacy of participants and confidentiality
of data apply equally to minors. Additional federal regulations apply to protecting
minors’ information in research conducted in health and educational settings.
The Health Insurance Portability and Accountability Act of 1996 applies to
research that is conducted within a “covered entity” (e.g., hospital) or that
relies upon or creates “protected health information” (PHI; see HHS, 2018b).
The Family Educational Rights and Privacy Act of 1974 applies to research
conducted with data obtained from educational records or students’ “personally identifiable information” (see Department of Education, n.d.). Minors may
be at especially high “informational risk” (National Research Council, 2014)
because of their frequent use of social media, internet websites, cell phones,
smartwatches, and new emerging technologies, any of which can be platforms for research now and in the future (see Chapters 3 and 12, this volume).
Fisher et al. (2013) also warned that emerging technology may interfere with
deidentification and other data security. They recommended that original
parental permission and minors’ assent processes include assurance that
investigators who use the data in the future will be bound by best (including
emerging) practices in data security and confidentiality.
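Which privacy regimes attach to a given study is itself a conditional determination. The sketch below is a rough, hypothetical screening aid, not legal advice; the parameter names are invented, and actual applicability turns on the detailed regulatory definitions of terms such as "covered entity," "protected health information," and "education records."

from typing import List

def applicable_privacy_regimes(in_covered_entity: bool,
                               uses_phi: bool,
                               uses_education_records: bool) -> List[str]:
    """List federal privacy regimes that may apply, beyond the Common Rule's
    own privacy and confidentiality provisions (which always apply to human
    subjects research with minors)."""
    regimes = ["Common Rule (45 C.F.R. Part 46)"]
    if in_covered_entity or uses_phi:
        regimes.append("HIPAA Privacy Rule (protected health information)")
    if uses_education_records:
        regimes.append("FERPA (personally identifiable information in education records)")
    return regimes

# Example: a study abstracting both hospital records and school transcripts.
print(applicable_privacy_regimes(True, True, True))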
Fisher (2013) and her colleagues (Fisher et al., 2013) raised an additional
concern regarding confidentiality in the context of research with minors.
Information may be obtained during the course of a study that indicates risk
of harm to self or others (e.g., suicidal ideation, illegal activities, child abuse
or neglect). These authors argued that balancing subjects’ safety and confidentiality can be especially challenging in socially sensitive research and
recommended that future regulations
state that investigators conducting child and adolescent research are permitted,
but not required, to disclose confidential and identifiable information indicating
participant self-harming, violent or illegal behaviors to parents, school counselors,
health care providers or appropriate authorities when a problem has been discovered during the course of research that places the welfare of the participant or
others in jeopardy. (p. 9)
Clark et al. (2018) extended these ethical challenges to neuroimaging
studies that include parent and self-report instruments, in which “clinically
relevant” neurological anomalies may indicate risk for medical or mental
health conditions (see Chapter 11, this volume). They noted conflicts that
may arise among ethical principles and described four instances in which the
most ethical course of action seems to be overriding confidentiality: adolescent
disclosure of (a) child abuse or neglect, (b) life-threatening substance abuse, or (c) imminent harm to self or others; and (d) incidental discovery of neurological anomalies that warrant further evaluation.
RESEARCH FEATURES POSING ETHICAL CHALLENGES
There are research features, including incentives, methodologies, technology use, and socially sensitive research with special populations, that raise additional ethical challenges for research with minors.
Incentives
Payment for participation in research may include reimbursement for costs
incurred (e.g., transportation), compensation for time and effort, appreciation gifts (e.g., gift cards), and financial incentives to encourage participation
(Wendler et al., 2002). These various forms of payment are considered differently for minors than adults because of concern that they may be misunderstood or carry undue influence with children and adolescents (IOM, 2004) or
may encourage parents’ willingness to permit children’s participation when
they otherwise would not (Wendler et al., 2002).
The American Academy of Pediatrics (1995) policy on payment for research
encourages reimbursement for direct and indirect costs of participation to
children and families but specifically recommends that participants not be informed of any payment until after participation so as not to unduly influence either children
or parents. This policy has been challenged by several authors as not being
fully transparent about all aspects of research participation in advance of
permission or assent (Fernhoff, 2002), and IRBs appear to favor disclosure in
advance (Weise et al., 2002). The APA (2017a) Ethical Principles of Psychologists
and Code of Conduct expressly include incentives for research participation as
required information for informed consent (Standard 8.02, Informed Consent
to Research (a)(7)).
In a recent Australian study, Taplin and colleagues (2019) found that the
higher the payment, the higher the rate of participation of children and
adolescents. Yet they found that payment did not result in undue influence;
minors’ participation varied with the level of risk in the study. In a similar
study of adolescent research participants and their parents, Wiener and colleagues (2015) found that payments were more likely to influence healthy
volunteers than those with a health or mental health condition; they, too, did
not find evidence of undue coercion from payments. Research suggests that
younger children (age < 9 years) understand monetary incentives differently
than do older children and adolescents, consistent with developmental differences (Bagley et al., 2007).
Afkinich and Blachman-Demner (2020) recently reviewed the literature
regarding the provision of financial incentives to minors for participation in
research. They noted that studies have generally found payment to be either
helpful or neutral for recruitment of minors in research but have produced
less clear findings (and less investigation) regarding retention. These authors
identified a well-honed research agenda to more fully understand the role,
appropriateness, and meaning of financial incentives for minors’ participation
in research, including differences across developmental and socioeconomic levels and types of research.
Methodologies
Longitudinal research designs, “big data,” aggregation of data across systems,
and secondary analysis of large data sets are particularly important for understanding development, epigenetics, prevention, and the impact of educational
or clinical interventions across the lifespan. Federal regulations have introduced the notion of “broad consent” to keep pace with these emerging research
paradigms and technologies (HHS, 2018a). Briefly, broad consent (with specific
requirements) allows research participants to decide up-front whether their
samples or data may be used by future researchers—for unspecified research—
for either a specified or unlimited time period (Fisher & Layman, 2018). Early
research with parents suggests that they may be more conservative with broad
consent for genetic samples, out of concern about unknown future risks for
their children, than adults consenting for themselves (Burstein et al., 2014).
And, as was noted previously regarding the influence of incentives for research
participation, adolescents with medical conditions may differ from healthy
volunteers (and their parents) in their willingness to contribute biospecimens
(Kong et al., 2016). Genetic testing on stored samples for epigenetic and prevention science raises additional ethical concerns about revealing information
to participants who are minors or were minors at the time of research participation (Fisher & McCarthy, 2013).
Change From Minor to Adult Status During the Course of a Study
Longitudinal research in which participants are minors at the start of the study
but reach the legal age of maturity (18 years) during the project poses the question, Do participants who provided assent, and whose parents provided permission, need to provide informed consent (“reconsent”) upon reaching adult
status? This issue is complex when identifiable data (or biological samples)
have been obtained and stored for future (as yet unknown) research use. OHRP
guidance suggests that the answer is “yes.” Essentially, the informed consent
process is viewed as ongoing throughout the duration of research, and in these
circumstances, the regulations governing subjects’ participation change from
those governing minors to those governing adults:
Unless the Institutional Review Board (IRB) determines that the requirements
for obtaining informed consent can be waived, the investigators should seek and
obtain the legally effective informed consent . . . for the now-adult subject for
any ongoing interactions or interventions with the subjects. This is because the
prior parental permission and child assent are not equivalent to legally effective
informed consent for the now-adult subject. However, the IRB could approve a
waiver of informed consent . . . if . . . the required conditions are met.
Similarly, if the research does not involve any ongoing interactions or interventions with the subjects, but continues to meet the regulatory definition of
“human subjects research” (for example, it involves the continued analysis of
specimens or data for which the subject’s identity is readily identifiable to the
investigator(s)), then it would be necessary for the investigator(s) to seek and
obtain the legally effective informed consent of the now-adult subjects. (HHS, n.d.,
“What happens if a child reaches the legal age of consent while enrolled in a
study?”, paras. 2 and 3)
Many IRBs interpret this standard as requiring “reasonable” attempts to
reach the original minor participant to reconsent as an adult. This guidance has
still led to challenges for researchers; quite simply, it can be impracticable for
new research methodologies with completed longitudinal data sets. Berkman
et al. (2018) argued that the original parental permission should be sufficient
for ongoing research unless the research requires interaction with the now-adult subject or poses greater than minimal risk. Fisher et al. (2013) generally
agreed, with the caveat that linking archival data with new data collection
should require reconsent.
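Read as a rule, the OHRP guidance quoted above reduces to a short conditional. The sketch below is one interpretation offered for illustration only (the parameter names are invented), with the narrower standard proposed by Berkman et al. (2018) noted in a comment.

def must_seek_reconsent(ongoing_interaction_or_intervention: bool,
                        data_identifiable_to_investigators: bool,
                        irb_waived_consent: bool) -> bool:
    """Whether investigators should seek legally effective informed consent
    from a now-adult subject who enrolled as a minor (a simplified reading of
    the OHRP guidance discussed above)."""
    if irb_waived_consent:
        # An IRB waiver under Subpart A removes the requirement.
        return False
    # Reconsent is needed for ongoing interactions or interventions, or when
    # stored data or specimens remain readily identifiable to the investigators.
    # (Berkman et al., 2018, would instead require it only for ongoing
    # interaction or greater-than-minimal-risk research.)
    return ongoing_interaction_or_intervention or data_identifiable_to_investigators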
Goldenberg and colleagues (2009) conducted telephone interviews with
adult medical patients using hypothetical scenarios regarding use of biological
samples and data obtained when they were minors. A majority (67%) expressed
minimal or no concerns about the use of their samples and data, yet 46%
believed that they should be reconsented (and, of these, 75% would be willing
to consent). However, results also showed that those who were more concerned about the future use of data obtained when they were minors were more private about medical information, less trusting of medical researchers, and more likely to be African American than European American. This finding suggests that the issue of future use of research samples and data obtained when subjects are minors is complex. Fisher et al. (2013) emphasized that more research is needed with
adults who actually participated in such studies as minors.
Technology Use
There are unique ethical issues posed by recruitment, retention, or tracing of
minors and parents via technology (e.g., smartphones, social media, email,
listservs, blogs or forums, websites). Among the issues are verifying the age of
participants, ensuring assent and parental permission, protecting privacy and
confidentiality, undergoing IRB approval (with varied and unknown locations of subjects), and addressing disparities in access to technology. Hokke
et al. (2018) conducted a scoping review of studies published from 2006 through
2016 to explore the extent of these research methodologies with children and
families and noted that ethical considerations or professional and university
guidelines were rarely included. Bagot et al. (2018) noted unique ethical
issues regarding privacy with mHealth research using mobile or wearable
devices (e.g., GPS data) and social media (e.g., third-party apps). They recommended methods to minimize risk to privacy, such as uploading data via private
Wi-Fi networks or via a physical connection to a hard drive.
The Association of Internet Researchers (2012) and the Secretary’s Advisory
Committee on Human Research Protections (2013) have both issued thoughtful
recommendations on this topic, though not specific to minors. Additionally,
the University of California San Diego (n.d.), with support from the Robert
Wood Johnson Foundation, launched the Connected and Open Research Ethics
Project to allow sharing of concerns and best practices in this area. Finally, it is
important to acknowledge that advancing medical technologies pose unique
ethical issues for study with minors (e.g., neuroimaging, Clark et al., 2018;
neuromodulation, Gardner, 2017). (See Chapter 12, this volume, for more
in-depth exploration of ethical issues related to technology.)
Socially Sensitive Research and Special Populations
Socially sensitive research is research that carries a potential social impact for
study participants or the groups represented in the study population; such
research is critical for reducing disparities. Investigators wishing to conduct
socially sensitive research with minors should review the scholarship related
to specific ethical issues in that area (e.g., Sieber & Stanley, 1988). Researchers
should strive to conduct culturally sensitive research with minors, including
those from underrepresented populations. Critical issues have been reviewed
by Trimble and Fisher (2005) and are addressed in APA’s (2017b) multicultural
guidelines and APA’s (2019) race and ethnicity guidelines.
Much has been written on sexual health research with minors (e.g., Caskey
& Rosenthal, 2005; Flicker & Guta, 2008), including research with sexual and
gender minority youth (e.g., Fisher & Mustanski, 2014; Gray et al., 2020;
Matson et al., 2019; Mustanski, 2011). Brody and Waldron (2000) and the
National Advisory Council on Drug Abuse (2005) provided recommendations
regarding the predominant ethical issues in research on substance abuse with
minors. There is seminal work on psychological research with minors who
have been diagnosed with mental health conditions (Drotar, 2008; Hoagwood
et al., 1996), but this area warrants updating. Scholarship regarding research
with minors facing family violence and child maltreatment has been updated
in a comprehensive review by Guttmann and colleagues (2019).
Many authors have discussed the ethical issues involved in research with
minors who have serious medical conditions. (Indeed, much of the scholarship
regarding human research participant issues with minors has included this
population, in part because of the critical importance of clinical trials for therapeutic advances in medicine.) Particularly pertinent for this special population
is the “therapeutic misconception,” or the belief that participation in research
may yield direct personal health or mental health benefits. Originally described
with adults enrolled in psychiatric research (Appelbaum et al., 1987), this
concept has subsequently been distinguished from “therapeutic optimism,” or
hope for benefit (Horng & Grady, 2003).
Grady and colleagues (2016) conducted interviews with adolescents (ages
13–17 years) with medical conditions, healthy volunteers, and their parents—
all enrolled in research—regarding the purpose of research and its difference
from regular medical care. The majority of responses reflected the understanding that the goal of research is producing knowledge and possibly helping
others. The authors cautioned that the therapeutic misconception has often
been found when querying subjects about a study in which they are actively
enrolled, as opposed to Grady et al.’s method of asking about research more
generally. Miller and Feudtner (2016) interviewed children and adolescents
(ages 8–17 years) with and without chronic medical conditions and their
parents, all actively enrolled in either observational or intervention studies.
They found that 80% of parents and 74% of minors enrolled in intervention
studies expected direct health benefits from participation. Fewer parents
(5%) and participants (21%) anticipated direct health benefits (“collateral
benefits”) from observational (vs. intervention) studies, and children anticipating
direct health benefits were generally younger (mean age of 12 years). Importantly, Miller and Feudtner also pointed out that significant numbers of both
parents and minors anticipated possible future health benefits from participation, which are not part of routine informed consent, permission, and assent
discussions. Other authors have noted that medically ill minors and their
parents and caregivers identified benefit (rather than burden) from participation
in psychological (rather than just medical) research (Wiener et al., 2015).
Some minors with life-threatening medical conditions participate in research
regarding palliative and end-of-life care. A recent systematic review (Weaver
et al., 2019) found a dearth of studies examining perceived benefits and
burdens on the part of study participants. Clearly, more work is needed to
understand misconceptions among minors and their parents regarding potential
benefits from research participation; these misconceptions are particularly
emotionally salient in the context of serious illness.
CONCLUSION
The field has gained a rich understanding of the ethical issues involved in
research with minors in recent decades, and yet there remain some important
areas that require further study to better advise researchers, IRBs,
regulators, and minor participants. More careful study is needed regarding
developmental differences within adolescence, distinctions between oral and
written assent processes, emerging technologies and innovative research platforms, incentives, the future use of data and samples, and special populations.
Developmental and psychological science has much to offer in providing further
clarity in these areas.
REFERENCES
Afkinich, J. L., & Blachman-Demner, D. R. (2020). Providing incentives to youth participants in research: A literature review. Journal of Empirical Research on Human
Research Ethics, 15(3), 202–215. https://doi.org/10.1177/1556264619892707
American Academy of Pediatrics, Committee on Drugs. (1995). Guidelines for the
ethical conduct of studies to evaluate drugs in pediatric populations. Pediatrics, 95(2),
286–294.
American College of Obstetrics and Gynecology. (2016). Committee Opinion No. 665:
Guidelines for adolescent health research. Obstetrics and Gynecology, 127(6), e183–e186.
https://doi.org/10.1097/AOG.0000000000001486
American Psychological Association. (2017a). Ethical principles of psychologists and code of
conduct (2002, amended effective June 1, 2010, and January 1, 2017). http://www.
apa.org/ethics/code/ethics-code-2017.pdf
American Psychological Association. (2017b). Multicultural guidelines: An ecological approach
to context, identity, and intersectionality. https://www.apa.org/about/policy/multicultural-guidelines.pdf
American Psychological Association. (2018). APA resolution on support for the expansion
of mature minors’ ability to participate in research. https://www.apa.org/about/policy/
resolution-minors-research.pdf
American Psychological Association. (2019). APA guidelines on race and ethnicity in
psychology: Promoting responsiveness and equity. https://www.apa.org/about/policy/
guidelines-race-ethnicity.pdf
Annett, R. D., Brody, J. L., Scherer, D. G., Turner, C. W., Dalen, J., & Raissy, H. (2017).
A randomized study of a method for optimizing adolescent assent to biomedical
research. AJOB Empirical Bioethics, 8(3), 189–197. https://doi.org/10.1080/23294515.
2016.1251507
Appelbaum, P. S., Roth, L. H., Lidz, C. W., Benson, P., & Winslade, W. (1987). False
hopes and best data: Consent to research and the therapeutic misconception. The
Hastings Center Report, 17(2), 20–24. https://doi.org/10.2307/3562038
Association of Internet Researchers. (2012). Ethical decision-making and Internet research:
Recommendations from the AoIR Ethics Working Committee (Version 2.0). http://aoir.org/
reports/ethics2.pdf
Bagley, S. J., Reynolds, W. W., & Nelson, R. M. (2007). Is a “wage-payment” model for
research participation appropriate for children? Pediatrics, 119(1), 46–51. https://
doi.org/10.1542/peds.2006-1813
Bagot, K. S., Matthews, S. A., Mason, M., Squeglia, L. M., Fowler, J., Gray, K., Herting, M.,
May, A., Colrain, I., Godino, J., Tapert, S., Brown, S., & Patrick, K. (2018). Current,
future and potential use of mobile and wearable technologies and social media data
in the ABCD study to increase understanding of contributors to child health. Developmental Cognitive Neuroscience, 32, 121–129. https://doi.org/10.1016/j.dcn.2018.03.008
Berkman, B. E., Howard, D., & Wendler, D. (2018). Reconsidering the need for reconsent
at 18. Pediatrics, 142(2), Article e20171202. https://doi.org/10.1542/peds.2017-1202
Brody, J. L., & Waldron, H. B. (2000). Ethical issues in research on the treatment of
adolescent substance abuse disorders. Addictive Behaviors, 25(2), 217–228. https://
doi.org/10.1016/S0306-4603(99)00041-6
Bruzzese, J. M., & Fisher, C. B. (2003). Assessing and enhancing the research consent
capacity of children and youth. Applied Developmental Science, 7(1), 13–26. https://
doi.org/10.1207/S1532480XADS0701_2
Burstein, M. D., Robinson, J. O., Hilsenbeck, S. G., McGuire, A. L., & Lau, C. C. (2014).
Pediatric data sharing in genomic research: Attitudes and preferences of parents.
Pediatrics, 133(4), 690–697. https://doi.org/10.1542/peds.2013-1592
Casey, B. J., Jones, R. M., & Hare, T. A. (2008). The adolescent brain. Annals of the New
York Academy of Sciences, 1124(1), 111–126. https://doi.org/10.1196/annals.1440.010
Caskey, J. D., & Rosenthal, S. L. (2005). Conducting research on sensitive topics with
adolescents: Ethical and developmental considerations. Journal of Developmental and
Behavioral Pediatrics, 26(1), 61–67.
Center for Adolescent Health & the Law. (2005). Policy compendium on confidential health
services for adolescents (2nd ed.). http://www.cahl.org/policy-compendium-2nd-2005/
Center for Adolescent Health & the Law. (2014). State minor consent laws: A summary
(3rd ed.). http://www.cahl.org/state-minor-consent-laws-a-summary-third-edition/
Cheah, P. Y., & Parker, M. (2015). Research consent from young people in resource-poor
settings. Archives of Disease in Childhood, 100(5), 438–440. https://doi.org/10.1136/
archdischild-2014-307121
Clark, D. B., Fisher, C. B., Bookheimer, S., Brown, S. A., Evans, J. H., Hopfer, C.,
Hudziak, J., Montoya, I., Murray, M., Pfefferbaum, A., & Yurgelun-Todd, D. (2018).
Biomedical ethics and clinical oversight in multisite observational neuroimaging
studies with children and adolescents: The ABCD experience. Developmental Cognitive Neuroscience, 32, 143–154. https://doi.org/10.1016/j.dcn.2017.06.005
Drotar, D. (2008). Editorial: Thoughts on establishing research significance and preserving integrity. Journal of Pediatric Psychology, 33(1), 1–5. https://doi.org/10.1093/
jpepsy/jsm092
Family Educational Rights and Privacy Act of 1974, 20 U.S.C. § 1232g; 34 C.F.R.
Part 99 (1974). https://www.govinfo.gov/content/pkg/USCODE-2011-title20/pdf/
USCODE-2011-title20-chap31-subchapIII-part4-sec1232g.pdf
Fernhoff, P. M. (2002). Paying for children to participate in research: A slippery slope or
an enlightened stairway? The Journal of Pediatrics, 141(2), 153–154. https://doi.org/
10.1067/mpd.2002.126454
Fisher, C. B. (2013). Confidentiality and disclosure in non-intervention adolescent
risk research. Applied Developmental Science, 17(2), 88–93. https://doi.org/10.1080/
10888691.2013.775915
Fisher, C. B., Brunnquell, D. J., Hughes, D. L., Liben, L. S., Maholmes, V., Plattner, S.,
Russell, S. T., & Susman, E. J. (2013). Preserving and enhancing the responsible
conduct of research involving children and youth: A response to proposed changes
in federal regulations. Social Policy Report, 27(1), 1–15. https://doi.org/10.1002/
j.2379-3988.2013.tb00074.x
Fisher, C. B., & Layman, D. M. (2018). Genomics, big data, and broad consent: A new
ethics frontier for prevention science. Prevention Science, 19(7), 871–879. https://
doi.org/10.1007/s11121-018-0944-z
Fisher, C. B., & McCarthy, E. L. H. (2013). Ethics in prevention science involving genetic
testing. Prevention Science, 14(3), 310–318. https://doi.org/10.1007/s11121-012-0318-x
Fisher, C. B., & Mustanski, B. (2014). Reducing health disparities and enhancing the
responsible conduct of research involving LGBT youth. The Hastings Center Report,
44(Suppl. 4), S28–S31. https://doi.org/10.1002/hast.367
Flicker, S., & Guta, A. (2008). Ethical approaches to adolescent participation in sexual
health research. The Journal of Adolescent Health, 42(1), 3–10. https://doi.org/
10.1016/j.jadohealth.2007.07.017
Gardner, J. (2017). Securing a future for responsible neuromodulation in children: The
importance of maintaining a broad clinical gaze. European Journal of Paediatric
Neurology, 21(1), 49–55. https://doi.org/10.1016/j.ejpn.2016.04.019
Giesbertz, N. A., Bredenoord, A. L., & van Delden, J. J. (2014). Clarifying assent in pediatric research. European Journal of Human Genetics, 22(2), 266–269. https://doi.org/
10.1038/ejhg.2013.119
Goldenberg, A. J., Hull, S. C., Botkin, J. R., & Wilfond, B. S. (2009). Pediatric biobanks:
Approaching informed consent for continuing research after children grow up. The
Journal of Pediatrics, 155(4), 578–583. https://doi.org/10.1016/j.jpeds.2009.04.034
Grady, C., Nogues, I., Wiener, L., Wilfond, B. S., & Wendler, D. (2016). Adolescent
research participants’ descriptions of medical research. AJOB Empirical Bioethics,
7(1), 1–7. https://doi.org/10.1080/23294515.2015.1017059
Grady, C., Wiener, L., Abdoler, E., Trauernicht, E., Zadeh, S., Diekema, D. S., Wilfond, B. S.,
& Wendler, D. (2014). Assent in research: The voices of adolescents. The Journal of
Adolescent Health, 54(5), 515–520. https://doi.org/10.1016/j.jadohealth.2014.02.005
Gray, A., Macapagal, K., Mustanski, B., & Fisher, C. B. (2020). Surveillance studies
involving HIV testing are needed: Will at-risk youth participate? Health Psychology,
39(1), 21–28. https://doi.org/10.1037/hea0000804
Guttmann, K., Shouldice, M., & Levin, A. V. (2019). Ethical issues in child abuse research.
Springer Nature. https://doi.org/10.1007/978-3-319-94586-6
Health Insurance Portability and Accountability Act of 1996, Pub. L. No. 104-191, 110
Stat. 1936 (1996). https://www.govinfo.gov/content/pkg/PLAW-104publ191/pdf/
PLAW-104publ191.pdf
Hein, I. M., Troost, P. W., Lindeboom, R., Benninga, M. A., Zwaan, C. M., van
Goudoever, J. B., & Lindauer, R. J. L. (2014). Accuracy of the MacArthur Competence Assessment Tool for Clinical Research (MacCAT-CR) for measuring children’s
competence to consent to clinical research. JAMA Pediatrics, 168(12), 1147–1153.
https://doi.org/10.1001/jamapediatrics.2014.1694
Hoagwood, K., Jensen, R., & Fisher, C. B. (Eds.). (1996). Ethical issues in mental health
research with children and adolescents: A casebook and guide. Erlbaum.
Hokke, S., Hackworth, N. J., Quin, N., Bennetts, S. K., Win, H. Y., Nicholson, J. M.,
Zion, L., Lucke, J., Keyzer, P., & Crawford, S. B. (2018). Ethical issues in using the
Internet to engage participants in family and child research: A scoping review. PLOS
ONE, 13(9), Article e0204572. https://doi.org/10.1371/journal.pone.0204572
Horng, S., & Grady, C. (2003). Misunderstanding in clinical research: Distinguishing
therapeutic misconception, therapeutic misestimation, and therapeutic optimism.
IRB: Ethics & Human Research, 25(1), 11–16. https://doi.org/10.2307/3564408
Hurley, J. C., & Underwood, M. K. (2002). Children’s understanding of their research
rights before and after debriefing: Informed assent, confidentiality, and stopping
participation. Child Development, 73(1), 132–143. https://doi.org/10.1111/1467-8624.00396
Institute of Medicine. (2004). Ethical conduct of clinical research involving children. The
National Academies Press. https://doi.org/10.17226/10958
Kong, C. C., Tarling, T. E., Strahlendorf, C., Dittrick, M., & Vercauteren, S. M. (2016).
Opinions of adolescents and parents about pediatric biobanking. The Journal of Adolescent Health, 58(4), 474–480. https://doi.org/10.1016/j.jadohealth.2015.12.015
Kuther, T. L., & Posada, M. (2004). Children and adolescents’ capacity to provide
informed consent for participation in research. Advances in Psychology Research, 32,
163–173.
Masty, J., & Fisher, C. B. (2008). A goodness-of-fit approach to parent permission and
child assent pediatric intervention research. Ethics & Behavior, 18(2–3), 139–160.
https://doi.org/10.1080/10508420802063897
Matson, M., Macapagal, K., Kraus, A., Coventry, R., Bettin, E., Fisher, C. B., &
Mustanski, B. (2019). Sexual and gender minority youth’s perspectives on sharing
de-identified data in sexual health and HIV prevention research. Sexuality Research
and Social Policy, 16(1), 1–11. https://doi.org/10.1007/s13178-018-0372-7
Miller, V. A., Drotar, D., & Kodish, E. (2004). Children’s competence for assent and
consent: A review of empirical findings. Ethics & Behavior, 14(3), 255–295. https://
doi.org/10.1207/s15327019eb1403_3
Miller, V. A., & Feudtner, C. (2016). Parent and child perceptions of the benefits of
research participation. IRB: Ethics & Human Research, 38(4), 1–7.
Miller, V. A., & Nelson, R. M. (2006). A developmental approach to child assent for
nontherapeutic research. The Journal of Pediatrics, 149(1, Suppl.), S25–S30. https://
doi.org/10.1016/j.jpeds.2006.04.047
Mustanski, B. (2011). Ethical and regulatory issues with conducting sexuality research
with LGBT adolescents: A call to action for a scientifically informed approach.
Archives of Sexual Behavior, 40(4), 673–686. https://doi.org/10.1007/s10508-011-9745-1
National Advisory Council on Drug Abuse. (2005). NACDA guidelines for substance abuse
research involving children and adolescents. https://archives.drugabuse.gov/sites/default/
files/nacdaguidelines.pdf
National Research Council. (2014). Proposed revisions to the Common Rule for the protection
of human subjects in the behavioral and social sciences. The National Academies Press.
https://doi.org/10.17226/18614
O’Lonergan, T. A., & Forster-Harwood, J. E. (2011). Novel approach to parental permission and child assent for research: Improving comprehension. Pediatrics, 127(5),
917–924. https://doi.org/10.1542/peds.2010-3283
Partridge, T. A. (2013). The mdx mouse model as a surrogate for Duchenne muscular
dystrophy. The FEBS Journal, 280(17), 4177–4186. https://doi.org/10.1111/febs.12267
Pyle, A., & Danniels, E. (2016). A continuum of play-based learning: The role of the
teacher in play-based pedagogy and the fear of hijacking play. Early Education and
Development, 28(3), 274–289. https://doi.org/10.1080/10409289.2016.1220771
Reyna, V. F., & Farley, F. (2006). Risk and rationality in adolescent decision making:
Implications for theory, practice, and public policy. Psychological Science in the Public
Interest, 7(1), 1–44. https://doi.org/10.1111/j.1529-1006.2006.00026.x
Rossi, W. C., Reynolds, W., & Nelson, R. M. (2003). Child assent and parental permission in pediatric research. Theoretical Medicine and Bioethics, 24(2), 131–148. https://
doi.org/10.1023/A:1024690712019
Roth-Cline, M., & Nelson, R. M. (2013). Parental permission and child assent in research
on children. The Yale Journal of Biology and Medicine, 86(3), 291–301.
Santelli, J. S., Rogers, A. S., Rosenfeld, W. D., DuRant, R. H., Dubler, N., Morreale, M.,
English, A., Lyss, S., Wimberly, Y., Schissel, A., & the Society for Adolescent Medicine. (2003). Guidelines for adolescent health research: A position paper of the
Society for Adolescent Medicine. The Journal of Adolescent Health, 33(5), 396–409.
https://doi.org/10.1016/j.jadohealth.2003.06.009
Secretary’s Advisory Committee on Human Research Protections. (2013). January 10,
2013 SACHRP letter to the HHS secretary. https://www.hhs.gov/ohrp/sachrp-committee/
recommendations/2013-january-10-letter/index.html
Sieber, J. E., & Stanley, B. (1988). Ethical and professional dimensions of socially sensitive research. The American Psychologist, 43(1), 49–55. https://doi.org/10.1037/0003-066X.43.1.49
Steinberg, L. (2012). Should the science of adolescent brain development inform public
policy? Issues in Science and Technology, 28(3), 67–78.
Taplin, S., Chalmers, J., Hoban, B., McArthur, M., Moore, T., & Graham, A. (2019).
Children in social research: Do higher payments encourage participation in riskier
studies? Journal of Empirical Research on Human Research Ethics, 14(2), 126–140. https://
doi.org/10.1177/1556264619826796
Trimble, J. E., & Fisher, C. B. (2005). The handbook of ethical research with ethnocultural
populations & communities. Sage Publications. https://doi.org/10.4135/9781412986168
University of California San Diego. (n.d.). Connected and Open Research Ethics (CORE).
https://ethics.ucsd.edu/related-programs/CORE.html
U.S. Department of Education. (n.d.). Protecting student privacy. https://studentprivacy.
ed.gov/audience/researchers
U.S. Department of Health and Human Services. (n.d.). Research with children FAQs.
https://www.hhs.gov/ohrp/regulations-and-policy/guidance/faq/children-research/
index.html
U.S. Department of Health and Human Services. (2018a). Protection of Human Subjects,
45 C.F.R. Part 46, Subpart A. https://www.hhs.gov/ohrp/regulations-and-policy/
regulations/45-cfr-46/index.html
U.S. Department of Health and Human Services. (2018b). Research. https://www.hhs.
gov/hipaa/for-professionals/special-topics/research/index.html
Waligora, M., Dranseika, V., & Piasecki, J. (2014). Child’s assent in research: Age threshold
or personalisation? BMC Medical Ethics, 15, Article 44. https://doi.org/10.1186/
1472-6939-15-44
Weaver, M. S., Mooney-Doyle, K., Kelly, K. P., Montgomery, K., Newman, A. R.,
Fortney, C. A., Bell, C. J., Spruit, J. L., Kurtz Uveges, M., Wiener, L., Schmidt, C. M.,
Madrigal, V. N., & Hinds, P. S. (2019). The benefits and burdens of pediatric palliative
care and end-of-life research: A systematic review. Journal of Palliative Medicine,
22(8), 915–926. https://doi.org/10.1089/jpm.2018.0483
Wiener, L., Battles, H., Zadeh, S., & Pao, M. (2015). Is participating in psychological
research a benefit, burden, or both for medically ill youth and their caregivers? IRB:
Ethics & Human Research, 37(6), 1–8.
Weise, K. L., Smith, M. L., Maschke, K. J., & Copeland, H. L. (2002). National practices
regarding payment to research subjects for participating in pediatric research.
Pediatrics, 110(3), 577–582. https://doi.org/10.1542/peds.110.3.577
Wendler, D., Rackoff, J. E., Emanuel, E. J., & Grady, C. (2002). The ethics of paying for
children’s participation in research. The Journal of Pediatrics, 141(2), 166–171. https://
doi.org/10.1067/mpd.2002.124381
APPENDIX A
The Belmont Report
THE BELMONT REPORT
Office of the Secretary
Ethical Principles and Guidelines for the Protection of Human
Subjects of Research
The National Commission for the Protection of Human Subjects of
Biomedical and Behavioral Research
April 18, 1979
AGENCY: Department of Health, Education, and Welfare.
ACTION: Notice of Report for Public Comment.
SUMMARY: On July 12, 1974, the National Research Act (Pub. L. 93-348)
was signed into law, thereby creating the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. One of
the charges to the Commission was to identify the basic ethical principles
that should underlie the conduct of biomedical and behavioral research
involving human subjects and to develop guidelines which should be followed
to assure that such research is conducted in accordance with those principles.
In carrying out the above, the Commission was directed to consider:
(i) the boundaries between biomedical and behavioral research and the
accepted and routine practice of medicine, (ii) the role of assessment of
risk-benefit criteria in the determination of the appropriateness of research
involving human subjects, (iii) appropriate guidelines for the selection of
The content of this appendix is reprinted from a U.S. government report. Reprinted
from "The Belmont Report: Ethical Principles and Guidelines for the Protection of
Human Subjects of Research," by National Commission for the Protection of Human
Subjects of Biomedical and Behavioral Research, 1979 (https://www.hhs.gov/ohrp/
sites/default/files/the-belmont-report-508c_FINAL.pdf). In the public domain.
human subjects for participation in such research, and (iv) the nature and
definition of informed consent in various research settings.
The Belmont Report attempts to summarize the basic ethical principles identified by the Commission in the course of its deliberations. It is the outgrowth of
an intensive four-day period of discussions that were held in February 1976 at
the Smithsonian Institution’s Belmont Conference Center supplemented by
the monthly deliberations of the Commission that were held over a period of
nearly four years. It is a statement of basic ethical principles and guidelines that
should assist in resolving the ethical problems that surround the conduct of
research with human subjects. By publishing the Report in the Federal Register, and providing reprints upon request, the Secretary intends that it may be
made readily available to scientists, members of Institutional Review Boards,
and Federal employees. The two-volume Appendix, containing the lengthy
reports of experts and specialists who assisted the Commission in fulfilling this
part of its charge, is available as DHEW Publication No. (OS) 78-0013 and No.
(OS) 78-0014, for sale by the Superintendent of Documents, U.S. Government
Printing Office, Washington, D.C. 20402.
Unlike most other reports of the Commission, the Belmont Report does not
make specific recommendations for administrative action by the Secretary of
Health, Education, and Welfare. Rather, the Commission recommended that the
Belmont Report be adopted in its entirety, as a statement of the Department’s
policy. The Department requests public comment on this recommendation.
National Commission for the Protection of Human Subjects of
Biomedical and Behavioral Research
Members of the Commission
Kenneth John Ryan, M.D., Chairman, Chief of Staff, Boston Hospital for Women.
Joseph V. Brady, Ph.D., Professor of Behavioral Biology, Johns Hopkins University.
Robert E. Cooke, M.D., President, Medical College of Pennsylvania.
Dorothy I. Height, President, National Council of Negro Women, Inc.
Albert R. Jonsen, Ph.D., Associate Professor of Bioethics, University of California at
San Francisco.
Patricia King, J.D., Associate Professor of Law, Georgetown University Law Center.
Karen Lebacqz, Ph.D., Associate Professor of Christian Ethics, Pacific School of Religion.
*** David W. Louisell, J.D., Professor of Law, University of California at Berkeley.
Donald W. Seldin, M.D., Professor and Chairman, Department of Internal Medicine,
University of Texas at Dallas.
*** Eliot Stellar, Ph.D., Provost of the University and Professor of Physiological Psychology,
University of Pennsylvania.
*** Robert H. Turtle, LL.B., Attorney, VomBaur, Coburn, Simmons & Turtle,
Washington, D.C.
*** Deceased.
Table of Contents
Ethical Principles and Guidelines for Research Involving Human Subjects
A. Boundaries Between Practice and Research
B. Basic Ethical Principles
1. Respect for Persons
2. Beneficence
3. Justice
C. Applications
1. Informed Consent
2. Assessment of Risk and Benefits
3. Selection of Subjects
Ethical Principles & Guidelines for Research Involving
Human Subjects
Scientific research has produced substantial social benefits. It has also posed
some troubling ethical questions. Public attention was drawn to these questions by reported abuses of human subjects in biomedical experiments, especially during the Second World War. During the Nuremberg War Crime
Trials, the Nuremberg code was drafted as a set of standards for judging
physicians and scientists who had conducted biomedical experiments on
concentration camp prisoners. This code became the prototype of many later
codes1 intended to assure that research involving human subjects would be
carried out in an ethical manner.
The codes consist of rules, some general, others specific, that guide the
investigators or the reviewers of research in their work. Such rules often are
inadequate to cover complex situations; at times they come into conflict, and
they are frequently difficult to interpret or apply. Broader ethical principles
will provide a basis on which specific rules may be formulated, criticized and
interpreted.
Three principles, or general prescriptive judgments, that are relevant to
research involving human subjects are identified in this statement. Other
principles may also be relevant. These three are comprehensive, however,
and are stated at a level of generalization that should assist scientists, subjects,
reviewers and interested citizens to understand the ethical issues inherent
in research involving human subjects. These principles cannot always be applied so as to resolve beyond dispute particular ethical problems. The objective is to provide an analytical framework that will guide the resolution of ethical problems arising from research involving human subjects.
This statement consists of a distinction between research and practice, a discussion of the three basic ethical principles, and remarks about the application of these principles.

1 Since 1945, various codes for the proper and responsible conduct of human experimentation in medical research have been adopted by different organizations. The best known of these codes are the Nuremberg Code of 1947, the Helsinki Declaration of 1964 (revised in 1975), and the 1971 Guidelines (codified into Federal Regulations in 1974) issued by the U.S. Department of Health, Education, and Welfare. Codes for the conduct of social and behavioral research have also been adopted, the best known being that of the American Psychological Association, published in 1973.
Part A: Boundaries Between Practice & Research
A. Boundaries Between Practice and Research
It is important to distinguish between biomedical and behavioral research, on
the one hand, and the practice of accepted therapy on the other, in order to
know what activities ought to undergo review for the protection of human
subjects of research. The distinction between research and practice is blurred
partly because both often occur together (as in research designed to evaluate
a therapy) and partly because notable departures from standard practice are
often called “experimental” when the terms “experimental” and “research”
are not carefully defined.
For the most part, the term “practice” refers to interventions that are designed
solely to enhance the well-being of an individual patient or client and that
have a reasonable expectation of success. The purpose of medical or behavioral practice is to provide diagnosis, preventive treatment or therapy to
particular individuals2. By contrast, the term “research” designates an activity
designed to test an hypothesis, permit conclusions to be drawn, and thereby
to develop or contribute to generalizable knowledge (expressed, for example,
in theories, principles, and statements of relationships). Research is usually
described in a formal protocol that sets forth an objective and a set of procedures designed to reach that objective.
When a clinician departs in a significant way from standard or accepted
practice, the innovation does not, in and of itself, constitute research. The fact
that a procedure is “experimental,” in the sense of new, untested or different,
does not automatically place it in the category of research. Radically new
procedures of this description should, however, be made the object of formal research at an early stage in order to determine whether they are safe and effective. Thus, it is the responsibility of medical practice committees, for example, to insist that a major innovation be incorporated into a formal research project.3

2 Although practice usually involves interventions designed solely to enhance the well-being of a particular individual, interventions are sometimes applied to one individual for the enhancement of the well-being of another (e.g., blood donation, skin grafts, organ transplants) or an intervention may have the dual purpose of enhancing the well-being of a particular individual, and, at the same time, providing some benefit to others (e.g., vaccination, which protects both the person who is vaccinated and society generally). The fact that some forms of practice have elements other than immediate benefit to the individual receiving an intervention, however, should not confuse the general distinction between research and practice. Even when a procedure applied in practice may benefit some other person, it remains an intervention designed to enhance the well-being of a particular individual or groups of individuals; thus, it is practice and need not be reviewed as research.
3 Because the problems related to social experimentation may differ substantially from those of biomedical and behavioral research, the Commission specifically declines to make any policy determination regarding such research at this time. Rather, the Commission believes that the problem ought to be addressed by one of its successor bodies.
Research and practice may be carried on together when research is designed
to evaluate the safety and efficacy of a therapy. This need not cause any
confusion regarding whether or not the activity requires review; the general
rule is that if there is any element of research in an activity, that activity
should undergo review for the protection of human subjects.
Part B: Basic Ethical Principles
B. Basic Ethical Principles
The expression “basic ethical principles” refers to those general judgments
that serve as a basic justification for the many particular ethical prescriptions
and evaluations of human actions. Three basic principles, among those generally accepted in our cultural tradition, are particularly relevant to the ethics
of research involving human subjects: the principles of respect of persons,
beneficence and justice.
1. Respect for Persons.—Respect for persons incorporates at least two ethical
convictions: first, that individuals should be treated as autonomous agents,
and second, that persons with diminished autonomy are entitled to protection. The principle of respect for persons thus divides into two separate
moral requirements: the requirement to acknowledge autonomy and the
requirement to protect those with diminished autonomy.
An autonomous person is an individual capable of deliberation about personal goals and of acting under the direction of such deliberation. To respect
autonomy is to give weight to autonomous persons’ considered opinions and
choices while refraining from obstructing their actions unless they are clearly
detrimental to others. To show lack of respect for an autonomous agent is to
repudiate that person’s considered judgments, to deny an individual the freedom to act on those considered judgments, or to withhold information
necessary to make a considered judgment, when there are no compelling
reasons to do so.
However, not every human being is capable of self-determination. The
capacity for self-determination matures during an individual’s life, and
some individuals lose this capacity wholly or in part because of illness,
mental disability, or circumstances that severely restrict liberty. Respect
for the immature and the incapacitated may require protecting them as
they mature or while they are incapacitated.
Some persons are in need of extensive protection, even to the point of
excluding them from activities which may harm them; other persons
require little protection beyond making sure they undertake activities
freely and with awareness of possible adverse consequence. The extent of
protection afforded should depend upon the risk of harm and the likelihood of benefit. The judgment that any individual lacks autonomy should
be periodically reevaluated and will vary in different situations.
In most cases of research involving human subjects, respect for persons demands that subjects enter into the research voluntarily and with
adequate information. In some situations, however, application of the
principle is not obvious. The involvement of prisoners as subjects of
research provides an instructive example. On the one hand, it would
seem that the principle of respect for persons requires that prisoners not be
deprived of the opportunity to volunteer for research. On the other
hand, under prison conditions they may be subtly coerced or unduly
influenced to engage in research activities for which they would not
otherwise volunteer. Respect for persons would then dictate that prisoners be protected. Whether to allow prisoners to “volunteer” or to “protect” them presents a dilemma. Respecting persons, in most hard cases,
is often a matter of balancing competing claims urged by the principle
of respect itself.
2. Beneficence.—Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making
efforts to secure their well-being. Such treatment falls under the principle
of beneficence. The term “beneficence” is often understood to cover acts of
kindness or charity that go beyond strict obligation. In this document,
beneficence is understood in a stronger sense, as an obligation. Two general
rules have been formulated as complementary expressions of beneficent
actions in this sense: (1) do not harm and (2) maximize possible benefits and
minimize possible harms.
The Hippocratic maxim “do no harm” has long been a fundamental
principle of medical ethics. Claude Bernard extended it to the realm of
research, saying that one should not injure one person regardless of the
benefits that might come to others. However, even avoiding harm requires
learning what is harmful; and, in the process of obtaining this information,
persons may be exposed to risk of harm. Further, the Hippocratic Oath
requires physicians to benefit their patients “according to their best judgment.” Learning what will in fact benefit may require exposing persons to
risk. The problem posed by these imperatives is to decide when it is justifiable to seek certain benefits despite the risks involved, and when the benefits should be foregone because of the risks.
The obligations of beneficence affect both individual investigators and
society at large, because they extend both to particular research projects
and to the entire enterprise of research. In the case of particular projects,
investigators and members of their institutions are obliged to give forethought to the maximization of benefits and the reduction of risk that
might occur from the research investigation. In the case of scientific
research in general, members of the larger society are obliged to recognize
the longer term benefits and risks that may result from the improvement
of knowledge and from the development of novel medical, psychotherapeutic, and social procedures.
The principle of beneficence often occupies a well-defined justifying
role in many areas of research involving human subjects. An example is
found in research involving children. Effective ways of treating childhood
diseases and fostering healthy development are benefits that serve to
justify research involving children—even when individual research subjects are not direct beneficiaries. Research also makes it possible to avoid
the harm that may result from the application of previously accepted routine practices that on closer investigation turn out to be dangerous. But the
role of the principle of beneficence is not always so unambiguous. A difficult ethical problem remains, for example, about research that presents
more than minimal risk without immediate prospect of direct benefit to
the children involved. Some have argued that such research is inadmissible,
while others have pointed out that this limit would rule out much research
promising great benefit to children in the future. Here again, as with all
hard cases, the different claims covered by the principle of beneficence
may come into conflict and force difficult choices.
3. Justice.—Who ought to receive the benefits of research and bear its
burdens? This is a question of justice, in the sense of “fairness in distribution” or “what is deserved.” An injustice occurs when some benefit to
which a person is entitled is denied without good reason or when some
burden is imposed unduly. Another way of conceiving the principle of
justice is that equals ought to be treated equally. However, this statement
requires explication. Who is equal and who is unequal? What considerations justify departure from equal distribution? Almost all commentators allow that distinctions based on experience, age, deprivation,
competence, merit and position do sometimes constitute criteria justifying differential treatment for certain purposes. It is necessary, then, to
explain in what respects people should be treated equally. There are several widely accepted formulations of just ways to distribute burdens and
benefits. Each formulation mentions some relevant property on the basis
of which burdens and benefits should be distributed. These formulations
are (1) to each person an equal share, (2) to each person according to
individual need, (3) to each person according to individual effort, (4) to
each person according to societal contribution, and (5) to each person
according to merit.
Questions of justice have long been associated with social practices
such as punishment, taxation and political representation. Until recently
these questions have not generally been associated with scientific
research. However, they are foreshadowed even in the earliest reflections on the ethics of research involving human subjects. For example,
during the 19th and early 20th centuries the burdens of serving as
research subjects fell largely upon poor ward patients, while the benefits of improved medical care flowed primarily to private patients. Subsequently, the exploitation of unwilling prisoners as research subjects in
Nazi concentration camps was condemned as a particularly flagrant
injustice. In this country, in the 1940’s, the Tuskegee syphilis study used
disadvantaged, rural black men to study the untreated course of a disease
that is by no means confined to that population. These subjects were
deprived of demonstrably effective treatment in order not to interrupt
the project, long after such treatment became generally available.
Against this historical background, it can be seen how conceptions of
justice are relevant to research involving human subjects. For example,
the selection of research subjects needs to be scrutinized in order to
determine whether some classes (e.g., welfare patients, particular racial
and ethnic minorities, or persons confined to institutions) are being systematically selected simply because of their easy availability, their compromised
position, or their manipulability, rather than for reasons directly related
to the problem being studied. Finally, whenever research supported by
public funds leads to the development of therapeutic devices and procedures, justice demands both that these not provide advantages only
to those who can afford them and that such research should not unduly
involve persons from groups unlikely to be among the beneficiaries of
subsequent applications of the research.
Part C: Applications
Applications of the general principles to the conduct of research leads to
consideration of the following requirements: informed consent, risk/benefit
assessment, and the selection of subjects of research.
1. Informed Consent.—Respect for persons requires that subjects, to the
degree that they are capable, be given the opportunity to choose what
shall or shall not happen to them. This opportunity is provided when
adequate standards for informed consent are satisfied.
While the importance of informed consent is unquestioned, controversy prevails over the nature and possibility of an informed consent.
Nonetheless, there is widespread agreement that the consent process
can be analyzed as containing three elements: information, comprehension and voluntariness.
Information. Most codes of research establish specific items for disclosure intended to assure that subjects are given sufficient information.
These items generally include: the research procedure, their purposes, risks
and anticipated benefits, alternative procedures (where therapy is involved),
and a statement offering the subject the opportunity to ask questions and
to withdraw at any time from the research. Additional items have been
proposed, including how subjects are selected, the person responsible
for the research, etc.
However, a simple listing of items does not answer the question of
what the standard should be for judging how much and what sort of
information should be provided. One standard frequently invoked in
medical practice, namely the information commonly provided by practitioners in the field or in the locale, is inadequate since research takes
place precisely when a common understanding does not exist. Another
standard, currently popular in malpractice law, requires the practitioner to
reveal the information that reasonable persons would wish to know in
order to make a decision regarding their care. This, too, seems insufficient since the research subject, being in essence a volunteer, may wish
to know considerably more about risks gratuitously undertaken than
do patients who deliver themselves into the hand of a clinician for
needed care. It may be that a standard of “the reasonable volunteer”
should be proposed: the extent and nature of information should be
such that persons, knowing that the procedure is neither necessary for
their care nor perhaps fully understood, can decide whether they wish
to participate in the furthering of knowledge. Even when some direct
benefit to them is anticipated, the subjects should understand clearly
the range of risk and the voluntary nature of participation.
A special problem of consent arises where informing subjects of some
pertinent aspect of the research is likely to impair the validity of the
research. In many cases, it is sufficient to indicate to subjects that they
are being invited to participate in research of which some features will
not be revealed until the research is concluded. In all cases of research
involving incomplete disclosure, such research is justified only if it is clear
that (1) incomplete disclosure is truly necessary to accomplish the goals of
the research, (2) there are no undisclosed risks to subjects that are more
than minimal, and (3) there is an adequate plan for debriefing subjects,
when appropriate, and for dissemination of research results to them.
Information about risks should never be withheld for the purpose of
eliciting the cooperation of subjects, and truthful answers should always
be given to direct questions about the research. Care should be taken to
distinguish cases in which disclosure would destroy or invalidate the
research from cases in which disclosure would simply inconvenience
the investigator.
Comprehension. The manner and context in which information is
conveyed is as important as the information itself. For example, presenting
information in a disorganized and rapid fashion, allowing too little time for
consideration or curtailing opportunities for questioning, all may adversely
affect a subject’s ability to make an informed choice.
Because the subject’s ability to understand is a function of intelligence,
rationality, maturity and language, it is necessary to adapt the presentation
of the information to the subject’s capacities. Investigators are responsible for ascertaining that the subject has comprehended the information.
While there is always an obligation to ascertain that the information about
risk to subjects is complete and adequately comprehended, when the risks
are more serious, that obligation increases. On occasion, it may be suitable
to give some oral or written tests of comprehension.
Special provision may need to be made when comprehension is severely
limited—for example, by conditions of immaturity or mental disability.
Each class of subjects that one might consider as incompetent (e.g.,
infants and young children, mentally disabled patients, the terminally ill
and the comatose) should be considered on its own terms. Even for
these persons, however, respect requires giving them the opportunity
to choose to the extent they are able, whether or not to participate in
research. The objections of these subjects to involvement should be
honored, unless the research entails providing them a therapy unavailable elsewhere. Respect for persons also requires seeking the permission of other parties in order to protect the subjects from harm. Such
persons are thus respected both by acknowledging their own wishes
and by the use of third parties to protect them from harm.
The third parties chosen should be those who are most likely to understand the incompetent subject’s situation and to act in that person’s best
interest. The person authorized to act on behalf of the subject should be
given an opportunity to observe the research as it proceeds in order to be
able to withdraw the subject from the research, if such action appears in
the subject’s best interest.
Voluntariness. An agreement to participate in research constitutes a
valid consent only if voluntarily given. This element of informed consent
requires conditions free of coercion and undue influence. Coercion occurs
when an overt threat of harm is intentionally presented by one person to
another in order to obtain compliance. Undue influence, by contrast,
occurs through an offer of an excessive, unwarranted, inappropriate or
improper reward or other overture in order to obtain compliance. Also,
inducements that would ordinarily be acceptable may become undue
influences if the subject is especially vulnerable.
Unjustifiable pressures usually occur when persons in positions of
authority or commanding influence—especially where possible sanctions are involved—urge a course of action for a subject. A continuum
of such influencing factors exists, however, and it is impossible to state
precisely where justifiable persuasion ends and undue influence begins.
But undue influence would include actions such as manipulating a
person’s choice through the controlling influence of a close relative and
threatening to withdraw health services to which an individual would
otherwise be entitled.
2. Assessment of Risks and Benefits.—The assessment of risks and benefits
requires a careful arrayal of relevant data, including, in some cases, alternative ways of obtaining the benefits sought in the research. Thus, the
assessment presents both an opportunity and a responsibility to gather
systematic and comprehensive information about proposed research. For
the investigator, it is a means to examine whether the proposed research is
properly designed. For a review committee, it is a method for determining
whether the risks that will be presented to subjects are justified. For prospective subjects, the assessment will assist the determination whether or
not to participate.
The Nature and Scope of Risks and Benefits. The requirement
that research be justified on the basis of a favorable risk/benefit assessment bears a close relation to the principle of beneficence, just as the
moral requirement that informed consent be obtained is derived primarily from the principle of respect for persons. The term “risk” refers
to a possibility that harm may occur. However, when expressions such
as “small risk” or “high risk” are used, they usually refer (often ambiguously) both to the chance (probability) of experiencing a harm and the
severity (magnitude) of the envisioned harm.
The term “benefit” is used in the research context to refer to something of positive value related to health or welfare. Unlike “risk,” “benefit” is not a term that expresses probabilities. Risk is properly contrasted
to probability of benefits, and benefits are properly contrasted with
harms rather than risks of harm. Accordingly, so-called risk/benefit
assessments are concerned with the probabilities and magnitudes of
possible harm and anticipated benefits. Many kinds of possible harms
and benefits need to be taken into account. There are, for example,
risks of psychological harm, physical harm, legal harm, social harm and
economic harm and the corresponding benefits. While the most likely
types of harms to research subjects are those of psychological or physical pain or injury, other possible kinds should not be overlooked.
Risks and benefits of research may affect the individual subjects, the
families of the individual subjects, and society at large (or special groups
of subjects in society). Previous codes and Federal regulations have
required that risks to subjects be outweighed by the sum of both the
anticipated benefit to the subject, if any, and the anticipated benefit to
society in the form of knowledge to be gained from the research. In
balancing these different elements, the risks and benefits affecting the
immediate research subject will normally carry special weight. On the
other hand, interests other than those of the subject may on some occasions be sufficient by themselves to justify the risks involved in the
research, so long as the subjects’ rights have been protected. Beneficence
thus requires that we protect against risk of harm to subjects and also
that we be concerned about the loss of the substantial benefits that
might be gained from research.
The Systematic Assessment of Risks and Benefits. It is commonly said that benefits and risks must be “balanced” and shown to be
“in a favorable ratio.” The metaphorical character of these terms draws
attention to the difficulty of making precise judgments. Only on rare
occasions will quantitative techniques be available for the scrutiny of
research protocols. However, the idea of systematic, nonarbitrary analysis of risks and benefits should be emulated insofar as possible. This ideal
requires those making decisions about the justifiability of research to be
thorough in the accumulation and assessment of information about all
aspects of the research, and to consider alternatives systematically. This
procedure renders the assessment of research more rigorous and precise, while making communication between review board members and
investigators less subject to misinterpretation, misinformation and conflicting judgments. Thus, there should first be a determination of the
validity of the presuppositions of the research; then the nature, probability and magnitude of risk should be distinguished with as much clarity
as possible. The method of ascertaining risks should be explicit, especially
where there is no alternative to the use of such vague categories as
small or slight risk. It should also be determined whether an investigator’s estimates of the probability of harm or benefits are reasonable, as
judged by known facts or other available studies.
Finally, assessment of the justifiability of research should reflect at
least the following considerations: (i) Brutal or inhumane treatment of
human subjects is never morally justified. (ii) Risks should be reduced to
those necessary to achieve the research objective. It should be determined whether it is in fact necessary to use human subjects at all. Risk
can perhaps never be entirely eliminated, but it can often be reduced by
careful attention to alternative procedures. (iii) When research involves
significant risk of serious impairment, review committees should be
extraordinarily insistent on the justification of the risk (looking usually
to the likelihood of benefit to the subject—or, in some rare cases, to the
manifest voluntariness of the participation). (iv) When vulnerable populations are involved in research, the appropriateness of involving them
should itself be demonstrated. A number of variables go into such judgments, including the nature and degree of risk, the condition of the particular population involved, and the nature and level of the anticipated
benefits. (v) Relevant risks and benefits must be thoroughly arrayed in
documents and procedures used in the informed consent process.
3. Selection of Subjects.—Just as the principle of respect for persons finds
expression in the requirements for consent, and the principle of beneficence in risk/benefit assessment, the principle of justice gives rise to moral
requirements that there be fair procedures and outcomes in the selection
of research subjects.
Justice is relevant to the selection of subjects of research at two levels:
the social and the individual. Individual justice in the selection of subjects would require that researchers exhibit fairness: thus, they should
not offer potentially beneficial research only to some patients who are
in their favor or select only “undesirable” persons for risky research.
Social justice requires that distinction be drawn between classes of
subjects that ought, and ought not, to participate in any particular kind
of research, based on the ability of members of that class to bear burdens
and on the appropriateness of placing further burdens on already burdened
persons. Thus, it can be considered a matter of social justice that there
is an order of preference in the selection of classes of subjects (e.g.,
adults before children) and that some classes of potential subjects (e.g.,
the institutionalized mentally infirm or prisoners) may be involved as
research subjects, if at all, only on certain conditions.
Injustice may appear in the selection of subjects, even if individual subjects are selected fairly by investigators and treated fairly in the course of
research. Thus injustice arises from social, racial, sexual and cultural biases
institutionalized in society. Thus, even if individual researchers are treating
their research subjects fairly, and even if IRBs are taking care to assure that
subjects are selected fairly within a particular institution, unjust social
patterns may nevertheless appear in the overall distribution of the burdens
and benefits of research. Although individual institutions or investigators
may not be able to resolve a problem that is pervasive in their social setting,
they can consider distributive justice in selecting research subjects.
Some populations, especially institutionalized ones, are already burdened in many ways by their infirmities and environments. When research
is proposed that involves risks and does not include a therapeutic component, other less burdened classes of persons should be called upon first to
accept these risks of research, except where the research is directly related
to the specific conditions of the class involved. Also, even though public
funds for research may often flow in the same directions as public funds
for health care, it seems unfair that populations dependent on public
health care constitute a pool of preferred research subjects if more advantaged populations are likely to be the recipients of the benefits.
One special instance of injustice results from the involvement of vulnerable subjects. Certain groups, such as racial minorities, the economically
disadvantaged, the very sick, and the institutionalized may continually
be sought as research subjects, owing to their ready availability in settings
where research is conducted. Given their dependent status and their
frequently compromised capacity for free consent, they should be protected against the danger of being involved in research solely for administrative convenience, or because they are easy to manipulate as a result
of their illness or socioeconomic condition.
APPENDIX B
Title 45 Code of Federal Regulations Part 46, Subparts A–E
PART 46—PROTECTION OF HUMAN SUBJECTS
Contents
Subpart A—Basic HHS Policy for Protection of Human Research Subjects
§46.101 To what does this policy apply?
§46.102 Definitions for purposes of this policy.
§46.103 Assuring compliance with this policy—research conducted or
supported by any Federal department or agency.
§46.104 Exempt research.
§46.105–46.106 [Reserved]
§46.107 IRB membership.
§46.108 IRB functions and operations.
§46.109 IRB review of research.
§46.110 Expedited review procedures for certain kinds of research involving
no more than minimal risk, and for minor changes in approved research.
§46.111 Criteria for IRB approval of research.
§46.112 Review by institution.
§46.113 Suspension or termination of IRB approval of research.
§46.114 Cooperative research.
§46.115 IRB records.
§46.116 General requirements for informed consent.
§46.117 Documentation of informed consent.
§46.118 Applications and proposals lacking definite plans for involvement of
human subjects.
The content of this appendix is reprinted from a U.S. government regulation. From
“Protection of Human Subjects, 45 C.F.R. Part 46,” by U.S. Department of Health
and Human Services, 2018 (https://www.hhs.gov/ohrp/regulations-and-policy/
regulations/45-cfr-46/index.html). In the public domain.
§46.119 Research undertaken without the intention of involving human
subjects.
§46.120 Evaluation and disposition of applications and proposals for
research to be conducted or supported by a Federal department or agency.
§46.121 [Reserved]
§46.122 Use of Federal funds.
§46.123 Early termination of research support: Evaluation of applications
and proposals.
§46.124 Conditions.
Subpart B—Additional Protections for Pregnant Women, Human
Fetuses and Neonates Involved in Research
§46.201 To what do these regulations apply?
§46.202 Definitions.
§46.203 Duties of IRBs in connection with research involving pregnant
women, fetuses, and neonates.
§46.204 Research involving pregnant women or fetuses.
§46.205 Research involving neonates.
§46.206 Research involving, after delivery, the placenta, the dead fetus or
fetal material.
§46.207 Research not otherwise approvable which presents an opportunity
to understand, prevent, or alleviate a serious problem affecting the health or
welfare of pregnant women, fetuses, or neonates.
Subpart C—Additional Protections Pertaining to Biomedical and
Behavioral Research Involving Prisoners as Subjects
§46.301 Applicability.
§46.302 Purpose.
§46.303 Definitions.
§46.304 Composition of Institutional Review Boards where prisoners are
involved.
§46.305 Additional duties of the Institutional Review Boards where
prisoners are involved.
§46.306 Permitted research involving prisoners.
Subpart D—Additional Protections for Children Involved as Subjects
in Research
§46.401 To what do these regulations apply?
§46.402 Definitions.
§46.403 IRB duties.
§46.404 Research not involving greater than minimal risk.
§46.405 Research involving greater than minimal risk but presenting the
prospect of direct benefit to the individual subjects.
§46.406 Research involving greater than minimal risk and no prospect
of direct benefit to individual subjects, but likely to yield generalizable
knowledge about the subject’s disorder or condition.
§46.407 Research not otherwise approvable which presents an opportunity
to understand, prevent, or alleviate a serious problem affecting the health or
welfare of children.
§46.408 Requirements for permission by parents or guardians and for
assent by children.
§46.409 Wards.
Subpart E—Registration of Institutional Review Boards
§46.501 What IRBs must be registered?
§46.502 What information must be provided when registering an IRB?
§46.503 When must an IRB be registered?
§46.504 How must an IRB be registered?
§46.505 When must IRB registration information be renewed or updated?
Authority: 5 U.S.C. 301; 42 U.S.C. 289(a); 42 U.S.C. 300v-1(b).
Editorial Note: The Department of Health and Human Services issued a notice of
waiver regarding the requirements set forth in part 46, relating to protection of human
subjects, as they pertain to demonstration projects, approved under section 1115 of the
Social Security Act, which test the use of cost sharing, such as deductibles, copayment
and coinsurance, in the Medicaid program. For further information see 47 FR 9208,
Mar. 4, 1982.
Subpart A—Basic HHS Policy for Protection of Human Research
Subjects
Source: 82 FR 7259, 7273, Jan. 19, 2017, unless otherwise noted.
§46.101 To what does this policy apply?
(a) Except as detailed in §46.104, this policy applies to all research
involving human subjects conducted, supported, or otherwise subject to
regulation by any Federal department or agency that takes appropriate administrative action to make the policy applicable to such research. This includes
research conducted by Federal civilian employees or military personnel,
except that each department or agency head may adopt such procedural
modifications as may be appropriate from an administrative standpoint. It
also includes research conducted, supported, or otherwise subject to regulation by the Federal Government outside the United States. Institutions that
are engaged in research described in this paragraph and institutional review
boards (IRBs) reviewing research that is subject to this policy must comply
with this policy.
(b) [Reserved]
(c) Department or agency heads retain final judgment as to whether a
particular activity is covered by this policy and this judgment shall be exercised consistent with the ethical principles of the Belmont Report.62
(d) Department or agency heads may require that specific research activities or classes of research activities conducted, supported, or otherwise subject
to regulation by the Federal department or agency but not otherwise covered
by this policy comply with some or all of the requirements of this policy.
(e) Compliance with this policy requires compliance with pertinent federal
laws or regulations that provide additional protections for human subjects.
(f) This policy does not affect any state or local laws or regulations (including
tribal law passed by the official governing body of an American Indian or Alaska
Native tribe) that may otherwise be applicable and that provide additional
protections for human subjects.
(g) This policy does not affect any foreign laws or regulations that may
otherwise be applicable and that provide additional protections to human
subjects of research.
(h) When research covered by this policy takes place in foreign countries,
procedures normally followed in the foreign countries to protect human subjects may differ from those set forth in this policy. In these circumstances, if a
department or agency head determines that the procedures prescribed by the
institution afford protections that are at least equivalent to those provided in
this policy, the department or agency head may approve the substitution of
the foreign procedures in lieu of the procedural requirements provided in this
policy. Except when otherwise required by statute, Executive Order, or the
department or agency head, notices of these actions as they occur will be published in the Federal Register or will be otherwise published as provided in
department or agency procedures.
(i) Unless otherwise required by law, department or agency heads may
waive the applicability of some or all of the provisions of this policy to specific
research activities or classes of research activities otherwise covered by this
policy, provided the alternative procedures to be followed are consistent with
the principles of the Belmont Report.63 Except when otherwise required by
statute or Executive Order, the department or agency head shall forward advance notices of these actions to the Office for Human Research Protections, Department of Health and Human Services (HHS), or any successor office, or to the equivalent office within the appropriate Federal department or agency, and shall also publish them in the Federal Register or in such other manner as provided in department or agency procedures. The waiver notice must include a statement that identifies the conditions under which the waiver will be applied and a justification as to why the waiver is appropriate for the research, including how the decision is consistent with the principles of the Belmont Report.

62. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, Belmont Report. Washington, DC: U.S. Department of Health and Human Services, 1979.
(j) Federal guidance on the requirements of this policy shall be issued only
after consultation, for the purpose of harmonization (to the extent appropriate), with other Federal departments and agencies that have adopted this
policy, unless such consultation is not feasible.
(k) [Reserved]
(l) Compliance dates and transition provisions:
(1) Pre-2018 Requirements. For purposes of this section, the pre-2018 Requirements means this subpart as published in the 2016 edition of the Code of
Federal Regulations.
(2) 2018 Requirements. For purposes of this section, the 2018 Requirements
means the Federal Policy for the Protection of Human Subjects requirements
contained in this subpart. The general compliance date for the 2018 Requirements is January 21, 2019. The compliance date for §46.114(b) (cooperative
research) of the 2018 Requirements is January 20, 2020.
(3) Research subject to pre-2018 requirements. The pre-2018 Requirements
shall apply to the following research, unless the research is transitioning to
comply with the 2018 Requirements in accordance with paragraph (l)(4) of
this section:
(i) Research initially approved by an IRB under the pre-2018 Requirements before January 21, 2019;
(ii) Research for which IRB review was waived pursuant to §46.101(i) of
the pre-2018 Requirements before January 21, 2019; and
(iii) Research for which a determination was made that the research was
exempt under §46.101(b) of the pre-2018 Requirements before January 21,
2019.
(4) Transitioning research. If, on or after July 19, 2018, an institution planning or engaged in research otherwise covered by paragraph (l)(3) of this
section determines that such research instead will transition to comply with
the 2018 Requirements, the institution or an IRB must document and date
such determination.
63. Id.
(i) If the determination to transition is documented between July 19,
2018, and January 20, 2019, the research shall:
(A) Beginning on the date of such documentation through January 20,
2019, comply with the pre-2018 Requirements, except that the research shall
comply with the following:
(1) Section 46.102(l) of the 2018 Requirements (definition of research)
(instead of §46.102(d) of the pre-2018 Requirements);
(2) Section 46.103(d) of the 2018 Requirements (revised certification
requirement that eliminates IRB review of application or proposal) (instead of
§46.103(f) of the pre-2018 Requirements); and
(3) Section 46.109(f)(1)(i) and (iii) of the 2018 Requirements (exceptions
to mandated continuing review) (instead of §46.103(b), as related to the
requirement for continuing review, and in addition to §46.109, of the pre-2018 Requirements); and
(B) Beginning on January 21, 2019, comply with the 2018 Requirements.
(ii) If the determination to transition is documented on or after January 21,
2019, the research shall, beginning on the date of such documentation, comply
with the 2018 Requirements.
(5) Research subject to 2018 Requirements. The 2018 Requirements shall
apply to the following research:
(i) Research initially approved by an IRB on or after January 21, 2019;
(ii) Research for which IRB review is waived pursuant to paragraph (i) of
this section on or after January 21, 2019; and
(iii) Research for which a determination is made that the research is exempt
on or after January 21, 2019.
(m) Severability: Any provision of this part held to be invalid or unenforceable by its terms, or as applied to any person or circumstance, shall be construed
so as to continue to give maximum effect to the provision permitted by law,
unless such holding shall be one of utter invalidity or unenforceability, in which
event the provision shall be severable from this part and shall not affect the
remainder thereof or the application of the provision to other persons not similarly situated or to other dissimilar circumstances.
[82 FR 7259, 7273, Jan. 19, 2017, as amended at 83 FR 28518, June 19, 2018]
§46.102 Definitions for purposes of this policy.
(a) Certification means the official notification by the institution to the
supporting Federal department or agency component, in accordance with the
requirements of this policy, that a research project or activity involving human
subjects has been reviewed and approved by an IRB in accordance with an
approved assurance.
(b) Clinical trial means a research study in which one or more human subjects are prospectively assigned to one or more interventions (which may
include placebo or other control) to evaluate the effects of the interventions
on biomedical or behavioral health-related outcomes.
(c) Department or agency head means the head of any Federal department
or agency, for example, the Secretary of HHS, and any other officer or
employee of any Federal department or agency to whom the authority
provided by these regulations to the department or agency head has been
delegated.
(d) Federal department or agency refers to a federal department or agency
(the department or agency itself rather than its bureaus, offices or divisions)
that takes appropriate administrative action to make this policy applicable to
the research involving human subjects it conducts, supports, or otherwise
regulates (e.g., the U.S. Department of Health and Human Services, the U.S.
Department of Defense, or the Central Intelligence Agency).
(e)(1) Human subject means a living individual about whom an investigator (whether professional or student) conducting research:
(i) Obtains information or biospecimens through intervention or interaction with the individual, and uses, studies, or analyzes the information or
biospecimens; or
(ii) Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens.
(2) Intervention includes both physical procedures by which information
or biospecimens are gathered (e.g., venipuncture) and manipulations of the
subject or the subject’s environment that are performed for research
purposes.
(3) Interaction includes communication or interpersonal contact between
investigator and subject.
(4) Private information includes information about behavior that occurs in
a context in which an individual can reasonably expect that no observation
or recording is taking place, and information that has been provided for
specific purposes by an individual and that the individual can reasonably
expect will not be made public (e.g., a medical record).
(5) Identifiable private information is private information for which the
identity of the subject is or may readily be ascertained by the investigator or
associated with the information.
(6) An identifiable biospecimen is a biospecimen for which the identity of the
subject is or may readily be ascertained by the investigator or associated with
the biospecimen.
(7) Federal departments or agencies implementing this policy shall:
(i) Upon consultation with appropriate experts (including experts in data
matching and re-identification), reexamine the meaning of “identifiable private information,” as defined in paragraph (e)(5) of this section, and “identifiable biospecimen,” as defined in paragraph (e)(6) of this section. This
reexamination shall take place within 1 year and regularly thereafter (at least
every 4 years). This process will be conducted by collaboration among the
Federal departments and agencies implementing this policy. If appropriate
and permitted by law, such Federal departments and agencies may alter the
interpretation of these terms, including through the use of guidance.
(ii) Upon consultation with appropriate experts, assess whether there are
analytic technologies or techniques that should be considered by investigators
to generate “identifiable private information,” as defined in paragraph (e)(5)
of this section, or an “identifiable biospecimen,” as defined in paragraph (e)(6)
of this section. This assessment shall take place within 1 year and regularly
thereafter (at least every 4 years). This process will be conducted by collaboration among the Federal departments and agencies implementing this policy.
Any such technologies or techniques will be included on a list of technologies
or techniques that produce identifiable private information or identifiable
biospecimens. This list will be published in the Federal Register after notice
and an opportunity for public comment. The Secretary, HHS, shall maintain
the list on a publicly accessible Web site.
(f) Institution means any public or private entity, or department or agency
(including federal, state, and other agencies).
(g) IRB means an institutional review board established in accord with
and for the purposes expressed in this policy.
(h) IRB approval means the determination of the IRB that the research has
been reviewed and may be conducted at an institution within the constraints
set forth by the IRB and by other institutional and federal requirements.
(i) Legally authorized representative means an individual or judicial or other
body authorized under applicable law to consent on behalf of a prospective subject to the subject’s participation in the procedure(s) involved in the research.
If there is no applicable law addressing this issue, legally authorized representative means an individual recognized by institutional policy as acceptable for providing consent in the nonresearch context on behalf of the prospective subject
to the subject’s participation in the procedure(s) involved in the research.
(j) Minimal risk means that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than
those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.
(k) Public health authority means an agency or authority of the United
States, a state, a territory, a political subdivision of a state or territory, an
Indian tribe, or a foreign government, or a person or entity acting under a
grant of authority from or contract with such public agency, including the
employees or agents of such public agency or its contractors or persons or
entities to whom it has granted authority, that is responsible for public health
matters as part of its official mandate.
(l) Research means a systematic investigation, including research development, testing, and evaluation, designed to develop or contribute to generalizable knowledge. Activities that meet this definition constitute research for
purposes of this policy, whether or not they are conducted or supported under
a program that is considered research for other purposes. For example, some
demonstration and service programs may include research activities. For
purposes of this part, the following activities are deemed not to be research:
(1) Scholarly and journalistic activities (e.g., oral history, journalism, biography, literary criticism, legal research, and historical scholarship), including
the collection and use of information, that focus directly on the specific individuals about whom the information is collected.
(2) Public health surveillance activities, including the collection and testing
of information or biospecimens, conducted, supported, requested, ordered,
required, or authorized by a public health authority. Such activities are limited
to those necessary to allow a public health authority to identify, monitor, assess,
or investigate potential public health signals, onsets of disease outbreaks, or
conditions of public health importance (including trends, signals, risk factors,
patterns in diseases, or increases in injuries from using consumer products).
Such activities include those associated with providing timely situational
awareness and priority setting during the course of an event or crisis that
threatens public health (including natural or man-made disasters).
(3) Collection and analysis of information, biospecimens, or records by or
for a criminal justice agency for activities authorized by law or court order
solely for criminal justice or criminal investigative purposes.
(4) Authorized operational activities (as determined by each agency) in
support of intelligence, homeland security, defense, or other national security
missions.
(m) Written, or in writing, for purposes of this part, refers to writing on a
tangible medium (e.g., paper) or in an electronic format.
§46.103 Assuring compliance with this policy—research conducted or
supported by any Federal department or agency.
(a) Each institution engaged in research that is covered by this policy, with
the exception of research eligible for exemption under §46.104, and that is
conducted or supported by a Federal department or agency, shall provide
written assurance satisfactory to the department or agency head that it will
comply with the requirements of this policy. In lieu of requiring submission of
an assurance, individual department or agency heads shall accept the existence of a current assurance, appropriate for the research in question, on file
with the Office for Human Research Protections, HHS, or any successor
office, and approved for Federal-wide use by that office. When the existence of
an HHS-approved assurance is accepted in lieu of requiring submission of an
assurance, reports (except certification) required by this policy to be made to
department and agency heads shall also be made to the Office for Human
Research Protections, HHS, or any successor office. Federal departments and
agencies will conduct or support research covered by this policy only if the
institution has provided an assurance that it will comply with the requirements
of this policy, as provided in this section, and only if the institution has certified
to the department or agency head that the research has been reviewed and
approved by an IRB (if such certification is required by §46.103(d)).
(b) The assurance shall be executed by an individual authorized to act for
the institution and to assume on behalf of the institution the obligations
imposed by this policy and shall be filed in such form and manner as the
department or agency head prescribes.
(c) The department or agency head may limit the period during which
any assurance shall remain effective or otherwise condition or restrict the
assurance.
(d) Certification is required when the research is supported by a Federal
department or agency and not otherwise waived under §46.101(i) or
exempted under §46.104. For such research, institutions shall certify that
each proposed research study covered by the assurance and this section has
been reviewed and approved by the IRB. Such certification must be submitted as prescribed by the Federal department or agency component supporting the research. Under no condition shall research covered by this
section be initiated prior to receipt of the certification that the research has
been reviewed and approved by the IRB.
(e) For nonexempt research involving human subjects covered by this policy
(or exempt research for which limited IRB review takes place pursuant to
§46.104(d)(2)(iii), (d)(3)(i)(C), or (d)(7) or (8)) that takes place at an institution
in which IRB oversight is conducted by an IRB that is not operated by the institution, the institution and the organization operating the IRB shall document
the institution’s reliance on the IRB for oversight of the research and the responsibilities that each entity will undertake to ensure compliance with the requirements of this policy (e.g., in a written agreement between the institution and the
IRB, by implementation of an institution-wide policy directive providing the
allocation of responsibilities between the institution and an IRB that is not affiliated with the institution, or as set forth in a research protocol).
(Approved by the Office of Management and Budget under Control Number 0990-0260)
§46.104 Exempt research.
(a) Unless otherwise required by law or by department or agency heads,
research activities in which the only involvement of human subjects will be
in one or more of the categories in paragraph (d) of this section are exempt
from the requirements of this policy, except that such activities must comply
with the requirements of this section and as specified in each category.
(b) Use of the exemption categories for research subject to the requirements of subparts B, C, and D: Application of the exemption categories to
research subject to the requirements of 45 CFR part 46, subparts B, C, and D,
is as follows:
(1) Subpart B. Each of the exemptions at this section may be applied to
research subject to subpart B if the conditions of the exemption are met.
(2) Subpart C. The exemptions at this section do not apply to research
subject to subpart C, except for research aimed at involving a broader subject
population that only incidentally includes prisoners.
(3) Subpart D. The exemptions at paragraphs (d)(1), (4), (5), (6), (7), and
(8) of this section may be applied to research subject to subpart D if the conditions of the exemption are met. Paragraphs (d)(2)(i) and (ii) of this section
only may apply to research subject to subpart D involving educational tests or
the observation of public behavior when the investigator(s) do not participate
in the activities being observed. Paragraph (d)(2)(iii) of this section may not
be applied to research subject to subpart D.
(c) [Reserved]
(d) Except as described in paragraph (a) of this section, the following
categories of human subjects research are exempt from this policy:
(1) Research, conducted in established or commonly accepted educational
settings, that specifically involves normal educational practices that are not
likely to adversely impact students’ opportunity to learn required educational
content or the assessment of educators who provide instruction. This includes
most research on regular and special education instructional strategies, and
research on the effectiveness of or the comparison among instructional techniques, curricula, or classroom management methods.
(2) Research that only includes interactions involving educational tests
(cognitive, diagnostic, aptitude, achievement), survey procedures, interview
procedures, or observation of public behavior (including visual or auditory
recording) if at least one of the following criteria is met:
(i) The information obtained is recorded by the investigator in such a
manner that the identity of the human subjects cannot readily be ascertained,
directly or through identifiers linked to the subjects;
(ii) Any disclosure of the human subjects’ responses outside the research
would not reasonably place the subjects at risk of criminal or civil liability or
be damaging to the subjects’ financial standing, employability, educational
advancement, or reputation; or
(iii) The information obtained is recorded by the investigator in such a
manner that the identity of the human subjects can readily be ascertained,
directly or through identifiers linked to the subjects, and an IRB conducts a
limited IRB review to make the determination required by §46.111(a)(7).
(3)(i) Research involving benign behavioral interventions in conjunction
with the collection of information from an adult subject through verbal or
written responses (including data entry) or audiovisual recording if the subject
prospectively agrees to the intervention and information collection and at
least one of the following criteria is met:
(A) The information obtained is recorded by the investigator in such a
manner that the identity of the human subjects cannot readily be ascertained,
directly or through identifiers linked to the subjects;
(B) Any disclosure of the human subjects’ responses outside the research
would not reasonably place the subjects at risk of criminal or civil liability or
be damaging to the subjects’ financial standing, employability, educational
advancement, or reputation; or
(C) The information obtained is recorded by the investigator in such a
manner that the identity of the human subjects can readily be ascertained,
directly or through identifiers linked to the subjects, and an IRB conducts a
limited IRB review to make the determination required by §46.111(a)(7).
(ii) For the purpose of this provision, benign behavioral interventions are
brief in duration, harmless, painless, not physically invasive, not likely to
have a significant adverse lasting impact on the subjects, and the investigator
has no reason to think the subjects will find the interventions offensive or
embarrassing. Provided all such criteria are met, examples of such benign
behavioral interventions would include having the subjects play an online
game, having them solve puzzles under various noise conditions, or having
them decide how to allocate a nominal amount of received cash between
themselves and someone else.
(iii) If the research involves deceiving the subjects regarding the nature or
purposes of the research, this exemption is not applicable unless the subject
authorizes the deception through a prospective agreement to participate in
research in circumstances in which the subject is informed that he or she will
be unaware of or misled regarding the nature or purposes of the research.
(4) Secondary research for which consent is not required: Secondary
research uses of identifiable private information or identifiable biospecimens,
if at least one of the following criteria is met:
(i) The identifiable private information or identifiable biospecimens are
publicly available;
(ii) Information, which may include information about biospecimens, is
recorded by the investigator in such a manner that the identity of the human
subjects cannot readily be ascertained directly or through identifiers linked to
the subjects, the investigator does not contact the subjects, and the investigator will not re-identify subjects;
(iii) The research involves only information collection and analysis
involving the investigator’s use of identifiable health information when
that use is regulated under 45 CFR parts 160 and 164, subparts A and E, for
the purposes of “health care operations” or “research” as those terms are
defined at 45 CFR 164.501 or for “public health activities and purposes” as
described under 45 CFR 164.512(b); or
(iv) The research is conducted by, or on behalf of, a Federal department or
agency using government-generated or government-collected information
obtained for nonresearch activities, if the research generates identifiable
private information that is or will be maintained on information technology
that is subject to and in compliance with section 208(b) of the E-Government
Act of 2002, 44 U.S.C. 3501 note, if all of the identifiable private information
collected, used, or generated as part of the activity will be maintained in
systems of records subject to the Privacy Act of 1974, 5 U.S.C. 552a, and, if
applicable, the information used in the research was collected subject to the
Paperwork Reduction Act of 1995, 44 U.S.C. 3501 et seq.
(5) Research and demonstration projects that are conducted or supported
by a Federal department or agency, or otherwise subject to the approval of
department or agency heads (or the approval of the heads of bureaus or other
subordinate agencies that have been delegated authority to conduct the
research and demonstration projects), and that are designed to study, evaluate,
improve, or otherwise examine public benefit or service programs, including
procedures for obtaining benefits or services under those programs, possible
changes in or alternatives to those programs or procedures, or possible changes
in methods or levels of payment for benefits or services under those programs.
Such projects include, but are not limited to, internal studies by Federal
employees, and studies under contracts or consulting arrangements, cooperative agreements, or grants. Exempt projects also include waivers of otherwise mandatory requirements using authorities such as sections 1115 and
1115A of the Social Security Act, as amended.
(i) Each Federal department or agency conducting or supporting the
research and demonstration projects must establish, on a publicly accessible
Federal Web site or in such other manner as the department or agency head
may determine, a list of the research and demonstration projects that the
Federal department or agency conducts or supports under this provision. The
research or demonstration project must be published on this list prior to
commencing the research involving human subjects.
(ii) [Reserved]
(6) Taste and food quality evaluation and consumer acceptance studies:
(i) If wholesome foods without additives are consumed, or
(ii) If a food is consumed that contains a food ingredient at or below
the level and for a use found to be safe, or agricultural chemical or environmental contaminant at or below the level found to be safe, by the Food
and Drug Administration or approved by the Environmental Protection
Agency or the Food Safety and Inspection Service of the U.S. Department
of Agriculture.
(7) Storage or maintenance for secondary research for which broad
consent is required: Storage or maintenance of identifiable private information or identifiable biospecimens for potential secondary research use if an
IRB conducts a limited IRB review and makes the determinations required by
§46.111(a)(8).
(8) Secondary research for which broad consent is required: Research
involving the use of identifiable private information or identifiable biospecimens for secondary research use, if the following criteria are met:
(i) Broad consent for the storage, maintenance, and secondary research
use of the identifiable private information or identifiable biospecimens was
obtained in accordance with §46.116(a)(1) through (4), (a)(6), and (d);
(ii) Documentation of informed consent or waiver of documentation of
consent was obtained in accordance with §46.117;
(iii) An IRB conducts a limited IRB review and makes the determination
required by §46.111(a)(7) and makes the determination that the research to
be conducted is within the scope of the broad consent referenced in paragraph (d)(8)(i) of this section; and
(iv) The investigator does not include returning individual research results to subjects as part of the study plan. This provision does not prevent an investigator from abiding by any legal requirements to return individual research results.
(Approved by the Office of Management and Budget under Control Number 0990-0260)
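To make the conjunctive structure of the exemption at paragraph (d)(8) above easier to track, the following Python sketch is offered as an illustrative aid only; it is not part of the regulation, and the BroadConsentSecondaryUse record and its field names are hypothetical. It treats each of criteria (i) through (iv) as a yes/no finding and reports whether all are satisfied.

# Illustrative sketch only; not part of 45 CFR 46. The record type and its
# field names are hypothetical labels for the Sec. 46.104(d)(8) criteria.
from dataclasses import dataclass

@dataclass
class BroadConsentSecondaryUse:
    broad_consent_obtained: bool        # (i) per Sec. 46.116(a)(1)-(4), (a)(6), and (d)
    consent_documented_or_waived: bool  # (ii) per Sec. 46.117
    limited_irb_review_done: bool       # (iii) determinations under Sec. 46.111(a)(7)
    within_scope_of_broad_consent: bool # (iii) IRB finding on scope of the broad consent
    returns_individual_results: bool    # (iv) study plan returns individual results

    def qualifies_for_d8_exemption(self) -> bool:
        """True only if every (d)(8) criterion, (i) through (iv), is satisfied."""
        return (
            self.broad_consent_obtained
            and self.consent_documented_or_waived
            and self.limited_irb_review_done
            and self.within_scope_of_broad_consent
            and not self.returns_individual_results
        )

if __name__ == "__main__":
    study = BroadConsentSecondaryUse(True, True, True, True, False)
    print(study.qualifies_for_d8_exemption())  # True in this hypothetical case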
§46.105–46.106 [Reserved]
§46.107 IRB membership.
(a) Each IRB shall have at least five members, with varying backgrounds
to promote complete and adequate review of research activities commonly
conducted by the institution. The IRB shall be sufficiently qualified through
the experience and expertise of its members (professional competence), and
the diversity of its members, including race, gender, and cultural backgrounds
and sensitivity to such issues as community attitudes, to promote respect for
its advice and counsel in safeguarding the rights and welfare of human subjects. The IRB shall be able to ascertain the acceptability of proposed research
in terms of institutional commitments (including policies and resources) and
regulations, applicable law, and standards of professional conduct and practice. The IRB shall therefore include persons knowledgeable in these areas. If
an IRB regularly reviews research that involves a category of subjects that is
vulnerable to coercion or undue influence, such as children, prisoners, individuals with impaired decision-making capacity, or economically or educationally disadvantaged persons, consideration shall be given to the inclusion
of one or more individuals who are knowledgeable about and experienced in
working with these categories of subjects.
(b) Each IRB shall include at least one member whose primary concerns
are in scientific areas and at least one member whose primary concerns are in
nonscientific areas.
(c) Each IRB shall include at least one member who is not otherwise affiliated with the institution and who is not part of the immediate family of a
person who is affiliated with the institution.
(d) No IRB may have a member participate in the IRB’s initial or continuing review of any project in which the member has a conflicting interest,
except to provide information requested by the IRB.
(e) An IRB may, in its discretion, invite individuals with competence in
special areas to assist in the review of issues that require expertise beyond or
in addition to that available on the IRB. These individuals may not vote with
the IRB.
§46.108 IRB functions and operations.
(a) In order to fulfill the requirements of this policy each IRB shall:
(1) Have access to meeting space and sufficient staff to support the IRB’s
review and recordkeeping duties;
(2) Prepare and maintain a current list of the IRB members identified by
name; earned degrees; representative capacity; indications of experience such
as board certifications or licenses sufficient to describe each member’s chief
anticipated contributions to IRB deliberations; and any employment or other
relationship between each member and the institution, for example, full-time
employee, part-time employee, member of governing panel or board, stockholder, paid or unpaid consultant;
(3) Establish and follow written procedures for:
(i) Conducting its initial and continuing review of research and for reporting its findings and actions to the investigator and the institution;
(ii) Determining which projects require review more often than annually
and which projects need verification from sources other than the investigators that no material changes have occurred since previous IRB review; and
(iii) Ensuring prompt reporting to the IRB of proposed changes in a
research activity, and for ensuring that investigators will conduct the research
activity in accordance with the terms of the IRB approval until any proposed
changes have been reviewed and approved by the IRB, except when necessary to eliminate apparent immediate hazards to the subject.
(4) Establish and follow written procedures for ensuring prompt reporting to the IRB; appropriate institutional officials; the department or agency
head; and the Office for Human Research Protections, HHS, or any successor
office, or the equivalent office within the appropriate Federal department or
agency of
(i) Any unanticipated problems involving risks to subjects or others or any
serious or continuing noncompliance with this policy or the requirements or
determinations of the IRB; and
(ii) Any suspension or termination of IRB approval.
(b) Except when an expedited review procedure is used (as described in
§46.110), an IRB must review proposed research at convened meetings at
which a majority of the members of the IRB are present, including at least
one member whose primary concerns are in nonscientific areas. In order for
the research to be approved, it shall receive the approval of a majority of
those members present at the meeting.
(Approved by the Office of Management and Budget under Control Number 0990-0260)
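Paragraph (b) above combines two counting rules: a convened meeting requires a majority of the full membership to be present, including at least one member whose primary concerns are nonscientific, and approval requires a majority of the members present. The short Python sketch below is an illustrative aid only, not part of the regulation; the roster and the IRBMember structure are hypothetical. It simply makes both thresholds explicit.

# Illustrative sketch only; not part of 45 CFR 46. Member records are hypothetical.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class IRBMember:
    name: str
    nonscientist: bool  # primary concerns are in nonscientific areas

def meeting_is_convened(roster: list[IRBMember], present: list[IRBMember]) -> bool:
    """Majority of the full roster present, including at least one nonscientist."""
    return len(present) > len(roster) / 2 and any(m.nonscientist for m in present)

def research_approved(present_count: int, votes_for: int) -> bool:
    """Approval requires a majority of the members present at the meeting."""
    return votes_for > present_count / 2

# Hypothetical five-member board with three members (one nonscientist) present.
roster = [IRBMember("A", False), IRBMember("B", False), IRBMember("C", True),
          IRBMember("D", False), IRBMember("E", False)]
present = roster[:3]
print(meeting_is_convened(roster, present))   # True: 3 of 5, nonscientist present
print(research_approved(len(present), 2))     # True: 2 of 3 is a majority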
§46.109 IRB review of research.
(a) An IRB shall review and have authority to approve, require modifications in (to secure approval), or disapprove all research activities covered by this
policy, including exempt research activities under §46.104 for which limited IRB
review is a condition of exemption (under §46.104(d)(2)(iii), (d)(3)(i)(C), and
(d)(7), and (8)).
(b) An IRB shall require that information given to subjects (or legally
authorized representatives, when appropriate) as part of informed consent is
in accordance with §46.116. The IRB may require that information, in addition to that specifically mentioned in §46.116, be given to the subjects when
in the IRB’s judgment the information would meaningfully add to the protection of the rights and welfare of subjects.
(c) An IRB shall require documentation of informed consent or may waive
documentation in accordance with §46.117.
(d) An IRB shall notify investigators and the institution in writing of its decision to approve or disapprove the proposed research activity, or of modifications
required to secure IRB approval of the research activity. If the IRB decides
to disapprove a research activity, it shall include in its written notification a
statement of the reasons for its decision and give the investigator an opportunity to respond in person or in writing.
(e) An IRB shall conduct continuing review of research requiring review
by the convened IRB at intervals appropriate to the degree of risk, not less
than once per year, except as described in §46.109(f).
(f)(1) Unless an IRB determines otherwise, continuing review of research
is not required in the following circumstances:
(i) Research eligible for expedited review in accordance with §46.110;
(ii) Research reviewed by the IRB in accordance with the limited IRB
review described in §46.104(d)(2)(iii), (d)(3)(i)(C), or (d)(7) or (8);
(iii) Research that has progressed to the point that it involves only one or
both of the following, which are part of the IRB-approved study:
(A) Data analysis, including analysis of identifiable private information
or identifiable biospecimens, or
(B) Accessing follow-up clinical data from procedures that subjects would
undergo as part of clinical care.
(2) [Reserved]
(g) An IRB shall have authority to observe or have a third party observe
the consent process and the research.
(Approved by the Office of Management and Budget under Control Number 0990-0260)
§46.110 Expedited review procedures for certain kinds of research
involving no more than minimal risk, and for minor changes in
approved research.
(a) The Secretary of HHS has established, and published as a Notice in
the Federal Register, a list of categories of research that may be reviewed by
the IRB through an expedited review procedure. The Secretary will evaluate
the list at least every 8 years and amend it, as appropriate, after consultation
with other federal departments and agencies and after publication in the
Federal Register for public comment. A copy of the list is available from the
Office for Human Research Protections, HHS, or any successor office.
(b)(1) An IRB may use the expedited review procedure to review the
following:
(i) Some or all of the research appearing on the list described in paragraph
(a) of this section, unless the reviewer determines that the study involves
more than minimal risk;
(ii) Minor changes in previously approved research during the period for
which approval is authorized; or
(iii) Research for which limited IRB review is a condition of exemption
under §46.104(d)(2)(iii), (d)(3)(i)(C), and (d)(7) and (8).
(2) Under an expedited review procedure, the review may be carried out
by the IRB chairperson or by one or more experienced reviewers designated
by the chairperson from among members of the IRB. In reviewing the
research, the reviewers may exercise all of the authorities of the IRB except
that the reviewers may not disapprove the research. A research activity may
be disapproved only after review in accordance with the nonexpedited procedure set forth in §46.108(b).
(c) Each IRB that uses an expedited review procedure shall adopt a method
for keeping all members advised of research proposals that have been
approved under the procedure.
(d) The department or agency head may restrict, suspend, terminate, or
choose not to authorize an institution’s or IRB’s use of the expedited review
procedure.
§46.111 Criteria for IRB approval of research.
(a) In order to approve research covered by this policy the IRB shall determine that all of the following requirements are satisfied:
(1) Risks to subjects are minimized:
(i) By using procedures that are consistent with sound research design
and that do not unnecessarily expose subjects to risk, and
(ii) Whenever appropriate, by using procedures already being performed
on the subjects for diagnostic or treatment purposes.
(2) Risks to subjects are reasonable in relation to anticipated benefits, if
any, to subjects, and the importance of the knowledge that may reasonably be
expected to result. In evaluating risks and benefits, the IRB should consider
only those risks and benefits that may result from the research (as distinguished from risks and benefits of therapies subjects would receive even if not
participating in the research). The IRB should not consider possible long-range effects of applying knowledge gained in the research (e.g., the possible
effects of the research on public policy) as among those research risks that fall
within the purview of its responsibility.
(3) Selection of subjects is equitable. In making this assessment the IRB
should take into account the purposes of the research and the setting in which
the research will be conducted. The IRB should be particularly cognizant of
the special problems of research that involves a category of subjects who are
vulnerable to coercion or undue influence, such as children, prisoners, individuals with impaired decision-making capacity, or economically or educationally disadvantaged persons.
(4) Informed consent will be sought from each prospective subject or the
subject’s legally authorized representative, in accordance with, and to the
extent required by, §46.116.
(5) Informed consent will be appropriately documented or appropriately
waived in accordance with §46.117.
(6) When appropriate, the research plan makes adequate provision for
monitoring the data collected to ensure the safety of subjects.
(7) When appropriate, there are adequate provisions to protect the privacy
of subjects and to maintain the confidentiality of data.
(i) The Secretary of HHS will, after consultation with the Office of Management and Budget’s privacy office and other Federal departments and agencies
that have adopted this policy, issue guidance to assist IRBs in assessing what
provisions are adequate to protect the privacy of subjects and to maintain the
confidentiality of data.
(ii) [Reserved]
(8) For purposes of conducting the limited IRB review required by
§46.104(d)(7), the IRB need not make the determinations at paragraphs (a)(1)
through (7) of this section, and shall make the following determinations:
(i) Broad consent for storage, maintenance, and secondary research use of
identifiable private information or identifiable biospecimens is obtained in
accordance with the requirements of §46.116(a)(1)-(4), (a)(6), and (d);
(ii) Broad consent is appropriately documented or waiver of documentation is appropriate, in accordance with §46.117; and
(iii) If there is a change made for research purposes in the way the identifiable private information or identifiable biospecimens are stored or maintained,
there are adequate provisions to protect the privacy of subjects and to maintain
the confidentiality of data.
(b) When some or all of the subjects are likely to be vulnerable to coercion
or undue influence, such as children, prisoners, individuals with impaired
decision-making capacity, or economically or educationally disadvantaged
persons, additional safeguards have been included in the study to protect the
rights and welfare of these subjects.
§46.112 Review by institution.
Research covered by this policy that has been approved by an IRB may be
subject to further appropriate review and approval or disapproval by officials
of the institution. However, those officials may not approve the research if it
has not been approved by an IRB.
§46.113 Suspension or termination of IRB approval of research.
An IRB shall have authority to suspend or terminate approval of research
that is not being conducted in accordance with the IRB’s requirements or that
has been associated with unexpected serious harm to subjects. Any suspension or termination of approval shall include a statement of the reasons for
the IRB’s action and shall be reported promptly to the investigator, appropriate institutional officials, and the department or agency head.
(Approved by the Office of Management and Budget under Control Number 0990-0260)
§46.114 Cooperative research.
(a) Cooperative research projects are those projects covered by this policy
that involve more than one institution. In the conduct of cooperative research
projects, each institution is responsible for safeguarding the rights and welfare
of human subjects and for complying with this policy.
(b)(1) Any institution located in the United States that is engaged in cooperative research must rely upon approval by a single IRB for that portion of
the research that is conducted in the United States. The reviewing IRB will be
identified by the Federal department or agency supporting or conducting the
research or proposed by the lead institution subject to the acceptance of the
Federal department or agency supporting the research.
(2) The following research is not subject to this provision:
(i) Cooperative research for which more than single IRB review is required
by law (including tribal law passed by the official governing body of an American Indian or Alaska Native tribe); or
(ii) Research for which any Federal department or agency supporting or
conducting the research determines and documents that the use of a single
IRB is not appropriate for the particular context.
(c) For research not subject to paragraph (b) of this section, an institution
participating in a cooperative project may enter into a joint review arrangement, rely on the review of another IRB, or make similar arrangements for
avoiding duplication of effort.
§46.115 IRB records.
(a) An institution, or when appropriate an IRB, shall prepare and maintain adequate documentation of IRB activities, including the following:
(1) Copies of all research proposals reviewed, scientific evaluations, if any,
that accompany the proposals, approved sample consent forms, progress
reports submitted by investigators, and reports of injuries to subjects.
(2) Minutes of IRB meetings, which shall be in sufficient detail to show
attendance at the meetings; actions taken by the IRB; the vote on these
actions including the number of members voting for, against, and abstaining;
the basis for requiring changes in or disapproving research; and a written
summary of the discussion of controverted issues and their resolution.
(3) Records of continuing review activities, including the rationale for
conducting continuing review of research that otherwise would not require
continuing review as described in §46.109(f)(1).
(4) Copies of all correspondence between the IRB and the investigators.
(5) A list of IRB members in the same detail as described in §46.108(a)(2).
(6) Written procedures for the IRB in the same detail as described in
§46.108(a)(3) and (4).
(7) Statements of significant new findings provided to subjects, as required
by §46.116(c)(5).
(8) The rationale for an expedited reviewer’s determination under
§46.110(b)(1)(i) that research appearing on the expedited review list described
in §46.110(a) is more than minimal risk.
(9) Documentation specifying the responsibilities that an institution and
an organization operating an IRB each will undertake to ensure compliance
with the requirements of this policy, as described in §46.103(e).
(b) The records required by this policy shall be retained for at least
3 years, and records relating to research that is conducted shall be retained
for at least 3 years after completion of the research. The institution or IRB
may maintain the records in printed form, or electronically. All records
shall be accessible for inspection and copying by authorized representatives
of the Federal department or agency at reasonable times and in a reasonable manner.
(Approved by the Office of Management and Budget under Control Number 0990-0260)
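The retention rule in §46.115(b) sets two overlapping clocks: at least 3 years from when a record is created and, for research that is conducted, at least 3 years after completion of the research. The Python sketch below is an illustrative aid only, not part of the regulation; the dates are hypothetical. It computes the later of the two dates as the earliest permissible disposal date.

# Illustrative sketch only; not part of 45 CFR 46. Dates are hypothetical examples.
from __future__ import annotations
from datetime import date

def earliest_disposal(record_created: date, research_completed: date | None) -> date:
    """Return the later of (created + 3 years) and (completed + 3 years)."""
    def plus_three_years(d: date) -> date:
        # Shift Feb 29 to Mar 1 when the target year is not a leap year.
        try:
            return d.replace(year=d.year + 3)
        except ValueError:
            return date(d.year + 3, 3, 1)

    floor = plus_three_years(record_created)
    if research_completed is None:
        return floor
    return max(floor, plus_three_years(research_completed))

print(earliest_disposal(date(2021, 1, 15), date(2023, 6, 30)))  # 2026-06-30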
§46.116 General requirements for informed consent.
(a) General. General requirements for informed consent, whether written
or oral, are set forth in this paragraph and apply to consent obtained in accordance with the requirements set forth in paragraphs (b) through (d) of this
section. Broad consent may be obtained in lieu of informed consent obtained
in accordance with paragraphs (b) and (c) of this section only with respect to
the storage, maintenance, and secondary research uses of identifiable private
information and identifiable biospecimens. Waiver or alteration of consent in
research involving public benefit and service programs conducted by or subject to the approval of state or local officials is described in paragraph (e) of this
section. General waiver or alteration of informed consent is described in paragraph (f) of this section. Except as provided elsewhere in this policy:
(1) Before involving a human subject in research covered by this policy,
an investigator shall obtain the legally effective informed consent of the
subject or the subject’s legally authorized representative.
(2) An investigator shall seek informed consent only under circumstances
that provide the prospective subject or the legally authorized representative
sufficient opportunity to discuss and consider whether or not to participate
and that minimize the possibility of coercion or undue influence.
(3) The information that is given to the subject or the legally authorized
representative shall be in language understandable to the subject or the
legally authorized representative.
(4) The prospective subject or the legally authorized representative must
be provided with the information that a reasonable person would want to
have in order to make an informed decision about whether to participate, and
an opportunity to discuss that information.
(5) Except for broad consent obtained in accordance with paragraph (d) of
this section:
(i) Informed consent must begin with a concise and focused presentation
of the key information that is most likely to assist a prospective subject or
legally authorized representative in understanding the reasons why one
might or might not want to participate in the research. This part of the
informed consent must be organized and presented in a way that facilitates
comprehension.
(ii) Informed consent as a whole must present information in sufficient
detail relating to the research, and must be organized and presented in a way
that does not merely provide lists of isolated facts, but rather facilitates the
prospective subject’s or legally authorized representative’s understanding of
the reasons why one might or might not want to participate.
(6) No informed consent may include any exculpatory language through
which the subject or the legally authorized representative is made to waive or
appear to waive any of the subject’s legal rights, or releases or appears to
release the investigator, the sponsor, the institution, or its agents from liability
for negligence.
(b) Basic elements of informed consent. Except as provided in paragraph (d), (e),
or (f) of this section, in seeking informed consent the following information
shall be provided to each subject or the legally authorized representative:
(1) A statement that the study involves research, an explanation of the
purposes of the research and the expected duration of the subject’s participation, a description of the procedures to be followed, and identification of any
procedures that are experimental;
(2) A description of any reasonably foreseeable risks or discomforts to the
subject;
(3) A description of any benefits to the subject or to others that may reasonably be expected from the research;
(4) A disclosure of appropriate alternative procedures or courses of treatment, if any, that might be advantageous to the subject;
(5) A statement describing the extent, if any, to which confidentiality of
records identifying the subject will be maintained;
(6) For research involving more than minimal risk, an explanation as to
whether any compensation and an explanation as to whether any medical
treatments are available if injury occurs and, if so, what they consist of, or
where further information may be obtained;
(7) An explanation of whom to contact for answers to pertinent questions
about the research and research subjects’ rights, and whom to contact in the
event of a research-related injury to the subject;
(8) A statement that participation is voluntary, refusal to participate will
involve no penalty or loss of benefits to which the subject is otherwise entitled, and the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled; and
(9) One of the following statements about any research that involves the
collection of identifiable private information or identifiable biospecimens:
(i) A statement that identifiers might be removed from the identifiable
private information or identifiable biospecimens and that, after such removal,
the information or biospecimens could be used for future research studies or
distributed to another investigator for future research studies without additional informed consent from the subject or the legally authorized representative, if this might be a possibility; or
(ii) A statement that the subject’s information or biospecimens collected as
part of the research, even if identifiers are removed, will not be used or
distributed for future research studies.
(c) Additional elements of informed consent. Except as provided in paragraph
(d), (e), or (f) of this section, one or more of the following elements of information, when appropriate, shall also be provided to each subject or the legally
authorized representative:
(1) A statement that the particular treatment or procedure may involve
risks to the subject (or to the embryo or fetus, if the subject is or may become
pregnant) that are currently unforeseeable;
(2) Anticipated circumstances under which the subject’s participation
may be terminated by the investigator without regard to the subject’s or the
legally authorized representative’s consent;
(3) Any additional costs to the subject that may result from participation
in the research;
(4) The consequences of a subject’s decision to withdraw from the research
and procedures for orderly termination of participation by the subject;
(5) A statement that significant new findings developed during the course
of the research that may relate to the subject’s willingness to continue participation will be provided to the subject;
(6) The approximate number of subjects involved in the study;
(7) A statement that the subject’s biospecimens (even if identifiers are
removed) may be used for commercial profit and whether the subject will
or will not share in this commercial profit;
(8) A statement regarding whether clinically relevant research results,
including individual research results, will be disclosed to subjects, and if so,
under what conditions; and
(9) For research involving biospecimens, whether the research will (if
known) or might include whole genome sequencing (i.e., sequencing of a
human germline or somatic specimen with the intent to generate the genome
or exome sequence of that specimen).
(d) Elements of broad consent for the storage, maintenance, and secondary research
use of identifiable private information or identifiable biospecimens. Broad consent
for the storage, maintenance, and secondary research use of identifiable private
information or identifiable biospecimens (collected for either research studies
other than the proposed research or nonresearch purposes) is permitted as an
alternative to the informed consent requirements in paragraphs (b) and (c) of
this section. If the subject or the legally authorized representative is asked to
provide broad consent, the following shall be provided to each subject or the
subject’s legally authorized representative:
(1) The information required in paragraphs (b)(2), (b)(3), (b)(5), and
(b)(8) and, when appropriate, (c)(7) and (9) of this section;
(2) A general description of the types of research that may be conducted with
the identifiable private information or identifiable biospecimens. This description
must include sufficient information such that a reasonable person would expect
that the broad consent would permit the types of research conducted;
(3) A description of the identifiable private information or identifiable
biospecimens that might be used in research, whether sharing of identifiable
private information or identifiable biospecimens might occur, and the types
of institutions or researchers that might conduct research with the identifiable private information or identifiable biospecimens;
(4) A description of the period of time that the identifiable private information or identifiable biospecimens may be stored and maintained (which
period of time could be indefinite), and a description of the period of time that
the identifiable private information or identifiable biospecimens may be used
for research purposes (which period of time could be indefinite);
(5) Unless the subject or legally authorized representative will be provided
details about specific research studies, a statement that they will not be
informed of the details of any specific research studies that might be conducted using the subject’s identifiable private information or identifiable biospecimens, including the purposes of the research, and that they might have
chosen not to consent to some of those specific research studies;
(6) Unless it is known that clinically relevant research results, including
individual research results, will be disclosed to the subject in all circumstances, a statement that such results may not be disclosed to the subject; and
(7) An explanation of whom to contact for answers to questions about
the subject’s rights and about storage and use of the subject’s identifiable
private information or identifiable biospecimens, and whom to contact in the
event of a research-related harm.
(e) Waiver or alteration of consent in research involving public benefit and service
programs conducted by or subject to the approval of state or local officials—
(1) Waiver. An IRB may waive the requirement to obtain informed consent for
research under paragraphs (a) through (c) of this section, provided the IRB
satisfies the requirements of paragraph (e)(3) of this section. If an individual
was asked to provide broad consent for the storage, maintenance, and secondary
research use of identifiable private information or identifiable biospecimens
in accordance with the requirements at paragraph (d) of this section, and
refused to consent, an IRB cannot waive consent for the storage, maintenance, or secondary research use of the identifiable private information or
identifiable biospecimens.
(2) Alteration. An IRB may approve a consent procedure that omits some,
or alters some or all, of the elements of informed consent set forth in paragraphs (b) and (c) of this section provided the IRB satisfies the requirements
of paragraph (e)(3) of this section. An IRB may not omit or alter any of the
requirements described in paragraph (a) of this section. If a broad consent
procedure is used, an IRB may not omit or alter any of the elements required
under paragraph (d) of this section.
(3) Requirements for waiver and alteration. In order for an IRB to waive or alter
consent as described in this subsection, the IRB must find and document that:
(i) The research or demonstration project is to be conducted by or subject
to the approval of state or local government officials and is designed to study,
evaluate, or otherwise examine:
(A) Public benefit or service programs;
(B) Procedures for obtaining benefits or services under those programs;
(C) Possible changes in or alternatives to those programs or procedures; or
(D) Possible changes in methods or levels of payment for benefits or
services under those programs; and
(ii) The research could not practicably be carried out without the waiver
or alteration.
(f) General waiver or alteration of consent—(1) Waiver. An IRB may waive
the requirement to obtain informed consent for research under paragraphs
(a) through (c) of this section, provided the IRB satisfies the requirements
of paragraph (f)(3) of this section. If an individual was asked to provide broad
consent for the storage, maintenance, and secondary research use of identifiable private information or identifiable biospecimens in accordance with the
requirements at paragraph (d) of this section, and refused to consent, an IRB
cannot waive consent for the storage, maintenance, or secondary research
use of the identifiable private information or identifiable biospecimens.
(2) Alteration. An IRB may approve a consent procedure that omits some,
or alters some or all, of the elements of informed consent set forth in paragraphs (b) and (c) of this section provided the IRB satisfies the requirements
of paragraph (f)(3) of this section. An IRB may not omit or alter any of the
requirements described in paragraph (a) of this section. If a broad consent
procedure is used, an IRB may not omit or alter any of the elements required
under paragraph (d) of this section.
(3) Requirements for waiver and alteration. In order for an IRB to waive or alter
consent as described in this subsection, the IRB must find and document that:
(i) The research involves no more than minimal risk to the subjects;
(ii) The research could not practicably be carried out without the requested
waiver or alteration;
(iii) If the research involves using identifiable private information or
identifiable biospecimens, the research could not practicably be carried out
without using such information or biospecimens in an identifiable format;
(iv) The waiver or alteration will not adversely affect the rights and
welfare of the subjects; and
(v) Whenever appropriate, the subjects or legally authorized representatives
will be provided with additional pertinent information after participation.
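Because a general waiver or alteration under paragraph (f)(3) requires the IRB to find and document every one of the five conditions listed above, the following Python sketch is offered as an illustrative aid only; it is not part of the regulation, and the WaiverFindings field names are hypothetical labels for criteria (i) through (v). It returns a supporting result only when all findings are recorded as satisfied.

# Illustrative sketch only; not part of 45 CFR 46. Field names are hypothetical
# labels for the findings required by Sec. 46.116(f)(3)(i)-(v).
from dataclasses import dataclass, fields

@dataclass
class WaiverFindings:
    no_more_than_minimal_risk: bool                  # (i)
    not_practicable_without_waiver: bool             # (ii)
    not_practicable_without_identifiers: bool        # (iii); set True when no identifiable data are used
    rights_and_welfare_not_adversely_affected: bool  # (iv)
    additional_information_after_participation: bool # (v); set True when debriefing is not appropriate

def waiver_supported(f: WaiverFindings) -> bool:
    """All five findings under Sec. 46.116(f)(3)(i)-(v) must be documented."""
    return all(getattr(f, fld.name) for fld in fields(f))

print(waiver_supported(WaiverFindings(True, True, True, True, True)))   # True
print(waiver_supported(WaiverFindings(True, False, True, True, True)))  # False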
(g) Screening, recruiting, or determining eligibility. An IRB may approve a
research proposal in which an investigator will obtain information or biospecimens for the purpose of screening, recruiting, or determining the eligibility of prospective subjects without the informed consent of the prospective
subject or the subject’s legally authorized representative, if either of the following conditions are met:
(1) The investigator will obtain information through oral or written communication with the prospective subject or legally authorized representative, or
(2) The investigator will obtain identifiable private information or identifiable biospecimens by accessing records or stored identifiable biospecimens.
(h) Posting of clinical trial consent form. (1) For each clinical trial conducted
or supported by a Federal department or agency, one IRB-approved informed
consent form used to enroll subjects must be posted by the awardee or the
Federal department or agency component conducting the trial on a publicly
available Federal Web site that will be established as a repository for such
informed consent forms.
(2) If the Federal department or agency supporting or conducting the
clinical trial determines that certain information should not be made publicly
available on a Federal Web site (e.g., confidential commercial information),
such Federal department or agency may permit or require redactions to the
information posted.
(3) The informed consent form must be posted on the Federal Web site
after the clinical trial is closed to recruitment, and no later than 60 days after
the last study visit by any subject, as required by the protocol.
(i) Preemption. The informed consent requirements in this policy are not
intended to preempt any applicable Federal, state, or local laws (including
tribal laws passed by the official governing body of an American Indian or
Alaska Native tribe) that require additional information to be disclosed in
order for informed consent to be legally effective.
(j) Emergency medical care. Nothing in this policy is intended to limit the
authority of a physician to provide emergency medical care, to the extent the
physician is permitted to do so under applicable Federal, state, or local law
(including tribal law passed by the official governing body of an American
Indian or Alaska Native tribe).
(Approved by the Office of Management and Budget under Control Number 0990-0260)
§46.117 Documentation of informed consent.
(a) Except as provided in paragraph (c) of this section, informed consent
shall be documented by the use of a written informed consent form approved
by the IRB and signed (including in an electronic format) by the subject or the
subject’s legally authorized representative. A written copy shall be given to
the person signing the informed consent form.
(b) Except as provided in paragraph (c) of this section, the informed
consent form may be either of the following:
(1) A written informed consent form that meets the requirements of
§46.116. The investigator shall give either the subject or the subject’s legally
authorized representative adequate opportunity to read the informed consent
form before it is signed; alternatively, this form may be read to the subject or
the subject’s legally authorized representative.
(2) A short form written informed consent form stating that the elements
of informed consent required by §46.116 have been presented orally to the
subject or the subject’s legally authorized representative, and that the key
information required by §46.116(a)(5)(i) was presented first to the subject,
before other information, if any, was provided. The IRB shall approve a
written summary of what is to be said to the subject or the legally authorized
representative. When this method is used, there shall be a witness to the oral
presentation. Only the short form itself is to be signed by the subject or the
subject’s legally authorized representative. However, the witness shall sign
both the short form and a copy of the summary, and the person actually
obtaining consent shall sign a copy of the summary. A copy of the summary
shall be given to the subject or the subject’s legally authorized representative,
in addition to a copy of the short form.
(c)(1) An IRB may waive the requirement for the investigator to obtain a
signed informed consent form for some or all subjects if it finds any of the
following:
(i) That the only record linking the subject and the research would be the
informed consent form and the principal risk would be potential harm resulting
from a breach of confidentiality. Each subject (or legally authorized representative) will be asked whether the subject wants documentation linking the subject with the research, and the subject’s wishes will govern;
(ii) That the research presents no more than minimal risk of harm to subjects
and involves no procedures for which written consent is normally required
outside of the research context; or
(iii) If the subjects or legally authorized representatives are members of a
distinct cultural group or community in which signing forms is not the norm,
that the research presents no more than minimal risk of harm to subjects and
provided there is an appropriate alternative mechanism for documenting that
informed consent was obtained.
(2) In cases in which the documentation requirement is waived, the IRB
may require the investigator to provide subjects or legally authorized representatives with a written statement regarding the research.
(Approved by the Office of Management and Budget under Control Number 0990-0260)
§46.118 Applications and proposals lacking definite plans for
involvement of human subjects.
Certain types of applications for grants, cooperative agreements, or contracts
are submitted to Federal departments or agencies with the knowledge that
subjects may be involved within the period of support, but definite plans
would not normally be set forth in the application or proposal. These include
activities such as institutional type grants when selection of specific projects is
the institution’s responsibility; research training grants in which the activities
involving subjects remain to be selected; and projects in which human subjects’ involvement will depend upon completion of instruments, prior animal
studies, or purification of compounds. Except for research waived under
§46.101(i) or exempted under §46.104, no human subjects may be involved
in any project supported by these awards until the project has been reviewed
and approved by the IRB, as provided in this policy, and certification submitted, by the institution, to the Federal department or agency component supporting the research.
§46.119 Research undertaken without the intention of involving
human subjects.
Except for research waived under §46.101(i) or exempted under §46.104,
in the event research is undertaken without the intention of involving human
subjects, but it is later proposed to involve human subjects in the research,
the research shall first be reviewed and approved by an IRB, as provided in
this policy, a certification submitted by the institution to the Federal department or agency component supporting the research, and final approval given
to the proposed change by the Federal department or agency component.
§46.120 Evaluation and disposition of applications and proposals for
research to be conducted or supported by a Federal department or
agency.
(a) The department or agency head will evaluate all applications and proposals involving human subjects submitted to the Federal department or
agency through such officers and employees of the Federal department or
agency and such experts and consultants as the department or agency head
determines to be appropriate. This evaluation will take into consideration the
risks to the subjects, the adequacy of protection against these risks, the potential benefits of the research to the subjects and others, and the importance of
the knowledge gained or to be gained.
(b) On the basis of this evaluation, the department or agency head may
approve or disapprove the application or proposal, or enter into negotiations
to develop an approvable one.
§46.121 [Reserved]
§46.122 Use of Federal funds.
Federal funds administered by a Federal department or agency may not be
expended for research involving human subjects unless the requirements of
this policy have been satisfied.
§46.123 Early termination of research support: Evaluation of
applications and proposals.
(a) The department or agency head may require that Federal department
or agency support for any project be terminated or suspended in the manner
prescribed in applicable program requirements, when the department or
agency head finds an institution has materially failed to comply with the
terms of this policy.
(b) In making decisions about supporting or approving applications or
proposals covered by this policy the department or agency head may take into
account, in addition to all other eligibility requirements and program criteria,
factors such as whether the applicant has been subject to a termination or
suspension under paragraph (a) of this section and whether the applicant or
the person or persons who would direct or has/have directed the scientific
and technical aspects of an activity has/have, in the judgment of the department or agency head, materially failed to discharge responsibility for the
protection of the rights and welfare of human subjects (whether or not the
research was subject to federal regulation).
§46.124 Conditions.
With respect to any research project or any class of research projects the
department or agency head of either the conducting or the supporting Federal
department or agency may impose additional conditions prior to or at the time
of approval when in the judgment of the department or agency head additional conditions are necessary for the protection of human subjects.
Subpart B—Additional Protections for Pregnant Women, Human
Fetuses and Neonates Involved in Research
Source: 66 FR 56778, Nov. 13, 2001, unless otherwise noted.
§46.201 To what do these regulations apply?
(a) Except as provided in paragraph (b) of this section, this subpart applies
to all research involving pregnant women, human fetuses, neonates of uncertain viability, or nonviable neonates conducted or supported by the Department of Health and Human Services (DHHS). This includes all research
conducted in DHHS facilities by any person and all research conducted in any
facility by DHHS employees.
(b) The exemptions at §46.101(b)(1) through (6) are applicable to this
subpart.
(c) The provisions of §46.101(c) through (i) are applicable to this subpart.
Reference to State or local laws in this subpart and in §46.101(f) is intended
to include the laws of federally recognized American Indian and Alaska Native
Tribal Governments.
(d) The requirements of this subpart are in addition to those imposed
under the other subparts of this part.
§46.202 Definitions.
The definitions in §46.102 shall be applicable to this subpart as well. In
addition, as used in this subpart:
(a) Dead fetus means a fetus that exhibits neither heartbeat, spontaneous respiratory activity, spontaneous movement of voluntary muscles,
nor pulsation of the umbilical cord.
(b) Delivery means complete separation of the fetus from the woman by
expulsion or extraction or any other means.
(c) Fetus means the product of conception from implantation until
delivery.
(d) Neonate means a newborn.
(e) Nonviable neonate means a neonate after delivery that, although living,
is not viable.
(f) Pregnancy encompasses the period of time from implantation until
delivery. A woman shall be assumed to be pregnant if she exhibits any of the
pertinent presumptive signs of pregnancy, such as missed menses, until the
results of a pregnancy test are negative or until delivery.
(g) Secretary means the Secretary of Health and Human Services and
any other officer or employee of the Department of Health and Human
Services to whom authority has been delegated.
(h) Viable, as it pertains to the neonate, means being able, after delivery,
to survive (given the benefit of available medical therapy) to the point of
independently maintaining heartbeat and respiration. The Secretary may
from time to time, taking into account medical advances, publish in the
Federal Register guidelines to assist in determining whether a neonate is
viable for purposes of this subpart. If a neonate is viable then it may be
included in research only to the extent permitted and in accordance with the
requirements of subparts A and D of this part.
§46.203 Duties of IRBs in connection with research involving
pregnant women, fetuses, and neonates.
In addition to other responsibilities assigned to IRBs under this part, each
IRB shall review research covered by this subpart and approve only research
which satisfies the conditions of all applicable sections of this subpart and the
other subparts of this part.
§46.204 Research involving pregnant women or fetuses.
Pregnant women or fetuses may be involved in research if all of the following
conditions are met:
(a) Where scientifically appropriate, preclinical studies, including studies
on pregnant animals, and clinical studies, including studies on nonpregnant
women, have been conducted and provide data for assessing potential risks
to pregnant women and fetuses;
(b) The risk to the fetus is caused solely by interventions or procedures
that hold out the prospect of direct benefit for the woman or the fetus; or, if
there is no such prospect of benefit, the risk to the fetus is not greater than
minimal and the purpose of the research is the development of important
biomedical knowledge which cannot be obtained by any other means;
(c) Any risk is the least possible for achieving the objectives of the research;
(d) If the research holds out the prospect of direct benefit to the pregnant
woman, the prospect of a direct benefit both to the pregnant woman and the
fetus, or no prospect of benefit for the woman nor the fetus when risk to the
fetus is not greater than minimal and the purpose of the research is the development of important biomedical knowledge that cannot be obtained by any
other means, her consent is obtained in accord with the informed consent
provisions of subpart A of this part;
(e) If the research holds out the prospect of direct benefit solely to the
fetus then the consent of the pregnant woman and the father is obtained in
accord with the informed consent provisions of subpart A of this part, except
that the father’s consent need not be obtained if he is unable to consent
because of unavailability, incompetence, or temporary incapacity or the pregnancy resulted from rape or incest.
(f) Each individual providing consent under paragraph (d) or (e) of this
section is fully informed regarding the reasonably foreseeable impact of the
research on the fetus or neonate;
(g) For children as defined in §46.402(a) who are pregnant, assent
and permission are obtained in accord with the provisions of subpart D
of this part;
(h) No inducements, monetary or otherwise, will be offered to terminate
a pregnancy;
(i) Individuals engaged in the research will have no part in any decisions
as to the timing, method, or procedures used to terminate a pregnancy; and
(j) Individuals engaged in the research will have no part in determining
the viability of a neonate.
§46.205 Research involving neonates.
(a) Neonates of uncertain viability and nonviable neonates may be involved
in research if all of the following conditions are met:
(1) Where scientifically appropriate, preclinical and clinical studies have
been conducted and provide data for assessing potential risks to neonates.
(2) Each individual providing consent under paragraph (b)(2) or (c)(5) of
this section is fully informed regarding the reasonably foreseeable impact of
the research on the neonate.
(3) Individuals engaged in the research will have no part in determining
the viability of a neonate.
(4) The requirements of paragraph (b) or (c) of this section have been
met as applicable.
(b) Neonates of uncertain viability. Until it has been ascertained whether
or not a neonate is viable, a neonate may not be involved in research covered
by this subpart unless the following additional conditions are met:
(1) The IRB determines that:
(i) The research holds out the prospect of enhancing the probability of
survival of the neonate to the point of viability, and any risk is the least
possible for achieving that objective, or
(ii) The purpose of the research is the development of important biomedical knowledge which cannot be obtained by other means and there will
be no added risk to the neonate resulting from the research; and
(2) The legally effective informed consent of either parent of the neonate
or, if neither parent is able to consent because of unavailability, incompetence, or temporary incapacity, the legally effective informed consent of
either parent’s legally authorized representative is obtained in accord with
subpart A of this part, except that the consent of the father or his legally
authorized representative need not be obtained if the pregnancy resulted
from rape or incest.
(c) Nonviable neonates. After delivery, a nonviable neonate may not be
involved in research covered by this subpart unless all of the following additional conditions are met:
(1) Vital functions of the neonate will not be artificially maintained;
(2) The research will not terminate the heartbeat or respiration of the
neonate;
(3) There will be no added risk to the neonate resulting from the research;
(4) The purpose of the research is the development of important biomedical knowledge that cannot be obtained by other means; and
(5) The legally effective informed consent of both parents of the neonate
is obtained in accord with subpart A of this part, except that the waiver and
alteration provisions of §46.116(c) and (d) do not apply. However, if either
parent is unable to consent because of unavailability, incompetence, or
temporary incapacity, the informed consent of one parent of a nonviable neonate will suffice to meet the requirements of this paragraph (c)(5), except
that the consent of the father need not be obtained if the pregnancy resulted
from rape or incest. The consent of a legally authorized representative of
either or both of the parents of a nonviable neonate will not suffice to meet
the requirements of this paragraph (c)(5).
(d) Viable neonates. A neonate, after delivery, that has been determined
to be viable may be included in research only to the extent permitted by and
in accord with the requirements of subparts A and D of this part.
§46.206 Research involving, after delivery, the placenta, the dead
fetus or fetal material.
(a) Research involving, after delivery, the placenta; the dead fetus;
macerated fetal material; or cells, tissue, or organs excised from a dead fetus,
shall be conducted only in accord with any applicable Federal, State, or local
laws and regulations regarding such activities.
(b) If information associated with material described in paragraph (a) of
this section is recorded for research purposes in a manner that living individuals can be identified, directly or through identifiers linked to those individuals, those individuals are research subjects and all pertinent subparts of
this part are applicable.
§46.207 Research not otherwise approvable which presents an
opportunity to understand, prevent, or alleviate a serious problem
affecting the health or welfare of pregnant women, fetuses, or
neonates.
The Secretary will conduct or fund research that the IRB does not believe
meets the requirements of §46.204 or §46.205 only if:
(a) The IRB finds that the research presents a reasonable opportunity to
further the understanding, prevention, or alleviation of a serious problem
affecting the health or welfare of pregnant women, fetuses or neonates; and
(b) The Secretary, after consultation with a panel of experts in pertinent
disciplines (for example: science, medicine, ethics, law) and following opportunity for public review and comment, including a public meeting announced
in the Federal Register, has determined either:
(1) That the research in fact satisfies the conditions of §46.204, as applicable; or
(2) The following:
(i) The research presents a reasonable opportunity to further the understanding, prevention, or alleviation of a serious problem affecting the health
or welfare of pregnant women, fetuses or neonates;
(ii) The research will be conducted in accord with sound ethical principles; and
(iii) Informed consent will be obtained in accord with the informed consent provisions of subpart A and other applicable subparts of this part.
Subpart C—Additional Protections Pertaining to Biomedical and
Behavioral Research Involving Prisoners as Subjects
Source: 43 FR 53655, Nov. 16, 1978, unless otherwise noted.
§46.301 Applicability.
(a) The regulations in this subpart are applicable to all biomedical and
behavioral research conducted or supported by the Department of Health
and Human Services involving prisoners as subjects.
(b) Nothing in this subpart shall be construed as indicating that compliance with the procedures set forth herein will authorize research involving
prisoners as subjects, to the extent such research is limited or barred by
applicable State or local law.
(c) The requirements of this subpart are in addition to those imposed
under the other subparts of this part.
§46.302 Purpose.
Inasmuch as prisoners may be under constraints because of their incarceration
which could affect their ability to make a truly voluntary and uncoerced decision whether or not to participate as subjects in research, it is the purpose of
this subpart to provide additional safeguards for the protection of prisoners
involved in activities to which this subpart is applicable.
§46.303 Definitions.
As used in this subpart:
(a) Secretary means the Secretary of Health and Human Services and any
other officer or employee of the Department of Health and Human Services
to whom authority has been delegated.
(b) DHHS means the Department of Health and Human Services.
(c) Prisoner means any individual involuntarily confined or detained in a
penal institution. The term is intended to encompass individuals sentenced
to such an institution under a criminal or civil statute, individuals detained
in other facilities by virtue of statutes or commitment procedures which provide alternatives to criminal prosecution or incarceration in a penal institution, and individuals detained pending arraignment, trial, or sentencing.
(d) Minimal risk is the probability and magnitude of physical or psychological
harm that is normally encountered in the daily lives, or in the routine medical,
dental, or psychological examination of healthy persons.
§46.304 Composition of Institutional Review Boards where prisoners
are involved.
In addition to satisfying the requirements in §46.107 of this part, an Institutional Review Board, carrying out responsibilities under this part with
respect to research covered by this subpart, shall also meet the following specific requirements:
(a) A majority of the Board (exclusive of prisoner members) shall have no
association with the prison(s) involved, apart from their membership on the
Board.
(b) At least one member of the Board shall be a prisoner, or a prisoner
representative with appropriate background and experience to serve in that
capacity, except that where a particular research project is reviewed by more
than one Board only one Board need satisfy this requirement.
[43 FR 53655, Nov. 16, 1978, as amended at 46 FR 8386, Jan. 26, 1981]
§46.305 Additional duties of the Institutional Review Boards where
prisoners are involved.
(a) In addition to all other responsibilities prescribed for Institutional
Review Boards under this part, the Board shall review research covered by
this subpart and approve such research only if it finds that:
(1) The research under review represents one of the categories of research
permissible under §46.306(a)(2);
(2) Any possible advantages accruing to the prisoner through his or her
participation in the research, when compared to the general living conditions,
medical care, quality of food, amenities and opportunity for earnings in the
prison, are not of such a magnitude that his or her ability to weigh the risks
of the research against the value of such advantages in the limited choice
environment of the prison is impaired;
(3) The risks involved in the research are commensurate with risks that
would be accepted by nonprisoner volunteers;
(4) Procedures for the selection of subjects within the prison are fair to all
prisoners and immune from arbitrary intervention by prison authorities or
prisoners. Unless the principal investigator provides to the Board justification
in writing for following some other procedures, control subjects must be
selected randomly from the group of available prisoners who meet the characteristics needed for that particular research project;
(5) The information is presented in language which is understandable to
the subject population;
(6) Adequate assurance exists that parole boards will not take into account a
prisoner’s participation in the research in making decisions regarding parole,
and each prisoner is clearly informed in advance that participation in the
research will have no effect on his or her parole; and
(7) Where the Board finds there may be a need for follow-up examination
or care of participants after the end of their participation, adequate provision has been made for such examination or care, taking into account the
varying lengths of individual prisoners’ sentences, and for informing participants of this fact.
(b) The Board shall carry out such other duties as may be assigned by the
Secretary.
(c) The institution shall certify to the Secretary, in such form and manner
as the Secretary may require, that the duties of the Board under this section
have been fulfilled.
§46.306 Permitted research involving prisoners.
(a) Biomedical or behavioral research conducted or supported by DHHS
may involve prisoners as subjects only if:
(1) The institution responsible for the conduct of the research has certified
to the Secretary that the Institutional Review Board has approved the research
under §46.305 of this subpart; and
(2) In the judgment of the Secretary the proposed research involves solely
the following:
(i) Study of the possible causes, effects, and processes of incarceration, and
of criminal behavior, provided that the study presents no more than minimal
risk and no more than inconvenience to the subjects;
(ii) Study of prisons as institutional structures or of prisoners as incarcerated persons, provided that the study presents no more than minimal risk and
no more than inconvenience to the subjects;
(iii) Research on conditions particularly affecting prisoners as a class (for
example, vaccine trials and other research on hepatitis which is much more
prevalent in prisons than elsewhere; and research on social and psychological problems such as alcoholism, drug addiction and sexual assaults)
provided that the study may proceed only after the Secretary has consulted
with appropriate experts including experts in penology, medicine, and ethics,
and published notice, in the Federal Register, of his intent to approve such
research; or
(iv) Research on practices, both innovative and accepted, which have the
intent and reasonable probability of improving the health or well-being of the
subject. In cases in which those studies require the assignment of prisoners in
a manner consistent with protocols approved by the IRB to control groups
which may not benefit from the research, the study may proceed only after
the Secretary has consulted with appropriate experts, including experts in
penology, medicine, and ethics, and published notice, in the Federal Register,
of his intent to approve such research.
(b) Except as provided in paragraph (a) of this section, biomedical or
behavioral research conducted or supported by DHHS shall not involve prisoners as subjects.
Subpart D—Additional Protections for Children Involved as
Subjects in Research
Source: 48 FR 9818, Mar. 8, 1983, unless otherwise noted.
§46.401 To what do these regulations apply?
(a) This subpart applies to all research involving children as subjects,
conducted or supported by the Department of Health and Human Services.
(1) This includes research conducted by Department employees, except
that each head of an Operating Division of the Department may adopt such
nonsubstantive, procedural modifications as may be appropriate from an
administrative standpoint.
(2) It also includes research conducted or supported by the Department of
Health and Human Services outside the United States, but in appropriate circumstances, the Secretary may, under paragraph (e) of §46.101 of Subpart A,
waive the applicability of some or all of the requirements of these regulations
for research of this type.
(b) Exemptions at §46.101(b)(1) and (b)(3) through (b)(6) are applicable
to this subpart. The exemption at §46.101(b)(2) regarding educational tests
is also applicable to this subpart. However, the exemption at §46.101(b)(2)
for research involving survey or interview procedures or observations of
public behavior does not apply to research covered by this subpart, except
for research involving observation of public behavior when the investigator(s) do not participate in the activities being observed.
(c) The exceptions, additions, and provisions for waiver as they appear in
paragraphs (c) through (i) of §46.101 of Subpart A are applicable to this subpart.
[48 FR 9818, Mar. 8, 1983; 56 FR 28032, June 18, 1991; 56 FR 29757, June 28, 1991]
§46.402 Definitions.
The definitions in §46.102 of Subpart A shall be applicable to this subpart
as well. In addition, as used in this subpart:
(a) Children are persons who have not attained the legal age for consent to
treatments or procedures involved in the research, under the applicable law
of the jurisdiction in which the research will be conducted.
(b) Assent means a child’s affirmative agreement to participate in research.
Mere failure to object should not, absent affirmative agreement, be construed
as assent.
(c) Permission means the agreement of parent(s) or guardian to the participation of their child or ward in research.
(d) Parent means a child’s biological or adoptive parent.
(e) Guardian means an individual who is authorized under applicable
State or local law to consent on behalf of a child to general medical care.
§46.403 IRB duties.
In addition to other responsibilities assigned to IRBs under this part, each
IRB shall review research covered by this subpart and approve only research
which satisfies the conditions of all applicable sections of this subpart.
§46.404 Research not involving greater than minimal risk.
HHS will conduct or fund research in which the IRB finds that no greater
than minimal risk to children is presented, only if the IRB finds that adequate
provisions are made for soliciting the assent of the children and the permission of their parents or guardians, as set forth in §46.408.
§46.405 Research involving greater than minimal risk but presenting
the prospect of direct benefit to the individual subjects.
HHS will conduct or fund research in which the IRB finds that more than
minimal risk to children is presented by an intervention or procedure that
holds out the prospect of direct benefit for the individual subject, or by a
monitoring procedure that is likely to contribute to the subject’s well-being,
only if the IRB finds that:
(a) The risk is justified by the anticipated benefit to the subjects;
(b) The relation of the anticipated benefit to the risk is at least as favorable
to the subjects as that presented by available alternative approaches; and
(c) Adequate provisions are made for soliciting the assent of the children
and permission of their parents or guardians, as set forth in §46.408.
§46.406 Research involving greater than minimal risk and no
prospect of direct benefit to individual subjects, but likely to yield
generalizable knowledge about the subject’s disorder or condition.
HHS will conduct or fund research in which the IRB finds that more than
minimal risk to children is presented by an intervention or procedure that
does not hold out the prospect of direct benefit for the individual subject,
or by a monitoring procedure which is not likely to contribute to the well-being of the subject, only if the IRB finds that:
(a) The risk represents a minor increase over minimal risk;
(b) The intervention or procedure presents experiences to subjects that
are reasonably commensurate with those inherent in their actual or expected
medical, dental, psychological, social, or educational situations;
(c) The intervention or procedure is likely to yield generalizable knowledge about the subjects’ disorder or condition which is of vital importance for
the understanding or amelioration of the subjects’ disorder or condition; and
(d) Adequate provisions are made for soliciting assent of the children and
permission of their parents or guardians, as set forth in §46.408.
§46.407 Research not otherwise approvable which presents an
opportunity to understand, prevent, or alleviate a serious problem
affecting the health or welfare of children.
HHS will conduct or fund research that the IRB does not believe meets the
requirements of §46.404, §46.405, or §46.406 only if:
(a) The IRB finds that the research presents a reasonable opportunity to
further the understanding, prevention, or alleviation of a serious problem
affecting the health or welfare of children; and
(b) The Secretary, after consultation with a panel of experts in pertinent disciplines (for example: science, medicine, education, ethics, law) and following
opportunity for public review and comment, has determined either:
(1) That the research in fact satisfies the conditions of §46.404, §46.405,
or §46.406, as applicable, or
(2) The following:
(i) The research presents a reasonable opportunity to further the understanding, prevention, or alleviation of a serious problem affecting the health
or welfare of children;
(ii) The research will be conducted in accordance with sound ethical
principles;
(iii) Adequate provisions are made for soliciting the assent of children and
the permission of their parents or guardians, as set forth in §46.408.
§46.408 Requirements for permission by parents or guardians and for
assent by children.
(a) In addition to the determinations required under other applicable sections of this subpart, the IRB shall determine that adequate provisions are made
for soliciting the assent of the children, when in the judgment of the IRB the
children are capable of providing assent. In determining whether children are
capable of assenting, the IRB shall take into account the ages, maturity, and
psychological state of the children involved. This judgment may be made for all
children to be involved in research under a particular protocol, or for each
child, as the IRB deems appropriate. If the IRB determines that the capability of
some or all of the children is so limited that they cannot reasonably be consulted or that the intervention or procedure involved in the research holds out
a prospect of direct benefit that is important to the health or well-being of the
children and is available only in the context of the research, the assent of the
children is not a necessary condition for proceeding with the research. Even
where the IRB determines that the subjects are capable of assenting, the IRB
may still waive the assent requirement under circumstances in which consent
may be waived in accord with §46.116 of Subpart A.
(b) In addition to the determinations required under other applicable
sections of this subpart, the IRB shall determine, in accordance with and to
the extent that consent is required by §46.116 of Subpart A, that adequate
provisions are made for soliciting the permission of each child’s parents or
guardian. Where parental permission is to be obtained, the IRB may find that
the permission of one parent is sufficient for research to be conducted under
§46.404 or §46.405. Where research is covered by §§46.406 and 46.407 and
permission is to be obtained from parents, both parents must give their
permission unless one parent is deceased, unknown, incompetent, or not reasonably available, or when only one parent has legal responsibility for the
care and custody of the child.
(c) In addition to the provisions for waiver contained in §46.116 of Subpart A, if the IRB determines that a research protocol is designed for conditions or for a subject population for which parental or guardian permission is
not a reasonable requirement to protect the subjects (for example, neglected
or abused children), it may waive the consent requirements in Subpart A of
this part and paragraph (b) of this section, provided an appropriate mechanism for protecting the children who will participate as subjects in the research
is substituted, and provided further that the waiver is not inconsistent with
Federal, state or local law. The choice of an appropriate mechanism would
depend upon the nature and purpose of the activities described in the protocol, the risk and anticipated benefit to the research subjects, and their age,
maturity, status, and condition.
(d) Permission by parents or guardians shall be documented in accordance
with and to the extent required by §46.117 of Subpart A.
(e) When the IRB determines that assent is required, it shall also determine whether and how assent must be documented.
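For readers who find the permission rule in §46.408(b) easier to follow as a decision procedure, the short Python sketch below restates it. It is an editorial illustration only, not part of the regulation; the function and variable names are invented, and the IRB's separate judgment about assent capability under §46.408(a) is not modeled.

    # Editorial illustration of the parental permission rule in 45 CFR 46.408(b).
    # Not regulatory text; all names are invented for this sketch.

    ONE_PARENT_SECTIONS = {"46.404", "46.405"}  # IRB may find one parent's permission sufficient
    TWO_PARENT_SECTIONS = {"46.406", "46.407"}  # both parents must give permission, with exceptions

    def parents_whose_permission_is_needed(approval_section, other_parent_unavailable=False,
                                           sole_legal_responsibility=False):
        """Return 1 or 2, the number of parents who must give permission.

        other_parent_unavailable stands in for "deceased, unknown, incompetent,
        or not reasonably available" in 46.408(b).
        """
        if approval_section in ONE_PARENT_SECTIONS:
            return 1  # the IRB may accept one parent's permission
        if approval_section in TWO_PARENT_SECTIONS:
            if other_parent_unavailable or sole_legal_responsibility:
                return 1
            return 2
        raise ValueError("approval_section must be 46.404, 46.405, 46.406, or 46.407")

    if __name__ == "__main__":
        print(parents_whose_permission_is_needed("46.405"))                                  # 1
        print(parents_whose_permission_is_needed("46.406"))                                  # 2
        print(parents_whose_permission_is_needed("46.407", sole_legal_responsibility=True))  # 1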
§46.409 Wards.
(a) Children who are wards of the state or any other agency, institution,
or entity can be included in research approved under §46.406 or §46.407
only if such research is:
(1) Related to their status as wards; or
(2) Conducted in schools, camps, hospitals, institutions, or similar settings
in which the majority of children involved as subjects are not wards.
(b) If the research is approved under paragraph (a) of this section, the IRB
shall require appointment of an advocate for each child who is a ward, in
addition to any other individual acting on behalf of the child as guardian or in
loco parentis. One individual may serve as advocate for more than one child.
The advocate shall be an individual who has the background and experience to
act in, and agrees to act in, the best interests of the child for the duration of the
child’s participation in the research and who is not associated in any way
(except in the role as advocate or member of the IRB) with the research, the
investigator(s), or the guardian organization.
Subpart E—Registration of Institutional Review Boards
Source: 74 FR 2405, Jan. 15, 2009, unless otherwise noted.
§46.501 What IRBs must be registered?
Each IRB that is designated by an institution under an assurance of
compliance approved for federalwide use by the Office for Human Research
Protections (OHRP) under §46.103(a) and that reviews research involving
human subjects conducted or supported by the Department of Health and
Human Services (HHS) must be registered with HHS. An individual authorized
to act on behalf of the institution or organization operating the IRB must submit the registration information.
§46.502 What information must be provided when registering an IRB?
The following information must be provided to HHS when registering
an IRB:
(a) The name, mailing address, and street address (if different from the
mailing address) of the institution or organization operating the IRB(s); and
the name, mailing address, phone number, facsimile number, and electronic
mail address of the senior officer or head official of that institution or organization who is responsible for overseeing activities performed by the IRB.
(b) The name, mailing address, phone number, facsimile number, and electronic mail address of the contact person providing the registration information.
(c) The name, if any, assigned to the IRB by the institution or organization,
and the IRB’s mailing address, street address (if different from the mailing
address), phone number, facsimile number, and electronic mail address.
(d) The name, phone number, and electronic mail address of the IRB
chairperson.
(e)(1) The approximate numbers of:
(i) All active protocols; and
(ii) Active protocols conducted or supported by HHS.
(2) For purpose of this regulation, an “active protocol” is any protocol for
which the IRB conducted an initial review or a continuing review at a convened meeting or under an expedited review procedure during the preceding twelve months.
(f) The approximate number of full-time equivalent positions devoted to
the IRB’s administrative activities.
§46.503 When must an IRB be registered?
An IRB must be registered before it can be designated under an assurance
approved for federalwide use by OHRP under §46.103(a). IRB registration
becomes effective when reviewed and accepted by OHRP. The registration
will be effective for 3 years.
§46.504 How must an IRB be registered?
Each IRB must be registered electronically through http://ohrp.cit.nih.gov/
efile unless an institution or organization lacks the ability to register its IRB(s)
electronically. If an institution or organization lacks the ability to register an
IRB electronically, it must send its IRB registration information in writing
to OHRP.
§46.505 When must IRB registration information be renewed or
updated?
(a) Each IRB must renew its registration every 3 years.
(b) The registration information for an IRB must be updated within 90 days
after changes occur regarding the contact person who provided the IRB registration information or the IRB chairperson. The updated registration information must be submitted in accordance with §46.504.
(c) Any renewal or update that is submitted to, and accepted by, OHRP
begins a new 3-year effective period.
(d) An institution’s or organization’s decision to disband a registered
IRB which it is operating also must be reported to OHRP in writing within
30 days after permanent cessation of the IRB’s review of HHS-conducted or
-supported research.
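The registration timelines in §§46.503 through 46.505 reduce to simple date arithmetic. The sketch below is an editorial illustration only, not part of the regulation; the names are invented, and the 3-year effective period is approximated as 3 x 365 days.

    # Editorial illustration of the registration timelines in 45 CFR 46.503-46.505.
    # Not regulatory text; names are invented, and the 3-year period is
    # approximated here as 3 x 365 days.
    from datetime import date, timedelta

    EFFECTIVE_PERIOD = timedelta(days=3 * 365)   # registration "effective for 3 years"
    UPDATE_WINDOW = timedelta(days=90)           # update after a contact or chairperson change
    DISBAND_REPORT_WINDOW = timedelta(days=30)   # report a disbanded IRB to OHRP

    def new_expiration(accepted_by_ohrp: date) -> date:
        """Any accepted renewal or update begins a new 3-year effective period (46.505(c))."""
        return accepted_by_ohrp + EFFECTIVE_PERIOD

    def update_due(change_date: date) -> date:
        """Updated registration information is due within 90 days of the change (46.505(b))."""
        return change_date + UPDATE_WINDOW

    def disband_report_due(cessation_date: date) -> date:
        """A disbanded IRB must be reported within 30 days of cessation (46.505(d))."""
        return cessation_date + DISBAND_REPORT_WINDOW

    if __name__ == "__main__":
        print(new_expiration(date(2021, 1, 15)))   # 2024-01-15 (approximate 3-year mark)
        print(update_due(date(2021, 6, 1)))        # 2021-08-30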
APPENDIX C
Title 34 Code of Federal
Regulations Part 98
Title 34: Education
PART 98—STUDENT RIGHTS IN RESEARCH, EXPERIMENTAL
PROGRAMS, AND TESTING
Contents
§98.1 Applicability of part.
§98.2 Definitions.
§98.3 Access to instructional material used in a research or experimentation
program.
§98.4 Protection of students’ privacy in examination, testing, or treatment.
§98.5 Information and investigation office.
§98.6 Reports.
§98.7 Filing a complaint.
§98.8 Notice of the complaint.
§98.9 Investigation and findings.
§98.10 Enforcement of the findings.
Authority: Sec. 514(a) of Pub. L. 93–380, 88 Stat. 574 (20 U.S.C. 1232h(a));
sec. 1250 of Pub. L. 95–561, 92 Stat. 2355–2356 (20 U.S.C. 1232h(b)); and sec. 408(a)
(1) of Pub. L. 90–247, 88 Stat. 559–560, as amended (20 U.S.C. 1221e-3(a)(1)); sec.
414(a) of Pub. L. 96–88, 93 Stat. 685 (20 U.S.C. 3474(a)), unless otherwise noted.
Source: 49 FR 35321, Sept. 6, 1984, unless otherwise noted.
The content of this appendix is reprinted from a U.S. government regulation
(https://studentprivacy.ed.gov/content/ppra). In the public domain.
§98.1 Applicability of part.
This part applies to any program administered by the Secretary of Education that:
(a)(1) Was transferred to the Department by the Department of Education
Organization Act (DEOA); and
(2) Was administered by the Education Division of the Department of Health,
Education, and Welfare on the day before the effective date of the DEOA; or
(b) Was enacted after the effective date of the DEOA, unless the law
enacting the new Federal program has the effect of making section 439 of
the General Education Provisions Act inapplicable.
(c) The following chart lists the funded programs to which part 98 does
not apply as of February 16, 1984.
Name of program: 1. High School Equivalency Program and College Assistance Migrant Program
Authorizing statute: Section 418A of the Higher Education Act of 1965 as amended by the Education Amendments of 1980 (Pub. L. 96–374, 20 U.S.C. 1070d-2)
Implementing regulations: part 206.

Name of program: 2. Programs administered by the Commissioner of the Rehabilitative Services Administration
Authorizing statute: The Rehabilitation Act of 1973 as amended by Pub. L. 95–602 (29 U.S.C. 700, et seq.)
Implementing regulations: parts 351–356, 361, 362, 365, 366, 369–375, 378, 379, 385–390, and 395.

Name of program: 3. College housing
Authorizing statute: Title IV of the Housing Act of 1950 as amended (12 U.S.C. 1749, et seq.)
Implementing regulations: part 614.
(Authority: 20 U.S.C. 1221e-3(a)(1), 1230, 1232h, 3487, 3507)
§98.2 Definitions.
(a) The following terms used in this part are defined in 34 CFR part 77;
“Department,” “Recipient,” “Secretary.”
(b) The following definitions apply to this part:
Act means the General Education Provisions Act.
Office means the information and investigation office specified in §98.5.
(Authority: 20 U.S.C. 1221e-3(a)(1))
§98.3 Access to instructional material used in a research or
experimentation program.
(a) All instructional material—including teachers’ manuals, films, tapes, or
other supplementary instructional material—which will be used in connection
with any research or experimentation program or project shall be available
for inspection by the parents or guardians of the children engaged in such
program or project.
(b) For the purpose of this part research or experimentation program or project
means any program or project in any program under §98.1 (a) or (b) that is
designed to explore or develop new or unproven teaching methods or
techniques.
(c) For the purpose of the section children means persons not above age 21
who are enrolled in a program under §98.1 (a) or (b) not above the elementary
or secondary education level, as determined under State law.
(Authority: 20 U.S.C. 1221e–3(a)(1), 1232h(a))
§98.4 Protection of students’ privacy in examination, testing, or
treatment.
(a) No student shall be required, as part of any program specified in §98.1 (a)
or (b), to submit without prior consent to psychiatric examination, testing, or
treatment, or psychological examination, testing, or treatment, in which the primary purpose is to reveal information concerning one or more of the following:
(1) Political affiliations;
(2) Mental and psychological problems potentially embarrassing to the
student or his or her family;
(3) Sex behavior and attitudes;
(4) Illegal, anti-social, self-incriminating and demeaning behavior;
(5) Critical appraisals of other individuals with whom the student has
close family relationships;
(6) Legally recognized privileged and analogous relationships, such as
those of lawyers, physicians, and ministers; or
(7) Income, other than that required by law to determine eligibility for participation in a program or for receiving financial assistance under a program.
(b) As used in paragraph (a) of this section, prior consent means:
(1) Prior consent of the student if the student is an adult or emancipated
minor; or
(2) Prior written consent of the parent or guardian, if the student is an
unemancipated minor.
(c) As used in paragraph (a) of this section:
(1) Psychiatric or psychological examination or test means a method of
obtaining information, including a group activity, that is not directly related to
academic instruction and that is designed to elicit information about attitudes, habits, traits, opinions, beliefs or feelings; and
(2) Psychiatric or psychological treatment means an activity involving the
planned, systematic use of methods or techniques that are not directly
related to academic instruction and that is designed to affect behavioral,
emotional, or attitudinal characteristics of an individual or group.
(Authority: 20 U.S.C. 1232h(b))
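The prior-consent rule in §98.4(b) turns on the student's status. The following sketch is an editorial illustration of that rule, not regulatory text; the function and parameter names are invented.

    # Editorial illustration of the "prior consent" rule in 34 CFR 98.4(b).
    # Not regulatory text; names are invented.

    def required_prior_consent(is_adult: bool, is_emancipated_minor: bool) -> str:
        """Whose prior consent 98.4(a) requires before a covered psychiatric or
        psychological examination, test, or treatment."""
        if is_adult or is_emancipated_minor:
            return "prior consent of the student"
        return "prior written consent of the parent or guardian"

    if __name__ == "__main__":
        print(required_prior_consent(is_adult=False, is_emancipated_minor=False))
        print(required_prior_consent(is_adult=True, is_emancipated_minor=False))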
§98.5 Information and investigation office.
(a) The Secretary has designated an office to provide information about
the requirements of section 439 of the Act, and to investigate, process, and
review complaints that may be filed concerning alleged violations of the
provisions of the section.
(b) The following is the name and address of the office designated under paragraph (a) of this section: Family Educational Rights and Privacy Act Office, U.S.
Department of Education, 400 Maryland Avenue, SW., Washington, DC 20202.
(Authority: 20 U.S.C. 1221e-3(a)(1), 1232h)
§98.6 Reports.
The Secretary may require the recipient to submit reports containing
information necessary to resolve complaints under section 439 of the Act
and the regulations in this part.
(Authority: 20 U.S.C. 1221e-3(a)(1), 1232h)
§98.7 Filing a complaint.
(a) Only a student or a parent or guardian of a student directly affected by
a violation under Section 439 of the Act may file a complaint under this part.
The complaint must be submitted in writing to the Office.
(b) The complaint filed under paragraph (a) of this section must—
(1) Contain specific allegations of fact giving reasonable cause to believe
that a violation of either §98.3 or §98.4 exists; and
(2) Include evidence of attempted resolution of the complaint at the local
level (and at the State level if a State complaint resolution process exists),
including the names of local and State officials contacted and significant dates
in the attempted resolution process.
(c) The Office investigates each complaint which the Office receives
that meets the requirements of this section to determine whether the
recipient or contractor failed to comply with the provisions of section 439
of the Act.
(Approved by the Office of Management and Budget under control number 1880–0507)
(Authority: 20 U.S.C. 1221e-3(a)(1), 1232h)
§98.8 Notice of the complaint.
(a) If the Office receives a complaint that meets the requirements of §98.7,
it provides written notification to the complainant and the recipient or contractor against which the violation has been alleged that the complaint has
been received.
(b) The notice to the recipient or contractor under paragraph (a) of this
section must:
(1) Include the substance of the alleged violation; and
(2) Inform the recipient or contractor that the Office will investigate the
complaint and that the recipient or contractor may submit a written response
to the complaint.
(Authority: 20 U.S.C. 1221e-3(a)(1), 1232h)
§98.9 Investigation and findings.
(a) The Office may permit the parties to submit further written or oral
arguments or information.
(b) Following its investigations, the Office provides to the complainant and
recipient or contractor written notice of its findings and the basis for its findings.
(c) If the Office finds that the recipient or contractor has not complied
with section 439 of the Act, the Office includes in its notice under paragraph
(b) of this section:
(1) A statement of the specific steps that the Secretary recommends the
recipient or contractor take to comply; and
(2) Provides a reasonable period of time, given all of the circumstances of
the case, during which the recipient or contractor may comply voluntarily.
(Authority: 20 U.S.C. 1221e-3(a)(1), 1232h)
§98.10 Enforcement of the findings.
(a) If the recipient or contractor does not comply during the period of time
set under §98.9(c), the Secretary may either:
(1) For a recipient, take an action authorized under 34 CFR part 78,
including:
(i) Issuing a notice of intent to terminate funds under 34 CFR 78.21;
(ii) Issuing a notice to withhold funds under 34 CFR 78.21, 200.94(b), or
298.45(b), depending upon the applicable program under which the notice
is issued; or
(iii) Issuing a notice to cease and desist under 34 CFR 78.31, 200.94(c) or
298.45(c), depending upon the program under which the notice is issued; or
(2) For a contractor, direct the contracting officer to take an appropriate
action authorized under the Federal Acquisition Regulations, including either:
(i) Issuing a notice to suspend operations under 48 CFR 12.5; or
(ii) Issuing a notice to terminate for default, either in whole or in part
under 48 CFR 49.102.
(b) If, after an investigation under §98.9, the Secretary finds that a recipient
or contractor has complied voluntarily with section 439 of the Act, the Secretary provides the complainant and the recipient or contractor written notice
of the decision and the basis for the decision.
(Authority: 20 U.S.C. 1221e-3(a)(1), 1232h)
APPENDIX D
Title 34 Code of Federal
Regulations Part 99
Title 34: Education
PART 99—FAMILY EDUCATIONAL RIGHTS AND PRIVACY
Contents
Subpart A—General
§99.1 To which educational agencies or institutions do these regulations
apply?
§99.2 What is the purpose of these regulations?
§99.3 What definitions apply to these regulations?
§99.4 What are the rights of parents?
§99.5 What are the rights of students?
§99.6 [Reserved]
§99.7 What must an educational agency or institution include in its
annual notification?
§99.8 What provisions apply to records of a law enforcement unit?
Subpart B—What Are the Rights of Inspection and Review of Education
Records?
§99.10 What rights exist for a parent or eligible student to inspect and review
education records?
§99.11 May an educational agency or institution charge a fee for copies of
education records?
§99.12 What limitations exist on the right to inspect and review records?
The content of this appendix is reprinted from a U.S. government regulation
(https://www2.ed.gov/policy/gen/guid/fpco/pdf/ferparegs.pdf). In the public domain.
Subpart C—What Are the Procedures for Amending Education
Records?
§99.20 How can a parent or eligible student request amendment of the
student’s education records?
§99.21 Under what conditions does a parent or eligible student have the
right to a hearing?
§99.22 What minimum requirements exist for the conduct of a hearing?
Subpart D—May an Educational Agency or Institution Disclose
Personally Identifiable Information From Education Records?
§99.30 Under what conditions is prior consent required to disclose
information?
§99.31 Under what conditions is prior consent not required to disclose
information?
§99.32 What recordkeeping requirements exist concerning requests and
disclosures?
§99.33 What limitations apply to the redisclosure of information?
§99.34 What conditions apply to disclosure of information to other educational agencies or institutions?
§99.35 What conditions apply to disclosure of information for Federal or
State program purposes?
§99.36 What conditions apply to disclosure of information in health and
safety emergencies?
§99.37 What conditions apply to disclosing directory information?
§99.38 What conditions apply to disclosure of information as permitted
by State statute adopted after November 19, 1974, concerning the juvenile
justice system?
§99.39 What definitions apply to the nonconsensual disclosure of
records by postsecondary educational institutions in connection with
disciplinary proceedings concerning crimes of violence or non-forcible
sex offenses?
Subpart E—What Are the Enforcement Procedures?
§99.60 What functions has the Secretary delegated to the Office and to the
Office of Administrative Law Judges?
§99.61 What responsibility does an educational agency or institution, a
recipient of Department funds, or a third party outside of an educational
agency or institution have concerning conflict with State or local laws?
§99.62 What information must an educational agency or institution or
other recipient of Department funds submit to the Office?
§99.63 Where are complaints filed?
§99.64 What is the investigation procedure?
§99.65 What is the content of the notice of investigation issued by the Office?
§99.66 What are the responsibilities of the Office in the enforcement
process?
§99.67 How does the Secretary enforce decisions?
Appendix A to Part 99—Crimes of Violence Definitions
Authority: 20 U.S.C. 1232g, unless otherwise noted.
Source: 53 FR 11943, Apr. 11, 1988, unless otherwise noted.
Subpart A—General
§99.1 To which educational agencies or institutions do these
regulations apply?
(a) Except as otherwise noted in §99.10, this part applies to an educational agency or institution to which funds have been made available under
any program administered by the Secretary, if—
(1) The educational institution provides educational services or instruction, or both, to students; or
(2) The educational agency is authorized to direct and control public elementary or secondary, or postsecondary educational institutions.
(b) This part does not apply to an educational agency or institution solely
because students attending that agency or institution receive non-monetary
benefits under a program referenced in paragraph (a) of this section, if no funds
under that program are made available to the agency or institution.
(c) The Secretary considers funds to be made available to an educational
agency or institution of funds under one or more of the programs referenced
in paragraph (a) of this section—
(1) Are provided to the agency or institution by grant, cooperative agreement, contract, subgrant, or subcontract; or
(2) Are provided to students attending the agency or institution and the
funds may be paid to the agency or institution by those students for educational purposes, such as under the Pell Grant Program and the Guaranteed
Student Loan Program (titles IV-A-1 and IV-B, respectively, of the Higher
Education Act of 1965, as amended).
(d) If an educational agency or institution receives funds under one or
more of the programs covered by this section, the regulations in this part
apply to the recipient as a whole, including each of its components (such as a
department within a university).
(Authority: 20 U.S.C. 1232g)
[53 FR 11943, Apr. 11, 1988, as amended at 61 FR 59295, Nov. 21, 1996; 65 FR 41852,
July 6, 2000]
§99.2 What is the purpose of these regulations?
The purpose of this part is to set out requirements for the protection of
privacy of parents and students under section 444 of the General Education
Provisions Act, as amended.
(Authority: 20 U.S.C. 1232g)
Note to §99.2: 34 CFR 300.610 through 300.626 contain requirements regarding
the confidentiality of information relating to children with disabilities who receive
evaluations, services or other benefits under Part B of the Individuals with Disabilities
Education Act (IDEA). 34 CFR 303.402 and 303.460 identify the confidentiality of
information requirements regarding children and infants and toddlers with disabilities
and their families who receive evaluations, services, or other benefits under Part C of
IDEA. 34 CFR 300.610 through 300.627 contain the confidentiality of information
requirements that apply to personally identifiable data, information, and records collected or maintained pursuant to Part B of the IDEA.
[53 FR 11943, Apr. 11, 1988, as amended at 61 FR 59295, Nov. 21, 1996; 73 FR 74851,
Dec. 9, 2008]
§99.3 What definitions apply to these regulations?
The following definitions apply to this part:
Act means the Family Educational Rights and Privacy Act of 1974, as
amended, enacted as section 444 of the General Education Provisions Act.
(Authority: 20 U.S.C. 1232g)
Attendance includes, but is not limited to—
(a) Attendance in person or by paper correspondence, videoconference,
satellite, Internet, or other electronic information and telecommunications technologies for students who are not physically present in the classroom; and
(b) The period during which a person is working under a work-study
program.
(Authority: 20 U.S.C. 1232g)
Authorized representative means any entity or individual designated by a State
or local educational authority or an agency headed by an official listed in
§99.31(a)(3) to conduct—with respect to Federal- or State-supported education programs—any audit or evaluation, or any compliance or enforcement
activity in connection with Federal legal requirements that relate to these
programs.
(Authority: 20 U.S.C. 1232g(b)(1)(C), (b)(3), and (b)(5))
Biometric record, as used in the definition of personally identifiable information, means a record of one or more measurable biological or behavioral
characteristics that can be used for automated recognition of an individual.
Examples include fingerprints; retina and iris patterns; voiceprints; DNA
sequence; facial characteristics; and handwriting.
(Authority: 20 U.S.C. 1232g)
Dates of attendance. (a) The term means the period of time during which a
student attends or attended an educational agency or institution. Examples
of dates of attendance include an academic year, a spring semester, or a first
quarter.
(b) The term does not include specific daily records of a student’s attendance at an educational agency or institution.
(Authority: 20 U.S.C. 1232g(a)(5)(A))
Directory information means information contained in an education record
of a student that would not generally be considered harmful or an invasion of
privacy if disclosed.
(a) Directory information includes, but is not limited to, the student’s
name; address; telephone listing; electronic mail address; photograph; date
and place of birth; major field of study; grade level; enrollment status (e.g.,
undergraduate or graduate, full-time or part-time); dates of attendance; participation in officially recognized activities and sports; weight and height of
members of athletic teams; degrees, honors, and awards received; and the
most recent educational agency or institution attended.
(b) Directory information does not include a student’s—
(1) Social security number; or
(2) Student identification (ID) number, except as provided in paragraph (c)
of this definition.
(c) In accordance with paragraphs (a) and (b) of this definition, directory
information includes—
(1) A student ID number, user ID, or other unique personal identifier used
by a student for purposes of accessing or communicating in electronic systems,
but only if the identifier cannot be used to gain access to education records
except when used in conjunction with one or more factors that authenticate the
user’s identity, such as a personal identification number (PIN), password or
other factor known or possessed only by the authorized user; and
(2) A student ID number or other unique personal identifier that is displayed on a student ID badge, but only if the identifier cannot be used to gain
access to education records except when used in conjunction with one or more
factors that authenticate the user’s identity, such as a PIN, password, or other
factor known or possessed only by the authorized user.
(Authority: 20 U.S.C. 1232g(a)(5)(A))
Disciplinary action or proceeding means the investigation, adjudication, or
imposition of sanctions by an educational agency or institution with respect
to an infraction or violation of the internal rules of conduct applicable to
students of the agency or institution.
Disclosure means to permit access to or the release, transfer, or other communication of personally identifiable information contained in education records
by any means, including oral, written, or electronic means, to any party except
the party identified as the party that provided or created the record.
(Authority: 20 U.S.C. 1232g(b)(1) and (b)(2))
Early childhood education program means—
(a) A Head Start program or an Early Head Start program carried out under
the Head Start Act (42 U.S.C. 9831 et seq.), including a migrant or seasonal Head
Start program, an Indian Head Start program, or a Head Start program or an
Early Head Start program that also receives State funding;
(b) A State licensed or regulated child care program; or
(c) A program that—
(1) Serves children from birth through age six that addresses the children’s cognitive (including language, early literacy, and early mathematics),
social, emotional, and physical development; and
(2) Is—
(i) A State prekindergarten program;
(ii) A program authorized under section 619 or part C of the Individuals
with Disabilities Education Act; or
(iii) A program operated by a local educational agency.
Educational agency or institution means any public or private agency or institution to which this part applies under §99.1(a).
(Authority: 20 U.S.C. 1232g(a)(3))
Education program means any program that is principally engaged in the provision of education, including, but not limited to, early childhood education,
elementary and secondary education, postsecondary education, special education, job training, career and technical education, and adult education, and any
program that is administered by an educational agency or institution.
(Authority: 20 U.S.C. 1232g(b)(3), (b)(5))
Education records. (a) The term means those records that are:
(1) Directly related to a student; and
(2) Maintained by an educational agency or institution or by a party acting for the agency or institution.
(b) The term does not include:
(1) Records that are kept in the sole possession of the maker, are used only
as a personal memory aid, and are not accessible or revealed to any other
person except a temporary substitute for the maker of the record.
(2) Records of the law enforcement unit of an educational agency or institution, subject to the provisions of §99.8.
(3)(i) Records relating to an individual who is employed by an educational agency or institution, that:
(A) Are made and maintained in the normal course of business;
(B) Relate exclusively to the individual in that individual’s capacity as an
employee; and
(C) Are not available for use for any other purpose.
(ii) Records relating to an individual in attendance at the agency or institution who is employed as a result of his or her status as a student are education records and not excepted under paragraph (b)(3)(i) of this definition.
(4) Records on a student who is 18 years of age or older, or is attending an
institution of postsecondary education, that are:
(i) Made or maintained by a physician, psychiatrist, psychologist, or other
recognized professional or paraprofessional acting in his or her professional
capacity or assisting in a paraprofessional capacity;
(ii) Made, maintained, or used only in connection with treatment of the
student; and
(iii) Disclosed only to individuals providing the treatment. For the purpose
of this definition, “treatment” does not include remedial educational activities
or activities that are part of the program of instruction at the agency or institution; and
(5) Records created or received by an educational agency or institution
after an individual is no longer a student in attendance and that are not
directly related to the individual’s attendance as a student.
(6) Grades on peer-graded papers before they are collected and recorded
by a teacher.
(Authority: 20 U.S.C. 1232g(a)(4))
Eligible student means a student who has reached 18 years of age or is
attending an institution of postsecondary education.
(Authority: 20 U.S.C. 1232g(d))
Institution of postsecondary education means an institution that provides
education to students beyond the secondary school level; “secondary school
level” means the educational level (not beyond grade 12) at which secondary
education is provided as determined under State law.
(Authority: 20 U.S.C. 1232g(d))
Parent means a parent of a student and includes a natural parent, a
guardian, or an individual acting as a parent in the absence of a parent or
a guardian.
(Authority: 20 U.S.C. 1232g)
Party means an individual, agency, institution, or organization.
(Authority: 20 U.S.C. 1232g(b)(4)(A))
Personally identifiable information. The term includes, but is not limited to—
(a) The student’s name;
(b) The name of the student’s parent or other family members;
(c) The address of the student or student’s family;
(d) A personal identifier, such as the student’s social security number,
student number, or biometric record;
(e) Other indirect identifiers, such as the student’s date of birth, place of
birth, and mother’s maiden name;
(f) Other information that, alone or in combination, is linked or linkable
to a specific student that would allow a reasonable person in the school community, who does not have personal knowledge of the relevant circumstances, to identify the student with reasonable certainty; or
(g) Information requested by a person who the educational agency or
institution reasonably believes knows the identity of the student to whom the
education record relates.
(Authority: 20 U.S.C. 1232g)
Record means any information recorded in any way, including, but not
limited to, handwriting, print, computer media, video or audio tape, film, microfilm, and microfiche.
(Authority: 20 U.S.C. 1232g)
Secretary means the Secretary of the U.S. Department of Education or an
official or employee of the Department of Education acting for the Secretary
under a delegation of authority.
(Authority: 20 U.S.C. 1232g)
Student, except as otherwise specifically provided in this part, means
any individual who is or has been in attendance at an educational agency
or institution and regarding whom the agency or institution maintains
education records.
(Authority: 20 U.S.C. 1232g(a)(6))
[53 FR 11943, Apr. 11, 1988, as amended at 60 FR 3468, Jan. 17, 1995; 61 FR 59295,
Nov. 21, 1996; 65 FR 41852, July 6, 2000; 73 FR 74851, Dec. 9, 2008; 76 FR 75641,
Dec. 2, 2011]
§99.4 What are the rights of parents?
An educational agency or institution shall give full rights under the Act to
either parent, unless the agency or institution has been provided with evidence that there is a court order, State statute, or legally binding document
relating to such matters as divorce, separation, or custody that specifically
revokes these rights.
(Authority: 20 U.S.C. 1232g)
§99.5 What are the rights of students?
(a)(1) When a student becomes an eligible student, the rights accorded to,
and consent required of, parents under this part transfer from the parents
to the student.
(2) Nothing in this section prevents an educational agency or institution
from disclosing education records, or personally identifiable information from
education records, to a parent without the prior written consent of an eligible
student if the disclosure meets the conditions in §99.31(a)(8), §99.31(a)(10),
§99.31(a)(15), or any other provision in §99.31(a).
(b) The Act and this part do not prevent educational agencies or institutions from giving students rights in addition to those given to parents.
(c) An individual who is or has been a student at an educational institution and who applies for admission at another component of that institution
does not have rights under this part with respect to records maintained by
that other component, including records maintained in connection with the
student’s application for admission, unless the student is accepted and attends
that other component of the institution.
(Authority: 20 U.S.C. 1232g(d))
[53 FR 11943, Apr. 11, 1988, as amended at 58 FR 3188, Jan. 7, 1993; 65 FR 41853,
July 6, 2000; 73 FR 74852, Dec. 9, 2008]
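Taken together, the definition of eligible student in §99.3 and the transfer rule in §99.5(a)(1) amount to a simple test. The sketch below is an editorial illustration only, not part of the regulation; the names are invented, and it does not model the parental-disclosure exceptions preserved by §99.5(a)(2).

    # Editorial illustration of the "eligible student" test (definition in 99.3)
    # and the rights transfer in 99.5(a)(1). Not regulatory text; names are invented.

    def is_eligible_student(age: int, attends_postsecondary: bool) -> bool:
        """An eligible student has reached 18 years of age or attends an
        institution of postsecondary education."""
        return age >= 18 or attends_postsecondary

    def rights_holder(age: int, attends_postsecondary: bool) -> str:
        """Rights and required consent transfer from the parents to an eligible student."""
        return "student" if is_eligible_student(age, attends_postsecondary) else "parent"

    if __name__ == "__main__":
        print(rights_holder(17, attends_postsecondary=False))  # parent
        print(rights_holder(17, attends_postsecondary=True))   # student
        print(rights_holder(18, attends_postsecondary=False))  # student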
§99.6 [Reserved]
§99.7 What must an educational agency or institution include in its
annual notification?
(a)(1) Each educational agency or institution shall annually notify parents
of students currently in attendance, or eligible students currently in attendance, of their rights under the Act and this part.
(2) The notice must inform parents or eligible students that they have the
right to—
(i) Inspect and review the student’s education records;
(ii) Seek amendment of the student’s education records that the parent or
eligible student believes to be inaccurate, misleading, or otherwise in violation of the student’s privacy rights;
(iii) Consent to disclosures of personally identifiable information contained in the student’s education records, except to the extent that the Act
and §99.31 authorize disclosure without consent; and
(iv) File with the Department a complaint under §§99.63 and 99.64 concerning alleged failures by the educational agency or institution to comply
with the requirements of the Act and this part.
(3) The notice must include all of the following:
(i) The procedure for exercising the right to inspect and review education
records.
(ii) The procedure for requesting amendment of records under §99.20.
(iii) If the educational agency or institution has a policy of disclosing education records under §99.31(a)(1), a specification of criteria for determining
who constitutes a school official and what constitutes a legitimate educational interest.
(b) An educational agency or institution may provide this notice by any
means that are reasonably likely to inform the parents or eligible students of
their rights.
(1) An educational agency or institution shall effectively notify parents or
eligible students who are disabled.
(2) An agency or institution of elementary or secondary education shall
effectively notify parents who have a primary or home language other than
English.
(Approved by the Office of Management and Budget under control number 1880-0508)
(Authority: 20 U.S.C. 1232g (e) and (f))
[61 FR 59295, Nov. 21, 1996]
§99.8 What provisions apply to records of a law enforcement unit?
(a)(1) Law enforcement unit means any individual, office, department, division, or other component of an educational agency or institution, such as a unit
of commissioned police officers or non-commissioned security guards, that is
officially authorized or designated by that agency or institution to—
(i) Enforce any local, State, or Federal law, or refer to appropriate authorities a matter for enforcement of any local, State, or Federal law against any
individual or organization other than the agency or institution itself; or
(ii) Maintain the physical security and safety of the agency or institution.
(2) A component of an educational agency or institution does not lose its
status as a law enforcement unit if it also performs other, non-law enforcement
functions for the agency or institution, including investigation of incidents or
conduct that constitutes or leads to a disciplinary action or proceedings against
the student.
(b)(1) Records of a law enforcement unit means those records, files, documents, and other materials that are—
(i) Created by a law enforcement unit;
(ii) Created for a law enforcement purpose; and
(iii) Maintained by the law enforcement unit.
(2) Records of a law enforcement unit does not mean—
(i) Records created by a law enforcement unit for a law enforcement purpose that are maintained by a component of the educational agency or institution other than the law enforcement unit; or
(ii) Records created and maintained by a law enforcement unit exclusively
for a non-law enforcement purpose, such as a disciplinary action or proceeding conducted by the educational agency or institution.
(c)(1) Nothing in the Act prohibits an educational agency or institution
from contacting its law enforcement unit, orally or in writing, for the purpose
of asking that unit to investigate a possible violation of, or to enforce, any
local, State, or Federal law.
(2) Education records, and personally identifiable information contained
in education records, do not lose their status as education records and remain
subject to the Act, including the disclosure provisions of §99.30, while in the
possession of the law enforcement unit.
(d) The Act neither requires nor prohibits the disclosure by an educational
agency or institution of its law enforcement unit records.
(Authority: 20 U.S.C. 1232g(a)(4)(B)(ii))
[60 FR 3469, Jan. 17, 1995]
Subpart B—What Are the Rights of Inspection and Review of
Education Records?
§99.10 What rights exist for a parent or eligible student to inspect and
review education records?
(a) Except as limited under §99.12, a parent or eligible student must be
given the opportunity to inspect and review the student’s education records.
This provision applies to—
(1) Any educational agency or institution; and
(2) Any State educational agency (SEA) and its components.
(i) For the purposes of subpart B of this part, an SEA and its components
constitute an educational agency or institution.
(ii) An SEA and its components are subject to subpart B of this part if the
SEA maintains education records on students who are or have been in attendance at any school of an educational agency or institution subject to the Act
and this part.
(b) The educational agency or institution, or SEA or its component, shall
comply with a request for access to records within a reasonable period of
time, but not more than 45 days after it has received the request.
(c) The educational agency or institution, or SEA or its component shall
respond to reasonable requests for explanations and interpretations of the
records.
(d) If circumstances effectively prevent the parent or eligible student from
exercising the right to inspect and review the student’s education records, the
educational agency or institution, or SEA or its component, shall—
(1) Provide the parent or eligible student with a copy of the records
requested; or
(2) Make other arrangements for the parent or eligible student to inspect
and review the requested records.
(e) The educational agency or institution, or SEA or its component shall
not destroy any education records if there is an outstanding request to inspect
and review the records under this section.
(f) While an education agency or institution is not required to give an eligible
student access to treatment records under paragraph (b)(4) of the definition of
Education records in §99.3, the student may have those records reviewed by a
physician or other appropriate professional of the student’s choice.
(Authority: 20 U.S.C. 1232g(a)(1) (A) and (B))
[53 FR 11943, Apr. 11, 1988, as amended at 61 FR 59296, Nov. 21, 1996]
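The 45-day access deadline in §99.10(b) is straightforward date arithmetic. The sketch below is an editorial illustration only, not part of the regulation; the names are invented, and the regulation also requires compliance within a reasonable period that may be shorter than 45 days.

    # Editorial illustration of the access deadline in 34 CFR 99.10(b).
    # Not regulatory text; names are invented.
    from datetime import date, timedelta

    ACCESS_DEADLINE = timedelta(days=45)  # "not more than 45 days after it has received the request"

    def latest_compliance_date(request_received: date) -> date:
        """Latest date for providing access to the requested education records."""
        return request_received + ACCESS_DEADLINE

    if __name__ == "__main__":
        print(latest_compliance_date(date(2021, 9, 1)))  # 2021-10-16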
§99.11 May an educational agency or institution charge a fee for
copies of education records?
(a) Unless the imposition of a fee effectively prevents a parent or eligible
student from exercising the right to inspect and review the student’s education records, an educational agency or institution may charge a fee for a copy
of an education record which is made for the parent or eligible student.
(b) An educational agency or institution may not charge a fee to search for
or to retrieve the education records of a student.
(Authority: 20 U.S.C. 1232g(a)(1))
§99.12 What limitations exist on the right to inspect and review
records?
(a) If the education records of a student contain information on more than
one student, the parent or eligible student may inspect and review or be
informed of only the specific information about that student.
(b) A postsecondary institution does not have to permit a student to
inspect and review education records that are:
(1) Financial records, including any information those records contain, of
his or her parents;
(2) Confidential letters and confidential statements of recommendation
placed in the education records of the student before January 1, 1975, as long as
the statements are used only for the purposes for which they were specifically
intended; and
(3) Confidential letters and confidential statements of recommendation
placed in the student’s education records after January 1, 1975, if:
(i) The student has waived his or her right to inspect and review those
letters and statements; and
(ii) Those letters and statements are related to the student’s:
(A) Admission to an educational institution;
(B) Application for employment; or
(C) Receipt of an honor or honorary recognition.
(c)(1) A waiver under paragraph (b)(3)(i) of this section is valid only if:
(i) The educational agency or institution does not require the waiver as a
condition for admission to or receipt of a service or benefit from the agency or
institution; and
(ii) The waiver is made in writing and signed by the student, regardless
of age.
(2) If a student has waived his or her rights under paragraph (b)(3)(i) of
this section, the educational institution shall:
(i) Give the student, on request, the names of the individuals who provided the letters and statements of recommendation; and
(ii) Use the letters and statements of recommendation only for the purpose for which they were intended.
(3)(i) A waiver under paragraph (b)(3)(i) of this section may be revoked
with respect to any actions occurring after the revocation.
(ii) A revocation under paragraph (c)(3)(i) of this section must be in writing.
(Authority: 20 U.S.C. 1232g(a)(1) (A), (B), (C), and (D))
[53 FR 11943, Apr. 11, 1988, as amended at 61 FR 59296, Nov. 21, 1996]
Subpart C—What Are the Procedures for Amending Education
Records?
§99.20 How can a parent or eligible student request amendment of the
student’s education records?
(a) If a parent or eligible student believes the education records relating to
the student contain information that is inaccurate, misleading, or in violation
of the student’s rights of privacy, he or she may ask the educational agency or
institution to amend the record.
(b) The educational agency or institution shall decide whether to amend
the record as requested within a reasonable time after the agency or institution receives the request.
(c) If the educational agency or institution decides not to amend the
record as requested, it shall inform the parent or eligible student of its decision and of his or her right to a hearing under §99.21.
(Authority: 20 U.S.C. 1232g(a)(2))
[53 FR 11943, Apr. 11, 1988; 53 FR 19368, May 27, 1988, as amended at 61 FR
59296, Nov. 21, 1996]
§99.21 Under what conditions does a parent or eligible student have
the right to a hearing?
(a) An educational agency or institution shall give a parent or eligible
student, on request, an opportunity for a hearing to challenge the content of the
student’s education records on the grounds that the information contained in
the education records is inaccurate, misleading, or in violation of the privacy
rights of the student.
(b)(1) If, as a result of the hearing, the educational agency or institution
decides that the information is inaccurate, misleading, or otherwise in violation of the privacy rights of the student, it shall:
(i) Amend the record accordingly; and
(ii) Inform the parent or eligible student of the amendment in writing.
(2) If, as a result of the hearing, the educational agency or institution
decides that the information in the education record is not inaccurate, misleading, or otherwise in violation of the privacy rights of the student, it shall
inform the parent or eligible student of the right to place a statement in the
record commenting on the contested information in the record or stating why
he or she disagrees with the decision of the agency or institution, or both.
(c) If an educational agency or institution places a statement in the education records of a student under paragraph (b)(2) of this section, the agency or
institution shall:
(1) Maintain the statement with the contested part of the record for as
long as the record is maintained; and
(2) Disclose the statement whenever it discloses the portion of the record
to which the statement relates.
(Authority: 20 U.S.C. 1232g(a)(2))
[53 FR 11943, Apr. 11, 1988, as amended at 61 FR 59296, Nov. 21, 1996]
§99.22 What minimum requirements exist for the conduct of a
hearing?
The hearing required by §99.21 must meet, at a minimum, the following
requirements:
(a) The educational agency or institution shall hold the hearing within a
reasonable time after it has received the request for the hearing from the parent or eligible student.
(b) The educational agency or institution shall give the parent or eligible
student notice of the date, time, and place, reasonably in advance of the
hearing.
(c) The hearing may be conducted by any individual, including an official of the educational agency or institution, who does not have a direct interest in the outcome of the hearing.
(d) The educational agency or institution shall give the parent or eligible
student a full and fair opportunity to present evidence relevant to the issues
raised under §99.21. The parent or eligible student may, at their own expense,
be assisted or represented by one or more individuals of his or her own choice,
including an attorney.
(e) The educational agency or institution shall make its decision in writing
within a reasonable period of time after the hearing.
(f) The decision must be based solely on the evidence presented at the
hearing, and must include a summary of the evidence and the reasons for the
decision.
(Authority: 20 U.S.C. 1232g(a)(2))
Subpart D—May an Educational Agency or Institution
Disclose Personally Identifiable Information From
Education Records?
§99.30 Under what conditions is prior consent required to disclose
information?
(a) The parent or eligible student shall provide a signed and dated written
consent before an educational agency or institution discloses personally identifiable information from the student’s education records, except as provided
in §99.31.
(b) The written consent must:
(1) Specify the records that may be disclosed;
(2) State the purpose of the disclosure; and
(3) Identify the party or class of parties to whom the disclosure may
be made.
(c) When a disclosure is made under paragraph (a) of this section:
(1) If a parent or eligible student so requests, the educational agency or
institution shall provide him or her with a copy of the records disclosed; and
(2) If the parent of a student who is not an eligible student so requests, the
agency or institution shall provide the student with a copy of the records
disclosed.
(d) “Signed and dated written consent” under this part may include a
record and signature in electronic form that—
(1) Identifies and authenticates a particular person as the source of the
electronic consent; and
(2) Indicates such person’s approval of the information contained in the
electronic consent.
(Authority: 20 U.S.C. 1232g (b)(1) and (b)(2)(A))
[53 FR 11943, Apr. 11, 1988, as amended at 58 FR 3189, Jan. 7, 1993; 69 FR 21671,
Apr. 21, 2004]
§99.31 Under what conditions is prior consent not required to disclose
information?
(a) An educational agency or institution may disclose personally identifiable information from an education record of a student without the consent
required by §99.30 if the disclosure meets one or more of the following
conditions:
(1)(i)(A) The disclosure is to other school officials, including teachers,
within the agency or institution whom the agency or institution has determined to have legitimate educational interests.
(B) A contractor, consultant, volunteer, or other party to whom an
agency or institution has outsourced institutional services or functions may
be considered a school official under this paragraph provided that the outside party—
(1) Performs an institutional service or function for which the agency or institution would otherwise use employees;
(2) Is under the direct control of the agency or institution with respect to
the use and maintenance of education records; and
(3) Is subject to the requirements of §99.33(a) governing the use and
redisclosure of personally identifiable information from education records.
(ii) An educational agency or institution must use reasonable methods to
ensure that school officials obtain access to only those education records in
which they have legitimate educational interests. An educational agency or
institution that does not use physical or technological access controls must
ensure that its administrative policy for controlling access to education records
is effective and that it remains in compliance with the legitimate educational
interest requirement in paragraph (a)(1)(i)(A) of this section.
(2) The disclosure is, subject to the requirements of §99.34, to officials of
another school, school system, or institution of postsecondary education
where the student seeks or intends to enroll, or where the student is already
enrolled so long as the disclosure is for purposes related to the student’s
enrollment or transfer.
Note: Section 4155(b) of the No Child Left Behind Act of 2001, 20 U.S.C. 7165(b),
requires each State to assure the Secretary of Education that it has a procedure in place
to facilitate the transfer of disciplinary records with respect to a suspension or expulsion of a student by a local educational agency to any private or public elementary or
secondary school in which the student is subsequently enrolled or seeks, intends, or is
instructed to enroll.
(3) The disclosure is, subject to the requirements of §99.35, to authorized
representatives of—
(i) The Comptroller General of the United States;
(ii) The Attorney General of the United States;
(iii) The Secretary; or
(iv) State and local educational authorities.
(4)(i) The disclosure is in connection with financial aid for which the
student has applied or which the student has received, if the information is
necessary for such purposes as to:
(A) Determine eligibility for the aid;
(B) Determine the amount of the aid;
(C) Determine the conditions for the aid; or
(D) Enforce the terms and conditions of the aid.
(ii) As used in paragraph (a)(4)(i) of this section, financial aid means a
payment of funds provided to an individual (or a payment in kind of tangible
or intangible property to the individual) that is conditioned on the individual’s attendance at an educational agency or institution.
(Authority: 20 U.S.C. 1232g(b)(1)(D))
(5)(i) The disclosure is to State and local officials or authorities to whom
this information is specifically—
(A) Allowed to be reported or disclosed pursuant to State statute adopted
before November 19, 1974, if the allowed reporting or disclosure concerns the
juvenile justice system and the system’s ability to effectively serve the student
whose records are released; or
(B) Allowed to be reported or disclosed pursuant to State statute adopted
after November 19, 1974, subject to the requirements of §99.38.
(ii) Paragraph (a)(5)(i) of this section does not prevent a State from further
limiting the number or type of State or local officials to whom disclosures may
be made under that paragraph.
(6)(i) The disclosure is to organizations conducting studies for, or on
behalf of, educational agencies or institutions to:
(A) Develop, validate, or administer predictive tests;
(B) Administer student aid programs; or
(C) Improve instruction.
(ii) Nothing in the Act or this part prevents a State or local educational
authority or agency headed by an official listed in paragraph (a)(3) of this
section from entering into agreements with organizations conducting studies
under paragraph (a)(6)(i) of this section and redisclosing personally identifiable information from education records on behalf of educational agencies
and institutions that disclosed the information to the State or local educational authority or agency headed by an official listed in paragraph (a)(3) of
this section in accordance with the requirements of §99.33(b).
(iii) An educational agency or institution may disclose personally identifiable information under paragraph (a)(6)(i) of this section, and a State or local
educational authority or agency headed by an official listed in paragraph (a)(3)
of this section may redisclose personally identifiable information under paragraph (a)(6)(i) and (a)(6)(ii) of this section, only if—
(A) The study is conducted in a manner that does not permit personal
identification of parents and students by individuals other than representatives of the organization that have legitimate interests in the information;
(B) The information is destroyed when no longer needed for the purposes
for which the study was conducted; and
(C) The educational agency or institution or the State or local educational
authority or agency headed by an official listed in paragraph (a)(3) of this
section enters into a written agreement with the organization that—
(1) Specifies the purpose, scope, and duration of the study or studies and
the information to be disclosed;
(2) Requires the organization to use personally identifiable information
from education records only to meet the purpose or purposes of the study as
stated in the written agreement;
(3) Requires the organization to conduct the study in a manner that does
not permit personal identification of parents and students, as defined in this
part, by anyone other than representatives of the organization with legitimate
interests; and
(4) Requires the organization to destroy all personally identifiable information when the information is no longer needed for the purposes for which
the study was conducted and specifies the time period in which the information must be destroyed.
(iv) An educational agency or institution or State or local educational
authority or Federal agency headed by an official listed in paragraph (a)(3) of
this section is not required to initiate a study or agree with or endorse the
conclusions or results of the study.
(v) For the purposes of paragraph (a)(6) of this section, the term organization includes, but is not limited to, Federal, State, and local agencies, and
independent organizations.
(7) The disclosure is to accrediting organizations to carry out their
accrediting functions.
(8) The disclosure is to parents, as defined in §99.3, of a dependent student, as
defined in section 152 of the Internal Revenue Code of 1986.
(9)(i) The disclosure is to comply with a judicial order or lawfully issued
subpoena.
(ii) The educational agency or institution may disclose information under
paragraph (a)(9)(i) of this section only if the agency or institution makes a
reasonable effort to notify the parent or eligible student of the order or subpoena in advance of compliance, so that the parent or eligible student may
seek protective action, unless the disclosure is in compliance with—
(A) A Federal grand jury subpoena and the court has ordered that the
existence or the contents of the subpoena or the information furnished in
response to the subpoena not be disclosed;
(B) Any other subpoena issued for a law enforcement purpose and the
court or other issuing agency has ordered that the existence or the contents
of the subpoena or the information furnished in response to the subpoena
not be disclosed; or
(C) An ex parte court order obtained by the United States Attorney General
(or designee not lower than an Assistant Attorney General) concerning investigations or prosecutions of an offense listed in 18 U.S.C. 2332b(g)(5)(B) or an
act of domestic or international terrorism as defined in 18 U.S.C. 2331.
(iii)(A) If an educational agency or institution initiates legal action against a
parent or student, the educational agency or institution may disclose to the
court, without a court order or subpoena, the education records of the student
that are relevant for the educational agency or institution to proceed with the
legal action as plaintiff.
(B) If a parent or eligible student initiates legal action against an educational
agency or institution, the educational agency or institution may disclose to the
court, without a court order or subpoena, the student’s education records that
are relevant for the educational agency or institution to defend itself.
(10) The disclosure is in connection with a health or safety emergency,
under the conditions described in §99.36.
(11) The disclosure is information the educational agency or institution
has designated as “directory information,” under the conditions described
in §99.37.
(12) The disclosure is to the parent of a student who is not an eligible
student or to the student.
(13) The disclosure, subject to the requirements in §99.39, is to a victim of an
alleged perpetrator of a crime of violence or a non-forcible sex offense. The
disclosure may only include the final results of the disciplinary proceeding
conducted by the institution of postsecondary education with respect to that
alleged crime or offense. The institution may disclose the final results of the
disciplinary proceeding, regardless of whether the institution concluded a
violation was committed.
(14)(i) The disclosure, subject to the requirements in §99.39, is in connection with a disciplinary proceeding at an institution of postsecondary education. The institution must not disclose the final results of the disciplinary
proceeding unless it determines that—
(A) The student is an alleged perpetrator of a crime of violence or
non-forcible sex offense; and
(B) With respect to the allegation made against him or her, the student
has committed a violation of the institution’s rules or policies.
(ii) The institution may not disclose the name of any other student,
including a victim or witness, without the prior written consent of the other
student.
(iii) This section applies only to disciplinary proceedings in which the final
results were reached on or after October 7, 1998.
(15)(i) The disclosure is to a parent of a student at an institution of postsecondary education regarding the student’s violation of any Federal, State,
or local law, or of any rule or policy of the institution, governing the use or
possession of alcohol or a controlled substance if—
(A) The institution determines that the student has committed a disciplinary violation with respect to that use or possession; and
(B) The student is under the age of 21 at the time of the disclosure to the
parent.
(ii) Paragraph (a)(15) of this section does not supersede any provision of
State law that prohibits an institution of postsecondary education from disclosing information.
(16) The disclosure concerns sex offenders and other individuals required
to register under section 170101 of the Violent Crime Control and Law
Enforcement Act of 1994, 42 U.S.C. 14071, and the information was provided
to the educational agency or institution under 42 U.S.C. 14071 and applicable
Federal guidelines.
(b)(1) De-identified records and information. An educational agency or institution, or a party that has received education records or information from education records under this part, may release the records or information without
the consent required by §99.30 after the removal of all personally identifiable
information provided that the educational agency or institution or other
party has made a reasonable determination that a student’s identity is not
personally identifiable, whether through single or multiple releases, and taking into account other reasonably available information.
(2) An educational agency or institution, or a party that has received
education records or information from education records under this part,
may release de-identified student level data from education records for the
purpose of education research by attaching a code to each record that may
allow the recipient to match information received from the same source,
provided that—
(i) An educational agency or institution or other party that releases
de-identified data under paragraph (b)(2) of this section does not disclose any
information about how it generates and assigns a record code, or that would
allow a recipient to identify a student based on a record code;
(ii) The record code is used for no purpose other than identifying a
de-identified record for purposes of education research and cannot be used to
ascertain personally identifiable information about a student; and
(iii) The record code is not based on a student’s social security number or
other personal information.
(c) An educational agency or institution must use reasonable methods to
identify and authenticate the identity of parents, students, school officials,
and any other parties to whom the agency or institution discloses personally
identifiable information from education records.
(d) Paragraphs (a) and (b) of this section do not require an educational
agency or institution or any other party to disclose education records or information from education records to any party except for parties under paragraph (a)(12) of this section.
(Authority: 20 U.S.C. 1232g(a)(5)(A), (b), (h), (i), and (j)).
[53 FR 11943, Apr. 11, 1988; 53 FR 19368, May 27, 1988, as amended at 58 FR 3189,
Jan. 7, 1993; 61 FR 59296, Nov. 21, 1996; 65 FR 41853, July 6, 2000; 73 FR 74852, Dec.
9, 2008; 74 FR 401, Jan. 6, 2009; 76 FR 75641, Dec. 2, 2011]
§99.32 What recordkeeping requirements exist concerning requests
and disclosures?
(a)(1) An educational agency or institution must maintain a record of each
request for access to and each disclosure of personally identifiable information from the education records of each student, as well as the names of State
and local educational authorities and Federal officials and agencies listed in
§99.31(a)(3) that may make further disclosures of personally identifiable
information from the student’s education records without consent under
§99.33(b).
(2) The agency or institution shall maintain the record with the education
records of the student as long as the records are maintained.
(3) For each request or disclosure the record must include:
(i) The parties who have requested or received personally identifiable information from the education records; and
(ii) The legitimate interests the parties had in requesting or obtaining the
information.
(4) An educational agency or institution must obtain a copy of the record
of further disclosures maintained under paragraph (b)(2) of this section and
make it available in response to a parent’s or eligible student’s request to
review the record required under paragraph (a)(1) of this section.
(5) An educational agency or institution must record the following information when it discloses personally identifiable information from education
records under the health or safety emergency exception in §99.31(a)(10)
and §99.36:
(i) The articulable and significant threat to the health or safety of a
student or other individuals that formed the basis for the disclosure; and
(ii) The parties to whom the agency or institution disclosed the information.
(b)(1) Except as provided in paragraph (b)(2) of this section, if an educational agency or institution discloses personally identifiable information
from education records with the understanding authorized under §99.33(b),
the record of the disclosure required under this section must include:
(i) The names of the additional parties to which the receiving party may disclose the information on behalf of the educational agency or institution; and
(ii) The legitimate interests under §99.31 which each of the additional
parties has in requesting or obtaining the information.
(2)(i) A State or local educational authority or Federal official or agency
listed in §99.31(a)(3) that makes further disclosures of information from
education records under §99.33(b) must record the names of the additional
parties to which it discloses information on behalf of an educational agency
or institution and their legitimate interests in the information under §99.31
if the information was received from:
(A) An educational agency or institution that has not recorded the further
disclosures under paragraph (b)(1) of this section; or
(B) Another State or local educational authority or Federal official or
agency listed in §99.31(a)(3).
(ii) A State or local educational authority or Federal official or agency that
records further disclosures of information under paragraph (b)(2)(i) of this
section may maintain the record by the student’s class, school, district, or other
appropriate grouping rather than by the name of the student.
(iii) Upon request of an educational agency or institution, a State or local
educational authority or Federal official or agency listed in §99.31(a)(3) that
maintains a record of further disclosures under paragraph (b)(2)(i) of this
section must provide a copy of the record of further disclosures to the
educational agency or institution within a reasonable period of time not to
exceed 30 days.
(c) The following parties may inspect the record relating to each student:
(1) The parent or eligible student.
(2) The school official or his or her assistants who are responsible for the
custody of the records.
(3) Those parties authorized in §99.31(a) (1) and (3) for the purposes of
auditing the recordkeeping procedures of the educational agency or institution.
(d) Paragraph (a) of this section does not apply if the request was from, or
the disclosure was to:
(1) The parent or eligible student;
(2) A school official under §99.31(a)(1);
(3) A party with written consent from the parent or eligible student;
(4) A party seeking directory information; or
(5) A party seeking or receiving records in accordance with §99.31(a)
(9)(ii)(A) through (C).
(Approved by the Office of Management and Budget under control number 1880-0508)
(Authority: 20 U.S.C. 1232g(b)(1) and (b)(4)(A))
[53 FR 11943, Apr. 11, 1988, as amended at 61 FR 59297, Nov. 21, 1996; 73 FR
74853, Dec. 9, 2008]
§99.33 What limitations apply to the redisclosure of information?
(a)(1) An educational agency or institution may disclose personally identifiable information from an education record only on the condition that the
party to whom the information is disclosed will not disclose the information to any other party without the prior consent of the parent or eligible
student.
(2) The officers, employees, and agents of a party that receives information under paragraph (a)(1) of this section may use the information, but only for
the purposes for which the disclosure was made.
(b)(1) Paragraph (a) of this section does not prevent an educational
agency or institution from disclosing personally identifiable information
with the understanding that the party receiving the information may make
further disclosures of the information on behalf of the educational agency or
institution if—
(i) The disclosures meet the requirements of §99.31; and
(ii)(A) The educational agency or institution has complied with the requirements of §99.32(b); or
(B) A State or local educational authority or Federal official or agency
listed in §99.31(a)(3) has complied with the requirements of §99.32(b)(2).
(2) A party that receives a court order or lawfully issued subpoena and rediscloses personally identifiable information from education records on behalf of
an educational agency or institution in response to that order or subpoena under
§99.31(a)(9) must provide the notification required under §99.31(a)(9)(ii).
(c) Paragraph (a) of this section does not apply to disclosures under
§§99.31(a)(8), (9), (11), (12), (14), (15), and (16), and to information
that postsecondary institutions are required to disclose under the Jeanne
Clery Disclosure of Campus Security Policy and Campus Crime Statistics Act,
20 U.S.C. 1092(f) (Clery Act), to the accuser and accused regarding the outcome of any campus disciplinary proceeding brought alleging a sexual offense.
(d) An educational agency or institution must inform a party to whom
disclosure is made of the requirements of paragraph (a) of this section except
for disclosures made under §§99.31(a)(8), (9), (11), (12), (14), (15), and (16),
and to information that postsecondary institutions are required to disclose
under the Clery Act to the accuser and accused regarding the outcome of any
campus disciplinary proceeding brought alleging a sexual offense.
(Authority: 20 U.S.C. 1232g(b)(4)(B))
[53 FR 11943, Apr. 11, 1988, as amended at 61 FR 59297, Nov. 21, 1996; 65 FR 41853,
July 6, 2000; 73 FR 74853, Dec. 9, 2008; 76 FR 75642, Dec. 2, 2011]
§99.34 What conditions apply to disclosure of information to other
educational agencies or institutions?
(a) An educational agency or institution that discloses an education record
under §99.31(a)(2) shall:
(1) Make a reasonable attempt to notify the parent or eligible student at
the last known address of the parent or eligible student, unless:
(i) The disclosure is initiated by the parent or eligible student; or
(ii) The annual notification of the agency or institution under §99.7 includes
a notice that the agency or institution forwards education records to other
agencies or institutions that have requested the records and in which the
student seeks or intends to enroll or is already enrolled so long as the disclosure is for purposes related to the student’s enrollment or transfer;
(2) Give the parent or eligible student, upon request, a copy of the record that
was disclosed; and
(3) Give the parent or eligible student, upon request, an opportunity for a
hearing under subpart C.
(b) An educational agency or institution may disclose an education record of
a student in attendance to another educational agency or institution if:
(1) The student is enrolled in or receives services from the other agency or
institution; and
(2) The disclosure meets the requirements of paragraph (a) of this section.
(Authority: 20 U.S.C. 1232g(b)(1)(B))
[53 FR 11943, Apr. 11, 1988, as amended at 61 FR 59297, Nov. 21, 1996; 73 FR 74854,
Dec. 9, 2008]
§99.35 What conditions apply to disclosure of information for Federal
or State program purposes?
(a)(1) Authorized representatives of the officials or agencies headed by
officials listed in §99.31(a)(3) may have access to education records in connection with an audit or evaluation of Federal or State supported education
programs, or for the enforcement of or compliance with Federal legal requirements that relate to those programs.
(2) The State or local educational authority or agency headed by an official
listed in §99.31(a)(3) is responsible for using reasonable methods to ensure to
the greatest extent practicable that any entity or individual designated as its
authorized representative—
(i) Uses personally identifiable information only to carry out an audit or
evaluation of Federal- or State-supported education programs, or for the
enforcement of or compliance with Federal legal requirements related to
these programs;
(ii) Protects the personally identifiable information from further disclosures or other uses, except as authorized in paragraph (b)(1) of this section; and
(iii) Destroys the personally identifiable information in accordance with
the requirements of paragraphs (b) and (c) of this section.
(3) The State or local educational authority or agency headed by an official
listed in §99.31(a)(3) must use a written agreement to designate any authorized
representative, other than an employee. The written agreement must—
(i) Designate the individual or entity as an authorized representative;
(ii) Specify—
(A) The personally identifiable information from education records to be
disclosed;
(B) That the purpose for which the personally identifiable information
from education records is disclosed to the authorized representative is to carry
out an audit or evaluation of Federal- or State-supported education programs,
or to enforce or to comply with Federal legal requirements that relate to those
programs; and
(C) A description of the activity with sufficient specificity to make clear
that the work falls within the exception of §99.31(a)(3), including a description of how the personally identifiable information from education records will
be used;
(iii) Require the authorized representative to destroy personally identifiable information from education records when the information is no longer
needed for the purpose specified;
(iv) Specify the time period in which the information must be destroyed;
and
(v) Establish policies and procedures, consistent with the Act and other
Federal and State confidentiality and privacy provisions, to protect personally
identifiable information from education records from further disclosure
(except back to the disclosing entity) and unauthorized use, including limiting use of personally identifiable information from education records to only
authorized representatives with legitimate interests in the audit or evaluation
of a Federal- or State-supported education program or for compliance or
enforcement of Federal legal requirements related to these programs.
(b) Information that is collected under paragraph (a) of this section must—
(1) Be protected in a manner that does not permit personal identification
of individuals by anyone other than the State or local educational authority
or agency headed by an official listed in §99.31(a)(3) and their authorized
representatives, except that the State or local educational authority or agency
headed by an official listed in §99.31(a)(3) may make further disclosures of
personally identifiable information from education records on behalf of the
educational agency or institution in accordance with the requirements of
§99.33(b); and
(2) Be destroyed when no longer needed for the purposes listed in paragraph (a) of this section.
(c) Paragraph (b) of this section does not apply if:
(1) The parent or eligible student has given written consent for the disclosure under §99.30; or
(2) The collection of personally identifiable information is specifically
authorized by Federal law.
(Authority: 20 U.S.C. 1232g(b)(1)(C), (b)(3), and (b)(5))
[53 FR 11943, Apr. 11, 1988, as amended at 73 FR 74854, Dec. 9, 2008; 76 FR 75642,
Dec. 2, 2011]
§99.36 What conditions apply to disclosure of information in health
and safety emergencies?
(a) An educational agency or institution may disclose personally identifiable information from an education record to appropriate parties, including
parents of an eligible student, in connection with an emergency if knowledge
of the information is necessary to protect the health or safety of the student
or other individuals.
(b) Nothing in the Act or this part shall prevent an educational agency or
institution from—
(1) Including in the education records of a student appropriate information concerning disciplinary action taken against the student for conduct
that posed a significant risk to the safety or well-being of that student,
other students, or other members of the school community;
(2) Disclosing appropriate information maintained under paragraph (b)(1) of
this section to teachers and school officials within the agency or institution
who the agency or institution has determined have legitimate educational
interests in the behavior of the student; or
(3) Disclosing appropriate information maintained under paragraph (b)(1) of
this section to teachers and school officials in other schools who have been determined to have legitimate educational interests in the behavior of the student.
(c) In making a determination under paragraph (a) of this section, an educational agency or institution may take into account the totality of the circumstances pertaining to a threat to the health or safety of a student or
other individuals. If the educational agency or institution determines that
there is an articulable and significant threat to the health or safety of a student
or other individuals, it may disclose information from education records to
any person whose knowledge of the information is necessary to protect the
health or safety of the student or other individuals. If, based on the information available at the time of the determination, there is a rational basis
for the determination, the Department will not substitute its judgment for
that of the educational agency or institution in evaluating the circumstances
and making its determination.
(Authority: 20 U.S.C. 1232g (b)(1)(I) and (h))
[53 FR 11943, Apr. 11, 1988; 53 FR 19368, May 27, 1988, as amended at 61 FR
59297, Nov. 21, 1996; 73 FR 74854, Dec. 9, 2008]
§99.37 What conditions apply to disclosing directory information?
(a) An educational agency or institution may disclose directory information if it has given public notice to parents of students in attendance and
eligible students in attendance at the agency or institution of:
(1) The types of personally identifiable information that the agency or
institution has designated as directory information;
(2) A parent’s or eligible student’s right to refuse to let the agency or institution designate any or all of those types of information about the student as
directory information; and
(3) The period of time within which a parent or eligible student has to
notify the agency or institution in writing that he or she does not want any or
all of those types of information about the student designated as directory
information.
(b) An educational agency or institution may disclose directory information about former students without complying with the notice and opt out
conditions in paragraph (a) of this section. However, the agency or institution
must continue to honor any valid request to opt out of the disclosure of directory information made while a student was in attendance unless the student
rescinds the opt out request.
(c) A parent or eligible student may not use the right under paragraph (a)(2)
of this section to opt out of directory information disclosures to—
(1) Prevent an educational agency or institution from disclosing or
requiring a student to disclose the student’s name, identifier, or institutional
email address in a class in which the student is enrolled; or
(2) Prevent an educational agency or institution from requiring a student to wear, to display publicly, or to disclose a student ID card or badge
that exhibits information that may be designated as directory information
under §99.3 and that has been properly designated by the educational
agency or institution as directory information in the public notice provided
under paragraph (a)(1) of this section.
(d) In its public notice to parents and eligible students in attendance at the
agency or institution that is described in paragraph (a) of this section, an educational agency or institution may specify that disclosure of directory information
will be limited to specific parties, for specific purposes, or both. When an educational agency or institution specifies that disclosure of directory information will
be limited to specific parties, for specific purposes, or both, the educational
agency or institution must limit its directory information disclosures to those
specified in its public notice that is described in paragraph (a) of this section.
(e) An educational agency or institution may not disclose or confirm
directory information without meeting the written consent requirements in
§99.30 if a student’s social security number or other non-directory information is used alone or combined with other data elements to identify or help
identify the student or the student’s records.
(Authority: 20 U.S.C. 1232g(a)(5) (A) and (B))
[53 FR 11943, Apr. 11, 1988, as amended at 73 FR 74854, Dec. 9, 2008; 76 FR 75642,
Dec. 2, 2011]
§99.38 What conditions apply to disclosure of information as
permitted by State statute adopted after November 19, 1974,
concerning the juvenile justice system?
(a) If reporting or disclosure allowed by State statute concerns the
juvenile justice system and the system’s ability to effectively serve, prior to
adjudication, the student whose records are released, an educational agency
or institution may disclose education records under §99.31(a)(5)(i)(B).
(b) The officials and authorities to whom the records are disclosed shall
certify in writing to the educational agency or institution that the information will not be disclosed to any other party, except as provided under
State law, without the prior written consent of the parent of the student.
(Authority: 20 U.S.C. 1232g(b)(1)(J))
[61 FR 59297, Nov. 21, 1996]
§99.39 What definitions apply to the nonconsensual disclosure of
records by postsecondary educational institutions in connection with
disciplinary proceedings concerning crimes of violence or non-forcible
sex offenses?
As used in this part:
Alleged perpetrator of a crime of violence is a student who is alleged to have
committed acts that would, if proven, constitute any of the following offenses
or attempts to commit the following offenses that are defined in appendix A
to this part:
Arson
Assault offenses
Burglary
Criminal homicide—manslaughter by negligence
Criminal homicide—murder and nonnegligent manslaughter
Destruction/damage/vandalism of property
Kidnapping/abduction
Robbery
Forcible sex offenses.
Alleged perpetrator of a nonforcible sex offense means a student who is alleged
to have committed acts that, if proven, would constitute statutory rape or
incest. These offenses are defined in appendix A to this part.
Final results means a decision or determination, made by an honor court
or council, committee, commission, or other entity authorized to resolve disciplinary matters within the institution. The disclosure of final results must
include only the name of the student, the violation committed, and any sanction imposed by the institution against the student.
Sanction imposed means a description of the disciplinary action taken by the
institution, the date of its imposition, and its duration.
Violation committed means the institutional rules or code sections that were
violated and any essential findings supporting the institution’s conclusion
that the violation was committed.
(Authority: 20 U.S.C. 1232g(b)(6))
[65 FR 41853, July 6, 2000]
Subpart E—What Are the Enforcement Procedures?
§99.60 What functions has the Secretary delegated to the Office and
to the Office of Administrative Law Judges?
(a) For the purposes of this subpart, Office means the Office of the Chief
Privacy Officer, U.S. Department of Education.
(b) The Secretary designates the Office to:
(1) Investigate, process, and review complaints and violations under the
Act and this part; and
(2) Provide technical assistance to ensure compliance with the Act and
this part.
(c) The Secretary designates the Office of Administrative Law Judges to
act as the Review Board required under the Act to enforce the Act with
respect to all applicable programs. The term applicable program is defined in
section 400 of the General Education Provisions Act.
(Authority: 20 U.S.C. 1232g (f) and (g), 1234)
[53 FR 11943, Apr. 11, 1988, as amended at 58 FR 3189, Jan. 7, 1993; 82 FR 6253,
Jan. 19, 2017]
§99.61 What responsibility does an educational agency or institution,
a recipient of Department funds, or a third party outside of an
educational agency or institution have concerning conflict with State
or local laws?
If an educational agency or institution determines that it cannot comply
with the Act or this part due to a conflict with State or local law, it must notify
the Office within 45 days, giving the text and citation of the conflicting law. If
another recipient of Department funds under any program administered by
the Secretary or a third party to which personally identifiable information
from education records has been non-consensually disclosed determines that
it cannot comply with the Act or this part due to a conflict with State or local
law, it also must notify the Office within 45 days, giving the text and citation
of the conflicting law.
(Authority: 20 U.S.C. 1232g(f))
[76 FR 75642, Dec. 2, 2011]
§99.62 What information must an educational agency or institution
or other recipient of Department funds submit to the Office?
The Office may require an educational agency or institution, other recipient of Department funds under any program administered by the Secretary to
which personally identifiable information from education records is nonconsensually disclosed, or any third party outside of an educational agency
or institution to which personally identifiable information from education
records is non-consensually disclosed to submit reports, information on
policies and procedures, annual notifications, training materials, or other
information necessary to carry out the Office’s enforcement responsibilities
under the Act or this part.
(Authority: 20 U.S.C. 1232g(b)(4)(B), (f), and (g))
[76 FR 75643, Dec. 2, 2011]
§99.63 Where are complaints filed?
A parent or eligible student may file a written complaint with the Office
regarding an alleged violation under the Act and this part. The Office’s
address is: Family Policy Compliance Office, U.S. Department of Education,
400 Maryland Avenue, SW., Washington, DC 20202.
(Authority: 20 U.S.C. 1232g(g))
[65 FR 41854, July 6, 2000, as amended at 73 FR 74854, Dec. 9, 2008]
§99.64 What is the investigation procedure?
(a) A complaint must contain specific allegations of fact giving reasonable cause to believe that a violation of the Act or this part has occurred.
A complaint does not have to allege that a violation is based on a policy or
practice of the educational agency or institution, other recipient of Department funds under any program administered by the Secretary, or any
third party outside of an educational agency or institution.
(b) The Office investigates a timely complaint filed by a parent or eligible
student, or conducts its own investigation when no complaint has been filed
or a complaint has been withdrawn, to determine whether an educational
agency or institution or other recipient of Department funds under any
program administered by the Secretary has failed to comply with a provision
of the Act or this part. If the Office determines that an educational agency or
institution or other recipient of Department funds under any program administered by the Secretary has failed to comply with a provision of the Act or this
part, it may also determine whether the failure to comply is based on a policy
or practice of the agency or institution or other recipient. The Office also
investigates a timely complaint filed by a parent or eligible student, or conducts its own investigation when no complaint has been filed or a complaint
has been withdrawn, to determine whether a third party outside of the educational agency or institution has failed to comply with the provisions of
§99.31(a)(6)(iii)(B) or has improperly redisclosed personally identifiable
information from education records in violation of §99.33.
(Authority: 20 U.S.C. 1232g(b)(4)(B), (f) and (g))
(c) A timely complaint is defined as an allegation of a violation of the Act
that is submitted to the Office within 180 days of the date of the alleged violation or of the date that the complainant knew or reasonably should have
known of the alleged violation.
(d) The Office may extend the time limit in this section for good cause
shown.
(Authority: 20 U.S.C. 1232g(b)(4)(B), (f) and (g))
[53 FR 11943, Apr. 11, 1988, as amended at 58 FR 3189, Jan. 7, 1993; 65 FR 41854,
July 6, 2000; 73 FR 74854, Dec. 9, 2008; 76 FR 75643, Dec. 2, 2011]
§99.65 What is the content of the notice of investigation issued by
the Office?
(a) The Office notifies in writing the complainant, if any, and the educational agency or institution, the recipient of Department funds under any
program administered by the Secretary, or the third party outside of an educational agency or institution if it initiates an investigation under §99.64(b).
The written notice—
(1) Includes the substance of the allegations against the educational agency
or institution, other recipient, or third party; and
(2) Directs the agency or institution, other recipient, or third party to submit a written response and other relevant information, as set forth in §99.62,
within a specified period of time, including information about its policies and
practices regarding education records.
(b) The Office notifies the complainant if it does not initiate an investigation because the complaint fails to meet the requirements of §99.64.
(Authority: 20 U.S.C. 1232g(g))
[73 FR 74855, Dec. 9, 2008, as amended at 76 FR 75643, Dec. 2, 2011]
§99.66 What are the responsibilities of the Office in the enforcement
process?
(a) The Office reviews a complaint, if any, information submitted by the
educational agency or institution, other recipient of Department funds under
any program administered by the Secretary, or third party outside of an
educational agency or institution, and any other relevant information. The
Office may permit the parties to submit further written or oral arguments
or information.
(b) Following its investigation, the Office provides to the complainant, if
any, and the educational agency or institution, other recipient, or third party
a written notice of its findings and the basis for its findings.
(c) If the Office finds that an educational agency or institution or other
recipient has not complied with a provision of the Act or this part, it may also
find that the failure to comply was based on a policy or practice of the agency
or institution or other recipient. A notice of findings issued under paragraph (b)
of this section to an educational agency or institution, or other recipient that
has not complied with a provision of the Act or this part—
(1) Includes a statement of the specific steps that the agency or institution or
other recipient must take to comply; and
(2) Provides a reasonable period of time, given all of the circumstances of
the case, during which the educational agency or institution or other recipient may comply voluntarily.
(d) If the Office finds that a third party outside of an educational agency
or institution has not complied with the provisions of §99.31(a)(6)(iii)(B) or
has improperly redisclosed personally identifiable information from education records in violation of §99.33, the Office’s notice of findings issued under
paragraph (b) of this section—
(1) Includes a statement of the specific steps that the third party outside of
the educational agency or institution must take to comply; and
(2) Provides a reasonable period of time, given all of the circumstances of
the case, during which the third party may comply voluntarily.
(Authority: 20 U.S.C. 1232g(b)(4)(B), (f), and (g))
[76 FR 75643, Dec. 2, 2011]
§99.67 How does the Secretary enforce decisions?
(a) If an educational agency or institution or other recipient of Department funds under any program administered by the Secretary does not
comply during the period of time set under §99.66(c), the Secretary may
take any legally available enforcement action in accordance with the Act,
including, but not limited to, the following enforcement actions available in
accordance with part D of the General Education Provisions Act—
(1) Withhold further payments under any applicable program;
(2) Issue a complaint to compel compliance through a cease and desist
order; or
(3) Terminate eligibility to receive funding under any applicable program.
(b) If, after an investigation under §99.66, the Secretary finds that an
educational agency or institution, other recipient, or third party has complied voluntarily with the Act or this part, the Secretary provides the complainant and the agency or institution, other recipient, or third party with
written notice of the decision and the basis for the decision.
(c) If the Office finds that a third party, outside the educational agency or
institution, violates §99.31(a)(6)(iii)(B), then the educational agency or
institution from which the personally identifiable information originated
may not allow the third party found to be responsible for the violation of
§99.31(a)(6)(iii)(B) access to personally identifiable information from education records for at least five years.
(d) If the Office finds that a State or local educational authority, a Federal
agency headed by an official listed in §99.31(a)(3), or an authorized representative of a State or local educational authority or a Federal agency headed by an
official listed in §99.31(a)(3), improperly rediscloses personally identifiable
information from education records, then the educational agency or institution
from which the personally identifiable information originated may not allow the
third party found to be responsible for the improper redisclosure access to personally identifiable information from education records for at least five years.
(e) If the Office finds that a third party, outside the educational agency or
institution, improperly rediscloses personally identifiable information from
education records in violation of §99.33 or fails to provide the notification
required under §99.33(b)(2), then the educational agency or institution from
which the personally identifiable information originated may not allow the
third party found to be responsible for the violation access to personally identifiable information from education records for at least five years.
(Authority: 20 U.S.C. 1232g(b)(4)(B) and (f); 20 U.S.C. 1234c)
[76 FR 75643, Dec. 2, 2011]
Appendix A to Part 99—Crimes of Violence Definitions
Arson
Any willful or malicious burning or attempt to burn, with or without intent
to defraud, a dwelling house, public building, motor vehicle or aircraft, personal property of another, etc.
Assault Offenses
An unlawful attack by one person upon another.
Note: By definition there can be no “attempted” assaults, only “completed”
assaults.
(a) Aggravated Assault. An unlawful attack by one person upon another for
the purpose of inflicting severe or aggravated bodily injury. This type of assault
usually is accompanied by the use of a weapon or by means likely to produce
death or great bodily harm. (It is not necessary that injury result from an aggravated assault when a gun, knife, or other weapon is used which could and probably would result in serious injury if the crime were successfully completed.)
(b) Simple Assault. An unlawful physical attack by one person upon another
where neither the offender displays a weapon, nor the victim suffers obvious
severe or aggravated bodily injury involving apparent broken bones, loss of
teeth, possible internal injury, severe laceration, or loss of consciousness.
(c) Intimidation. To unlawfully place another person in reasonable fear of
bodily harm through the use of threatening words or other conduct, or both, but
without displaying a weapon or subjecting the victim to actual physical attack.
Note: This offense includes stalking.
Burglary
The unlawful entry into a building or other structure with the intent to
commit a felony or a theft.
Criminal Homicide—Manslaughter by Negligence
The killing of another person through gross negligence.
Criminal Homicide—Murder and Nonnegligent Manslaughter
The willful (nonnegligent) killing of one human being by another.
Destruction/Damage/Vandalism of Property
To willfully or maliciously destroy, damage, deface, or otherwise injure real
or personal property without the consent of the owner or the person having
custody or control of it.
Kidnapping/Abduction
The unlawful seizure, transportation, or detention of a person, or any
combination of these actions, against his or her will, or of a minor without the
consent of his or her custodial parent(s) or legal guardian.
Note: Kidnapping/Abduction includes hostage taking.
Robbery
The taking of, or attempting to take, anything of value under confrontational circumstances from the control, custody, or care of a person or persons
by force or threat of force or violence or by putting the victim in fear.
Note: Carjackings are robbery offenses where a motor vehicle is taken through
force or threat of force.
Sex Offenses, Forcible
Any sexual act directed against another person, forcibly or against that
person’s will, or both; or not forcibly or against the person’s will where the
victim is incapable of giving consent.
(a) Forcible Rape (Except “Statutory Rape”). The carnal knowledge of a
person, forcibly or against that person’s will, or both; or not forcibly or against
the person’s will where the victim is incapable of giving consent because of
his or her temporary or permanent mental or physical incapacity (or because
of his or her youth).
(b) Forcible Sodomy. Oral or anal sexual intercourse with another person, forcibly or against that person’s will, or both; or not forcibly or against the person’s
will where the victim is incapable of giving consent because of his or her youth
or because of his or her temporary or permanent mental or physical incapacity.
(c) Sexual Assault With An Object. To use an object or instrument to unlawfully penetrate, however slightly, the genital or anal opening of the body of
another person, forcibly or against that person’s will, or both; or not forcibly
or against the person’s will where the victim is incapable of giving consent
because of his or her youth or because of his or her temporary or permanent
mental or physical incapacity.
Note: An “object” or “instrument” is anything used by the offender other than
the offender’s genitalia. Examples are a finger, bottle, handgun, stick, etc.
(d) Forcible Fondling. The touching of the private body parts of another
person for the purpose of sexual gratification, forcibly or against that person’s
will, or both; or not forcibly or against the person’s will where the victim is
incapable of giving consent because of his or her youth or because of his or
her temporary or permanent mental or physical incapacity.
Note: Forcible Fondling includes “Indecent Liberties” and “Child Molesting.”
Nonforcible Sex Offenses (Except “Prostitution Offenses”)
Unlawful, nonforcible sexual intercourse.
(a) Incest. Nonforcible sexual intercourse between persons who are related
to each other within the degrees wherein marriage is prohibited by law.
(b) Statutory Rape. Nonforcible sexual intercourse with a person who is
under the statutory age of consent.
(Authority: 20 U.S.C. 1232g(b)(6) and 18 U.S.C. 16)
[65 FR 41854, July 6, 2000]
APPENDIX E
Additional Resources
CERTIFICATES OF CONFIDENTIALITY
• For information on the scope and applicability of the National Institutes of
Health (NIH) Certificate of Confidentiality policy and the process for
obtaining one: https://grants.nih.gov/policy/humansubjects/coc.htm
• For information on human research participant protections, including the
process for obtaining a privacy certificate for research funded by the
National Institute of Justice, which is not a signatory to the Common Rule:
https://nij.ojp.gov/funding/human-subjects-and-privacy-protection
CLINICAL TRIALS
• For information on NIH clinical trial requirements, including the NIH
definition of “clinical trial” and a decision tool to determine whether a
proposed study meets the definition; information on registration and
reporting; details about good clinical practice (GCP) training; information
on posting of consent forms; and more: https://grants.nih.gov/policy/clinical-trials.htm
• For information on clinical trial requirements for posting informed consent
forms under the revised Common Rule: https://www.hhs.gov/ohrp/regulations-and-policy/informed-consent-posting/index.html
DATA MANAGEMENT PLANS
• For a comprehensive list of data sharing policies, data management plan
requirements, and templates for both federally and privately funded
research: https://dmptool.org/public_templates
DATA AND SAFETY MONITORING
The sites listed below provide comprehensive information on NIH policy for
data and safety monitoring, including decision tools to determine whether
a study requires a data and safety monitoring plan (DSMP) or a data and
safety monitoring board (DSMB); guidance on developing DSMPs; roles and
responsibilities of, and the process for convening, a DSMB; and reporting
requirements:
• https://www.nimh.nih.gov/funding/clinical-research/nimh-policy-governing-the-monitoring-of-clinical-trials.shtml
• https://www.drugabuse.gov/research/clinical-research/guidelines-developing-data-safety-monitoring
REPORTING REQUIREMENTS
• For guidance on adverse events that are considered unanticipated and the
timeline for reporting such incidents to the IRB, the Office for Human
Research Protections, and the funding agency: https://www.hhs.gov/ohrp/regulations-and-policy/guidance/reviewing-unanticipated-problems/index.html
HIPAA PRIVACY RULE
• For an overview of how research involving protected health information is
regulated by the Health Insurance Portability and Accountability Act of
1996 (HIPAA) Privacy Rule and additional resources on the implementation of the Privacy Rule: https://www.hhs.gov/hipaa/for-professionals/special-topics/research/index.html
SINGLE IRB POLICY FOR MULTISITE STUDIES
• For background information and additional resources on the NIH Single
IRB (sIRB) Policy for multisite studies in the United States: https://grants.nih.gov/policy/humansubjects/single-irb-policy-multi-site-research.htm
• For information on the sIRB requirement under the revised Common Rule,
including exceptions to the requirement: https://www.hhs.gov/ohrp/regulations-and-policy/single-irb-requirement/index.html
INDEX
A
AA (ambulatory assessment) methodology,
178. See also Internet and mobile
technologies in research
Access and usability (in DHC-R), 39
in case evaluating commercial
product and vendor supporting
research, 40
in case using Fitbit-supported intervention,
44–45
Accessibility, in data sharing, 88–91, 93
Acquired immunodeficiency syndrome
(AIDS) research, 14–15, 130
Administrative considerations, for
intervention studies, 142
Adolescents
with depression. See Treatment for
Adolescents With Depression Study
[TADS]
research with. See Minors as research
subjects
Adverse events, 30
Afkinich, J. L., 211–212
AIDS research, 14–15, 130
Ambulatory assessment (AA) methodology,
178. See also Internet and mobile
technologies in research
Amending education records, procedures
for, 298–300
American Academy of Pediatrics, 211
American Bar Association Commission
on Law and Aging, 67
American Psychiatric Association, 76–77
American Psychological Association (APA)
on conflicts of commitments, 74
on conflicts of interests, 74–76
consent guidance from, 67
data sharing recommended by
journals of, 84
Ethics Code of. See Ethical Principles
of Psychologists and Code of Conduct
multicultural guidelines of, 214
race and ethnicity guidelines of, 214
Anonymization of data, 16–17, 136
APA. See American Psychological
Association
APA Ethics Code. See Ethical Principles
of Psychologists and Code of Conduct
Appelbaum, P. S., 58, 63
Appreciation of information’s significance, 61
Aristotle, 9
Armstrong, D., 80
Assent, by minors in research, 207–210
Assessment of decision-making capacity,
62–64, 66–67
Association of Internet Researchers, 213
Autonomy, 9. See also Informed consent;
Voluntariness
Belmont Report on, 6
in decision making, 115
and emerging technologies, 60
with internet and mobile technologies,
179–180
in neurobiological research, 165, 167
in research in and with communities, 130
B
“Bad apples,” 103–104
“Bad barrels,” 103–105
Bagot, K. S., 213
“Basic Experimental Studies involving
Humans (BESH) Required,” 31
BD (bipolar disorder) case study, 145–148
Beauchamp, T., 64
Behavioral research
Belmont Report principles for, 5.
See also Belmont Report [National
Commission]
involving prisoners as research subjects,
269–272
professional licenses of researchers in,
75
protection of human participants in, 24
revised Common Rule for, 14
risk to participants in, 36
technologies in, 37
voluntariness in, 58
Belmont Report (National Commission),
3, 5–10
applied to intervention studies, 142
beneficence principle in, 7
on comprehension of information, 59
and internet and mobile technologies
in research, 179–181
justice principle in, 7–8
as key ethics document, 56, 57
origins of principles in, 8–10
and research in and with communities,
196
respect for persons principle in, 6
on risks and benefits, 64
tensions between principles of, 13
text of, 221–233
universality of principles in, 126
Beneficence, 5
Belmont Report on, 7, 226–227
federal regulations regarding, 11–12
historical origin of, 9
in international and global research, 126
in internet and mobile technologies in
research, 180, 184–185
and privacy/data confidentiality, 35–37.
See also Risk–benefit assessment
and right to privacy, 16
and risk–benefit assessment, 35–37
tensions between other principles and, 13
Benefits of research. See Risk–benefit
assessment
“Benign behavioral interventions,” 32
Bentham, Jeremy, 9
Berkman, B. E., 213
“BESH (Basic Experimental Studies
involving Humans) Required,” 31
Biederman, Joseph, 78–79
Biomedical research. See also
Neurobiological research
Belmont Report principles for, 5.
See also Belmont Report [National
Commission]
conflicts of interests in, 74
involving prisoners as research subjects,
269–272
protection of human participants in, 24
risk to participants in, 35–36
Bipolar disorder (BD) case study, 145–148
Blachman-Demner, D. R., 211–212
Blanket consent, 61
Boynton, M. H., 60
Brain imaging, incidental findings from,
165–168
British Psychological Society, 178
Broad consent, 61, 212
Brody, J. L., 214
Burdens of research, 7–9, 65. See also Justice
Bystander effects, 78
Bystander rights, 38
C
Care, ethics of, 17
Casey, B. J., 208
CBPR (community-based participatory
research), 191, 198–201
Center for American Indian Health, 201
Certificates of Confidentiality (CoCs)
requirements for, 28–29
in research with internet and mobile
technologies, 180
resources on, 323
Cheating, in conducting research, 99.
See also Research misconduct
Children. See also Minors as research
subjects
federal regulations on research with,
272–276
in international and global research, 130
legally authorized representatives for, 65
neurobiological research with, 164–165
Childress, J., 64
Choice, expression of, 62
Citation practices, 101
Clark, D. B., 210
Clinical practice, clinical research vs., 141
Clinical trials
definitions of, 30
requirements for, 30–32
resources on, 323
revised Common Rule on, 32
ClinicalTrials.gov, 142
CoCs. See Certificates of Confidentiality
Coercive institutional research
environments, 113–123
benefits and risks of participation in,
114–115
case study of, 121–122
conflict among stakeholders in, 118
data ownership in, 117–118
inclusiveness in, 118–119
informed consent in, 115–117
reporting and publication in, 119–120
sustainability of findings/products in,
120–121
CoIs. See Conflicts of interests
Common Rule, 24. See also Title 45 Code
of Federal Regulations Part 46
agency signatories to, 24n1
on broad consent, 61
on clinical trials, 32
and digital health research, 38, 39
establishment of, 13
and group harm, 15
on multisite studies, 32–33
personal information governed by, 36
requirements specified by, 24
revision of, 13–14
on technological research environments,
182
on vulnerable populations, 181
Communities, research in and with.
See Research in and with communities
Community-based participatory research
(CBPR), 191, 198–201
Compensation for participants
in international and global research,
131
in research with minors, 211–212
Comprehension. See Decision-making
capacity
Confidentiality. See also Certificates
of Confidentiality (CoCs)
of data, 35–37
in research with internet and mobile
technologies, 180, 181, 183–185
in research with minors, 210
Confidentiality practices, defined, 36
Conflict of commitments, 74
Conflicts of interests (CoIs), 73–80
actual or apparent, addressing, 73–74
ethical guidelines and other standards
on, 74–77
in Harvard mood disorders study case,
78–79
preventive steps concerning, 77–78
reasons for not reporting, 78
with research in institutional settings,
118–120
types of, 74
in University of Pennsylvania genetic
trials case, 79
“Conflicts of Interests and Commitments”
(APA), 74
Connected and Open Research Ethics
Project, 213–214
Consent. See also Informed consent
blanket, 61
broad, 61, 212
dynamic, 60
in international and global research,
130
Consent forms, 59, 65–66
Cosgrove, L., 76–77
Crimes of violence, definitions of,
319–321
Crowdsourcing
in digital health research, 49–50
in research with internet and mobile
technologies, 181–182
Cultural expertise
in international and global research,
132–133
in research in and with communities,
193–194
Cultural humility
defined, 129
in international or global research, 126,
128–130
Cultural influences on research
misconduct, 105–106, 108
Curation of data, 88
D
Data
accessibility of, 88–89
anonymization of, 16–17, 136
in community-based participatory
research, 201
confidentiality of, 35–37
curation of, 88
in data repositories, 61, 135
deidentified, 88, 118
hacking of, 136
ownership of, 108
in research in and with communities,
194–195
reuse of, 85–86, 88, 89, 94. See also
Data sharing
validity of, with online studies, 181–182,
186
Data and safety monitoring
with Databrary, 91–92
for mental health intervention research,
143, 147, 150–151
regulation of, 29
in research with internet and mobile
technologies, 183–185
resources on, 324
Data and safety monitoring boards
(DSMBs), 29, 143
Data and safety monitoring plans (DSMPs),
29, 143
Databrary case study, 89–94
Data collection
ethical issues in, 87–88
expansion of electronic methods for, 181
in research with internet and mobile
technologies, 184, 187
Data errors
discovered during reuse, 94
research misconduct vs., 100
Data management
in case evaluating commercial product
and vendor supporting research, 42
in case using Fitbit-supported
intervention, 46
confidentiality in, 36–37
defined, 36
in DHC-R, 39
with online mental health interventions,
151–152
of personal health information, 38
Data management plans, 30
for international and global research,
135
resources on, 323
Data ownership
in international and global research, 135
with research in institutional settings,
117–118
Data Preservation Alliance for the Social
Sciences, 91–92
Data sets
curation of, 88
limited, 28
Data sharing, 83–95
Databrary case study, 89–94
ethical considerations in, 86–89
ethical imperative for, 83–86
with mental health intervention
research, 142
Deception
in conducting research, 99
in disclosure of information, 60
Decisionally-impaired individuals, 144
Decision-making capacity (comprehension)
assessment and determination of, 62–64,
66–67
impaired, research with persons with,
64–65, 144
for informed consent, 6, 58, 61–66
for mental health intervention research,
144
optimizing, 65–66
principles relevant to, 57
in research with minors, 208
subconstructs of, 61–62
Declaration of Helsinki, 126, 154
Deidentified data, 88, 118
Deontological tradition, 8–9
Depression in adolescents. See Treatment
for Adolescents With Depression
Study (TADS)
Detrimental research practices, 100
Digital Health Checklist for Researchers
(DHC-R), 39
Digital Health Framework, 37, 40, 47
Digital health research, 37–50
case evaluating commercial product and
vendor supporting research, 40–43
case using Fitbit-supported intervention,
43–47
discussion, 47–50
studies, platforms, and tools in, 37–39
“Digital technologies,” 37. See also Internet
and mobile technologies in research
Directory information, 27
Disadvantaged populations
Belmont Report on, 7–8
and burdens of research, 8, 9
Disclosure of information
about brain imaging incidental findings,
165–167
Belmont Report on, 6
incomplete, 6
and informed consent, 6, 57–61
personally identifiable, from education
records, 300–315
relevant to decision-making capacity,
65–66
that might indicate potential conflicts
of interest, 73–74, 78
Dishonest behavior
in conducting research. See also Research
misconduct
in disclosure of information, 60
willingness to engage in, 77–78
Dissemination of findings
from clinical trials funded by NIH, 142
from community-based participatory
research, 200–201
from institutional research, 120
from research in and with communities,
195
Distributive justice, 9
Donovan, Sharon, 80
DSMBs (data and safety monitoring
boards), 29, 143
DSMPs (data and safety monitoring plans),
29, 143
Dynamic consent, 60
E
Education
for conducting digital health research,
48–49
to prevent research misconduct, 108
Education records
access to, 25–27
disclosure of personally identifiable
information from, 300–315
procedures for amending, 298–300
in research with minors, 210
rights of inspection and review of,
296–298
Emancipated minors, 209
Enforcement procedures, in Title 34
Code of Federal Regulations Part 99,
315–319
Enhancement research, 170, 171
Environmental factors, in research
misconduct, 104–105, 108
Ethical challenges. See also Preventing
ethical conundrums
in data sharing, 83
in international and global research,
126–136
in internet and mobile technologies
in research, 177, 183–186
in neurobiological research, 163–164
in research in and with communities,
192–195
in research with minors, 211–215
Ethical conduct of research, 3–18
and academic developments in ethics
and morality, 17–18
Belmont Report on, 5–10. See also
Belmont Report
emerging issues in, 14–16
federal regulations on, 10–14
technological change and privacy
concerns for, 16–17
Ethical considerations or issues
in data sharing, 86–89
emerging issues, 14–16
with internet and mobile technologies
in research, 177–178
in mental health intervention research,
140–141
in neurobiological research, 163
with research in institutional settings, 114
in research with minors, 206–210
Ethical guidelines
for conflicts of interests, 74–77
for international and global research, 126
for internet-mediated research, 178
for research in and with communities,
196–197
for research with minors, 206
for social environment sampling,
182–183
Ethical Guidelines for Internet-Mediated
Research (British Psychological
Society), 178
Ethical imperative, for data sharing, 83–86
Ethical Principles of Psychologists and Code of
Conduct (APA Ethics Code), 55
applied to international research, 128–129
on conflicts of interests, 75
on disclosure of information, 58–60
on incentives for research participation,
211–212
on informed consent, 115
and internet-mediated research, 178
on respect for people’s rights and dignity,
114, 119
Ethical violations
in internet and mobile technologies
in research, 186–187
in research in and with communities,
195–196
Ethics
academic developments in, 17–18
virtue, 17
Ethics review boards, 24, 193. See also
Institutional review boards (IRBs)
European Union General Data Protection
Regulation, 93
Exempt research
revised Common Rule for, 13
in Title 45 Code of Federal Regulations
Part 46, 245–248
Expertise of researcher
in institutional research, 121
in international and global research,
126, 128–133
Expression of choice, 62
F
Fabrication, 100
defined, 99
prevalence of, 101
in research in institutional settings, 119
Facebook, 88
False-positive incidental findings, 166
Falsification, 100
defined, 99
prevalence of, 101
in research in institutional settings, 119
Family Educational Rights and Privacy Act
of 1974 (FERPA), 25–27, 210
Federal Policy for the Protection of Human
Subjects, 24. See also Common Rule
Federal regulations, 10–14. See also
Regulatory compliance; individual
regulations
application of, 23
on beneficence, 11–12
on broad consent, 212
Common Rule in, 13, 24
on disclosure of information, 58–59
ethical principles embedded in, 3.
See also Ethical conduct of research
evolution of, 13–14
on justice, 12–13
for minimal risk, 206–207
personal information governed by, 36
on respect for persons, 11
Title 34 Code of Federal Regulations
Part 98, 279–284
Title 34 Code of Federal Regulations
Part 99, 285–321
Title 45 Code of Federal Regulations
Part 46 Subparts A-E, 235–278
FERPA (Family Educational Rights and
Privacy Act of 1974), 25–27, 210
Feudtner, C., 215
Fisher, C. B., 206, 208–210, 213, 214
Funding Opportunity Announcements
(FOAs), 31
Funding sources
and conflicts of interest, 74–75
for digital health research, 37
for institutional research, 119, 120
National Institutes of Health, 126, 142
regulation of, 10–11
requirements of, 23, 29
and research misconduct, 102
“10/90 gap” with, 125
G
Gelsinger, Jesse, 79
Gender norms, in international and
global research, 134–135
Gilligan, C., 17
Global Forum for Health Research, 125
Global research. See International and
global research
Goldenberg, A. J., 213
Grady, C., 154, 214–215
Grisso, T., 63
Group harm, 15
Guttmann, K., 214
H
Harms in research, 7
assessing risks of. See Risk–benefit
assessment
in and with communities, 195, 197
group harm, 15
in institutional settings, 114
with internet and mobile technologies,
37
with mental health interventions,
150–151, 155–157
“minimal risk of,” 42, 206–207
minimizing (non-maleficence), 126, 180
reporting risks of, 29–30
from research misconduct, 101
Hartshorne, H., 77
Harvard mood disorders study case, 78–79
Havasupai tribe, research on, 15
Health care systems
integration of research studies into, 16
interventions implemented in, 140
protected health information, 27–28
Health Care Systems Research
Collaboratory, 148
Health improvement research, 170
Health information
personal, 38, 193
protected, 27–28, 210
Health Insurance Portability and
Accountability Act of 1996 (HIPAA),
27, 36, 180, 210, 324
Health research. See Biomedical research;
Digital health research
Heidorn, P. B., 86
Heller, Jean, 4
HEW. See U.S. Department of Health,
Education and Welfare
HHS. See U.S. Department of Health
and Human Services
HIPAA. See Health Insurance Portability
and Accountability Act of 1996
HIPAA Privacy Rule, 27, 28, 324
Historical events, and research in
and with communities, 192
Hokke, S., 213
Human fetuses as research subjects,
federal regulation of, 264–269
I
IFs (incidental findings), in neurobiological
research, 165–168
Illegal populations, international and
global research on, 136
Incentives
in international and global research, 131
in research with minors, 211–212
Incidental findings (IFs), in neurobiological
research, 165–168
Inclusiveness, with research in institutional
settings, 118–119
Incomplete disclosure of information, 6
Indigenous communities, research in
and with. See Research in and with
communities
Informed consent, 55–67
Belmont Report on, 228–231
decision-making capacity
(comprehension) in, 6, 58, 61–66
defining, 57–66
in digital research, 38
disclosure of information for, 6, 57–61
federal regulations regarding, 11
historical considerations with, 56–57
with internet and mobile technologies,
179
for mental health intervention research,
149–150
in neurobiological research, 164, 167
Privacy Rule vs., 28
in research in and with communities, 195
in research in institutional settings,
115–117
in research with internet and mobile
technologies, 182–184
and research with minors, 207–209,
212–213
as respect for persons, 6. See also Respect
for persons
revised Common Rule for, 13, 14
Title 45 Code of Federal Regulations
Part 46 on, 255–262
voluntariness of, 6, 57, 58
Inspection of education records, rights of,
296–298
Institute of Medicine, 59, 74
Institutional policies, 23
Institutional review boards (IRBs), 24–25
data and safety monitoring criterion
for approval by, 29
and digital health research, 39
disapproval of studies by, 12
for multisite studies, 32, 33
National Commission recommendations
for, 10
personal information governed by, 36, 37
registration of, 276–278
for research in and with communities,
193, 196, 197
for research using internet and mobile
technologies, 187
for research with minors, 206–208,
212–213
Title 45 Code of Federal Regulations
Part 46 on, 248–255
and waivers of parental permission, 26
Institutions
key characteristics of, 113
preventing research misconduct in, 108
as settings for research, 113. See also
Coercive institutional research
environments
International and global research, 125–137
cultural expertise appropriate to setting
in, 132–133
enhancing ethical conduct of, 126–136
ethical guidelines and other standards
for, 126
legally authorized representatives in, 65
on oppressed, stigmatized, and illegal
populations, 136
oversight of, 148
perceptions of plagiarism in, 105–106
positionalities and power relations in,
130, 133–136
recognizing one’s own limitations,
perspective, and expertise in, 126,
128–132
International BRAIN Initiative, 170–171
International Military Tribunal, 55
Internet and mobile technologies in
research, 177–189. See also Digital
health research
beneficence in, 180, 184–185
case example of, 188–189
justice in, 180–181, 185
online bipolar disorder intervention
case study, 145–148
and possible harms to participants, 37
preventing/limiting ethical conundrums
in, 183–186
primary ethical considerations with,
177–178
respect for persons in, 179–180, 183–184
responses to ethical violations in,
186–187
scientific integrity in, 181–183, 186
Intervention research
enhancement research, 170, 171
ethics of, 140–141
health improvement research, 170
medical, with minors, 214–215
in mental health. See Mental health
intervention research
therapeutic misconception in, 168
IRBs. See Institutional review boards
J
James, William, 113
Jensen, A. R., 12
Jeste, D. V., 64
Joffe, S., 149
Journal of Visualized Experiments, 85
Justice, 5
Belmont Report on, 7–8, 10, 227–228,
232–233
and consent capacity, 65
distributive, 9
federal regulations regarding, 12–13
historical origin of, 9
in international and global research, 126
in internet and mobile technologies in
research, 180–181, 185
moral psychology on, 17
and norm of reciprocity, 16
Rawls’s theory of, 17
shift in applications of, 14–16
social, 15–17
tensions between other principles and, 13
K
Kandiyoti, D., 134
Kant, Immanuel, 8
Katz, Jay, 57
Keith-Spiegel, P., 109–110
Kennedy, Edward, 4
Kohlberg, L., 17, 18
L
Labs, preventing research misconduct in,
107–108
Language, in international and global
research, 131–132
Legally authorized representatives (LARs),
65, 144
Limitations of researcher, in international
and global research, 126, 128–132
Limited data sets, 28
Limiting ethical conundrums, in internet
and mobile technologies in research,
183–186
Local laws and regulations, 23, 65. See also
Regulatory compliance
Luchetti, M., 125
M
MacArthur Competence Assessment Tool
for Clinical Research (MacCAT-CR),
63–64
Macaulay, A. C., 197
Mattes, Richard, 80
Mature minors, 209–210
May, M. A., 77
Mental health intervention research,
139–158
adolescents with depression case study,
141, 144, 146, 152–157
case studies of, 145, 146
defined, 140
ethical issues in, 140–141
MoodSwings 2.0 case study, 145–148
regulatory and administrative
considerations in, 142–145
Suicide Prevention Outreach Trial
case study, 146, 148–152
Milgram, S., 4, 140
Mill, John Stuart, 9
Miller, V. A., 215
Millum, J., 154
Minimal risk of harm, 42, 206–207
Minors as research subjects, 205–215.
See also Adolescents; Children
and data sharing on Databrary, 93
ethical challenges in, 211–215
unique ethical issues in, 206–210
Misconduct. See Research misconduct
Mobile technologies. See Internet and
mobile technologies in research
MoodSwings 2.0 case study, 145–148
Moral foundations theory, 18
Morality, academic developments in, 17–18
Moral psychology, 17
Multimedia aids, for informed consent, 66
Multisite studies
oversight of, 148
Single IRB Policy for, 32–33, 324
N
National Advisory Council on Drug Abuse,
214
National Commission for the Protection
of Human Subjects of Biomedical
and Behavioral Research (National
Commission), 4–5, 8
on beneficence, 12
ethical principles report of. See Belmont
Report
on institutional review boards, 10
key mandate for, 57
members of, 222
psychosurgery report of, 13
National Institute of Justice, 29
National Institutes of Health (NIH), 10
Certificates of Confidentiality granted by,
29
on clinical trials, 30–32
on financial conflicts of interest
disclosures, 74–75
and mental health intervention research,
142–144
multisite studies requirements of, 32
neurobiological research funded by, 163
research funding from, 126
National Research Act of 1974, 4–5, 10,
56–57
National Science and Technology Council, 13
Native American communities. See
Research in and with communities
Neonates as research subjects, federal
regulation of, 264–269
Neurobiological research, 163–171
incidental findings in, 165–168
informed consent in, 164, 167
noninvasive brain stimulation in,
168–171
with vulnerable populations, 164–165
New York University (NYU), 90, 91
NIH. See National Institutes of Health
Nishimura, A., 65–66
Noninvasive brain stimulation, 168–171
Non-maleficence (minimizing harm),
126, 180
Nosek, B. A., 182
Nuremberg Code, 55
NYU (New York University), 90, 91
O
Objectivity, 118
Office of Human Research Protections
(OHRP), 206, 208, 209
Online bipolar disorder intervention,
145–148
Oppressed populations, international
and global research on, 136
Oversight bodies, 23. See also Regulatory
compliance
P
Parental permissions
for research with minors, 207–210
for students, 25–27
Payment to subjects
in international and global research,
131
in research with minors, 211–212
Permission to share data, 92–93
Personal factors, in research misconduct,
103–104
Personal health information, 38, 193
Personally identifiable information
Certificates of Confidentiality for, 28
and data sharing, 87–88
defined, 26
from education records, disclosure of,
300–315
with global data, 136
in research with internet and mobile
technologies, 180, 182–183, 185
in research with minors, 210
Perspective of researcher, in international
and global research, 126, 128–132
PHI (protected health information), 27–28,
210
PHS. See U.S. Public Health Service
Plagiarism, 100–101
culturally differing perceptions of,
105–106
defined, 99
preventing, 107
in research in institutional settings, 119
Planning research. See Regulatory
compliance
Positionality(-ies)
defined, 133
in international and global research,
133–136
Power relations, in international and global
research, 130, 133–136
PPRA (Protection of Pupil Rights
Amendment), 25–26
Practice Guideline for the Treatment of Patients
With Major Depressive Disorder
(American Psychiatric Association),
76–77
Practice of therapy, Belmont Report on, 5
Pregnant women as research subjects,
federal regulation of, 264–269
Preregistrations, 84–85
Pressures associated with research,
104–105, 107
Preventing ethical conundrums
conflicts of interest, 77–78
in internet and mobile technologies
in research, 183–186
in research in and with communities,
197–201
Preventing research misconduct, 106–108
Prisoners as research subjects. See also
Coercive institutional research
environments
for mental health interventions,
144–145
Title 45 Code of Federal Regulations
Part 46 on, 269–272
Privacy
in case evaluating commercial product
and vendor supporting research,
40–41
in case using Fitbit-supported
intervention, 45–46
Certificates of Confidentiality, 28–29
defined, 36
and deidentified data, 88
in DHC-R, 39
with online mental health interventions,
147, 151–152
in research in and with communities,
195
in research with minors, 210
in risk–benefit assessment, 35–37
and technological change, 16–17
Privacy boards, 27–28
Privacy Certificates, 29
Privacy Rule, 27, 28, 324
Professionalism and Integrity in Research
Program (Washington University
in St. Louis), 102
Professional practice guidelines, on conflicts
of interest, 76
Protected health information (PHI), 27–28,
210
Protection of Pupil Rights Amendment
(PPRA), 25–26
Psychological Science, 84
Psychosurgery, 13
Publication, of research in institutional
settings, 119–120
Public health challenge, 125
Punishments, with research in institutional
settings, 114
Q
Questionable research practices, 100, 101
R
Rawls, J., 17
Reasoning, 61–62
Reciprocity, norm of, 16
Recognition, tied to research, 73
Registration
of clinical trials, 142
of institutional review boards, 276–278
of NIH-supported studies, 31
Regulatory compliance, 23–33. See also
Federal regulations
and Certificates of Confidentiality, 28–29
for clinical trials, 30–32
for data and safety monitoring, 29
for data management plan, 30
for digital health research, 39
for institutional review boards, 24–25
for mental health intervention research,
142–145
for multisite studies review, 32–33
for privacy boards, 27–28
for reporting requirements, 29–30
for research involving students and
access to education records, 25–27
with Title 45 Code of Federal Regulations
Part 46, 243–244
Repetitive TMS (rTMS), 169
Reporting requirements, 29–30
for mental health intervention research,
150–151
for research in institutional settings,
119–120
for research misconduct, 109
for research with internet and mobile
technologies, 185
resources on, 324
Reproducibility
data sharing for, 84–85
in mental health intervention research,
142
Research in and with communities, 15–16,
191–202
ethical challenges in, 192–195
ethical guidelines and other standards
for, 196–197
ethical violations in, 195–196
international or global, 129–130
preventive steps for research in,
197–201
Research misconduct, 99–110
approaching cases of, 108–110
consequences of, 102–103
defined, 99–101
harms of, 101
in institutional settings, 119–120
investigation of, 101–102
prevalence of, 101
prevention of, 106–108
reasons for engaging in, 103–106
Resnik, D. B., 36
Respect for persons, 5. See also Informed
consent
Belmont Report on, 6, 225–226
federal regulations regarding, 11
historical origin of, 8–9
in international and global research, 126
in internet and mobile technologies in
research, 179–180, 183–184
in research in institutional settings,
114, 119
and right to privacy, 16
tensions between other principles and, 13
Review. See also Institutional review boards
(IRBs)
of consent form, 59
of education records, rights of, 296–298
of research done in institutional settings,
119
revised Common Rule for, 13
Rewards
with research in institutional settings, 114
tied to research, 73
Right(s)
bystander, 38
culturally differing perspectives on,
129–130
of inspection and review of education
records, 296–298
to privacy, 16–17
“right to be forgotten,” 93
of students, 25, 279–284
Risk–benefit assessment, 7, 35–50
Belmont Report on, 231–232
beneficence principle in, 35–37.
See also Beneficence
case analyses of, 40–47
in case evaluating commercial product
and vendor supporting research,
42–43
in case using Fitbit-supported
intervention, 46–48
in community-based participatory
research, 200
in DHC-R, 39
and digital health research sector, 37–39
of disclosing brain imaging incidental
findings, 167–168
discussion of, 47–50
federal regulations regarding, 11–12
for research in and with communities,
192, 194
for research in institutional settings,
114–115
Risk management
in data sharing, 83
and length of consent forms, 59
Risk of harm. See Harms in research
Robbins, M. L., 183
Robert Wood Johnson Foundation, 213–214
Ross, M. W., 136
Roy, A. L., 179–180
rTMS (repetitive TMS), 169
S
Safety monitoring. See Data and safety
monitoring
Schockley, W., 12
Science, 84
Scientific integrity
conflicts of interests influencing, 73.
See also Conflicts of interests [CoIs]
in internet and mobile technologies
in research, 181–183, 186
Scientific objectivity, 118
Secretary’s Advisory Committee on
Human Research Protections, 213
Selection of human subjects, 232–233.
See also Justice
Self-plagiarism, 100–101
Sensitive information, 28
Separation, with research in institutional
settings, 118
Sexual health research, 125–126, 214.
See also International and global
research
Single IRB (sIRB) Policy for multisite
studies, 32
resources on, 324
revised Common Rule for, 13
Sisti, D. A., 149
Smartphones, 38, 184–185
SMART professional decision-making
strategies, 106–108
Smith, R., 105
Social environment sampling, 182–183
Social humility, in international
or global research, 126, 128
Social justice, 17
and group harm, 15
and selection and design of studies, 16
Socially sensitive research, 214–215
Social media platforms, in research, 38
Social science research
federal regulation of, 10–11
professional licenses of researchers in, 75
public health challenge in, 125
risk to participants in, 36
technologies in, 37
voluntariness in, 58
Society for Industrial and Organizational
Psychology, 120
Society for Research in Child Development,
206
Spencer, Thomas, 78, 79
SPOT (Suicide Prevention Outreach Trial)
case study, 146, 148–152
Stakeholder groups
in community-based participatory
research, 198–199
in institutions, 113–120
Standards
for conflicts of interests, 74–77
for international and global research,
126
for research in and with communities,
196–197
State laws and regulations, 23. See also
Regulatory compliance
for legal adulthood, 209
for legally authorized representatives,
65
Stigmatized populations, international and
global research on, 136
“Student Rights in Research, Experimental
Programs, and Testing,” 25
Students
research involving, 25–27. See also
Education records
Title 34 Code of Federal Regulations
Part 98 on rights of, 279–284
Suicide Prevention Outreach Trial (SPOT)
case study, 146, 148–152
Sustainability of findings/products
from community-based participatory
research, 201
for research in institutional settings,
120–121
T
TADS. See Treatment for Adolescents
With Depression Study
Taplin, S., 211
tDCS (transcranial direct current
stimulation), 169–171
Technological change, privacy concerns
with, 16–17
Technology use
in research, 37. See also Internet and
mobile technologies in research
in research with minors, 213–214
“10/90 gap,” 125
TES (transcranial electrical stimulation),
169
Therapeutic misconception, 167
Title 34 Code of Federal Regulations
Part 98, 279–284
Title 34 Code of Federal Regulations
Part 99, 285–321
crimes of violence definitions, 319–321
disclosure of personally identifiable
information from education records,
300–315
enforcement procedures, 315–319
general provisions, 287–295
procedures for amending education
records, 298–300
rights of inspection and review of
education records, 296–298
Title 45 Code of Federal Regulations
Part 46, 10, 24, 25. See also Common
Rule
additional protections for children
involved as subjects in research, 206,
207, 272–276
additional protections for pregnant
women, human fetuses and neonates
involved in research, 264–269
additional protections pertaining to
biomedical and behavioral research
involving prisoners as subjects,
269–272
basic HHS policy for protection of
human research subjects, 237–264
compliance with, 243–244
on deception, 60
definitions of terms, 240–243
exempt research, 245–248
registration of institutional review
boards, 276–278
Subparts A-E, 235–278
TMS (transcranial magnetic stimulation),
169
Training, for mental health intervention
researchers, 143–144
Transcranial direct current stimulation
(tDCS), 169–171
Transcranial electrical stimulation (TES), 169
Transcranial magnetic stimulation (TMS),
169
Transparency
in addressing conflicts of interest, 74
data sharing for, 84–85
in mental health intervention research,
142
in research with internet and mobile
technologies, 185, 187
Treatment for Adolescents With Depression
Study (TADS), 141, 144, 146, 152–157
Tribal communities. See Research in and
with communities
Trimble, J. E., 214
Trust, in research in and with communities,
198
Tuskegee Syphilis Study (U.S. Public Health
Service), 4, 8, 57
U
UBACC (University of California,
San Diego Brief Assessment
of Consent Capacity), 64
“Ugly American” stereotype, 128
Unanticipated problems, 30
Understanding of information, 61
University of California, San Diego, 213–214
University of California, San Diego Brief
Assessment of Consent Capacity
(UBACC), 64
University of Pennsylvania genetic
trials case, 79
U.S. BRAIN Initiative, 163
U.S. Department of Health, Education
and Welfare (HEW), 4, 10, 13
U.S. Department of Health and Human
Services (HHS), 10, 13, 144, 206
U.S. Public Health Service (PHS), 4, 8,
57, 102
U.S. Senate Committee on Labor and Public
Welfare, 4
Utilitarianism, 9
V
Video documentation, 85, 89–90, 93–94
Violence, crimes of, 319–321
Virtue ethics, 17
Vitiello, B., 155
Voluntariness
and informed consent, 6, 57, 58
in institutional studies. See Coercive
institutional research environments
with internet and mobile technologies,
179
for minors in research, 208–209
Vulnerable populations. See also Minors as
research subjects
45 C.F.R.Part 46 on, 25
Belmont report on, 7, 15
international and global research on,
136
mental health intervention research
with, 144
National Commission and research on, 5
neurobiological research with, 164–165
in research with internet and mobile
technologies, 181
W
Waldman, A., 80
Waldron, H. B., 214
Wearable technologies, 37–38
Weiner, L., 211
Wilens, Timothy, 78, 79
Williams, R. L., 192
Wilson, James, 79
Wilson, Robin, 79
Z
Zimbardo, P. G., 4, 140
ABOUT THE EDITORS
Sangeeta Panicker, PhD, is director of the Research Ethics Office in
the Science Directorate of the American Psychological Association (APA).
Dr. Panicker received her doctorate in cognitive neuroscience from The
Catholic University of America. She also has master’s degrees in psychopharmacology from the University of Cincinnati and in clinical psychology from
the University of Bombay. In her current position, she serves as a resource
for APA members, APA staff, and the public at large on ethical, legal, and
scientific issues pertaining to the conduct of behavioral and psychological
research, including research ethics and research integrity, protection of human
participants in research, humane care and treatment of nonhuman animals
in research, and various aspects of responsible conduct of research in general.
Dr. Panicker is a member of Public Responsibility in Medicine & Research
(PRIM&R) and a fellow of APA.
Barbara H. Stanley, PhD, is professor of medical psychology in the Department of Psychiatry at Columbia University Vagelos College of Physicians
and Surgeons. Dr. Stanley is also director of the Suicide Prevention–Training,
Implementation and Evaluation program for the Center for Practice Innovations and research scientist in the Molecular Imaging and Neuropathology
Division at the New York State Psychiatric Institute and the recipient of
numerous grants from the National Institute of Mental Health as well as
private foundations. She is a clinical psychologist who specializes in the
treatment of individuals with borderline personality disorder, depression,
and self-harm. She has received numerous awards, including the American
Foundation for Suicide Prevention Research Award and the Suicide Prevention
Center of New York Research Award. Dr. Stanley is the author of more than
200 articles and book chapters and the author or editor of several volumes, and
she serves as editor-in-chief of Archives of Suicide Research and on numerous
editorial boards.