Accountability in Research, 13:259–275, 2006
Copyright © Taylor & Francis Group, LLC
ISSN: 0898-9621 print / 1545-5815 online
DOI: 10.1080/08989620600848561
DECEPTION IN PSYCHOLOGY: MORAL COSTS AND
BENEFITS OF UNSOUGHT SELF-KNOWLEDGE
LISA BORTOLOTTI
Philosophy Department, University of Birmingham,
Birmingham, United Kingdom
MATTEO MAMELI
King’s College, Cambridge, United Kingdom
Is it ethically permissible to use deception in psychological experiments? We argue
that, provided some requirements are satisfied, it is possible to use deceptive methods without producing significant harm to research participants and without any
significant violation of their autonomy. We also argue that methodological deception is, at least at the moment, the only effective means by which one can acquire
morally significant information about certain behavioral tendencies. Individuals
in general, and research participants in particular, gain self-knowledge which can
help them improve their autonomous decision-making. The community gains collective self-knowledge that, once shared, can play a role in shaping education,
informing policies and in general creating a more efficient and just society.
Introduction
Researchers sometimes deceive the participants in psychological
experiments on methodological grounds. The participants may
be deceived about the purpose, design, or setting of the experiments. Is methodological deception ethically permissible? Many
authors have maintained that it isn’t and that the existing codes
of ethics of the professional associations, which allow for its use,
need to be revised (Kelman, 1967; Bok, 1999; Clarke, 1999;
Herrera, 1999; Pittinger, 2002). The two main objections against
methodological deception concern the risk of psychological
harm to research participants and the violation of their
autonomy.
Address correspondence to Dr. Lisa Bortolotti, Philosophy Department, University of
Birmingham, Birmingham, B15 2TT, UK. E-mail: l.bortolotti@bham.ac.uk
We argue that neither of these objections is convincing: It is
possible to use deceptive methods in psychological experiments
without producing significant harm and without any significant
violation of autonomy. Moreover, we argue that there are moral
reasons in favor of using methodological deception. Deceptive
methods seem to be (at least at the moment) the only means by
which individuals and society can acquire important information
about some psychological tendencies. First, such information can
be of great value to society. Knowledge about discriminatory
biases, for example, can be used to create a more efficient and
just society. Second, in so far as research participants have an
interest in exercising their capacity as autonomous decision-makers and in properly controlling their behavior, they also have an
interest in acquiring information about their unconscious psychological tendencies, even though such information may in some
circumstances be mildly distressing. Thus, the potential benefits
of the experimental results that can now be obtained only by
using methodological deception can be characterized in terms of
two types of morally significant knowledge: the individual self-knowledge acquired by research participants and the collective
self-knowledge acquired by society. We consider the variety of
interests that persons might have in participating in research.
From a perspective in which personal development is connected
to the acquisition of self-knowledge, the use of deception in
experimental psychology can also be instrumental in promoting
the exercise of autonomy.
In Section 1, we briefly present the methodological reasons
for and against the use of deception in psychological research. In
Section 2, we discuss and reject some ethical arguments against
methodological deception. In Section 3, we argue for the moral
significance of the information acquired through the use of
deception. We illustrate this point by means of an example: A
social psychology experiment where deceptive methods were
used to uncover biases in hiring practices.
Methodological Arguments
In most cases, the purpose of deceptive methods in psychology is
to make sure that the research participants are not aware of what
aspect of their psychology is being studied and in what way.
Deception regarding the main purpose of the experiment is
often used to avoid the so-called Hawthorne effect, the tendency of
research participants to behave in accordance with what they think
the experimenter’s expectations are (Gillespie, 1991). In social
psychology, though, where frequently the object of the research
is a form of undesirable behavior, the opposite effect often
occurs. For example, when research participants are made aware
that the object of the experiment is aggressive behavior, they
often refrain from engaging in aggressive behavior for the duration of the experiment. Social psychologists use deception to
avoid this kind of effect.
Psychological evidence suggests that reliable data about how
people behave in certain situations cannot be easily obtained by
asking them how they did or would behave in those situations.
People are often mistaken about their behavioral tendencies, and the ways in which they describe themselves, or revise their self-descriptions on the basis of evidence, are usually biased by their
(conscious or unconscious) desire to fit a particular profile (Nisbett
and Ross, 1980; Aronson, 1999). Suppose one is studying altruistic
behavior. What people say about what they would do to help or
benefit others can be a poor guide to what people actually do in
those situations. There are many experimental situations where,
at the moment, deception is methodologically necessary in order
to obtain reliable results.[1]
According to some researchers, the widespread use of
deception in psychological research can be methodologically self-defeating. On this view, if deception were used in most psychological
experiments, and if the potential participants in the experiments
were aware of this, any experiment would generate suspicion in the
participants. The participants would try to second-guess the experimenter and this would make the experimental results difficult to
interpret: It would be difficult to establish whether the observed
results reflect the way people normally behave or whether they
reflect the way people behave when they are trying to second-guess
an experimenter. Kelman (1967) warned against this possibility at
1
We say “at the moment” because we do not want to rule out the possibility that information about the behavioral tendencies studied by social psychologists can in the future
be obtained without the use of methodological deception. For example, neuroimaging
techniques might render deceptive methods unnecessary in certain areas of investigation.
262
L. Bortolotti and M. Mameli
a time when the use of deception was not as carefully regulated as
it is today. But, thanks in part to constraints on the use of deception
set out by the professional codes, only some of the experiments
performed by psychologists studying human behavior involve
deception. For this reason, the risk of deception becoming methodologically self-defeating is, at the moment, low.
Against Methodological Deception
The codes of ethics of the American Psychological Association
(APA) and of the British Psychological Society (BPS) allow for the
use of deceptive methods in psychological experiments, but they
also set limits to the use of such methods and require that certain
conditions be met when such methods are used. Although there are
differences between the two, both codes require that deception be
used only if there are no other effective procedures to obtain the
desired experimental results, if the results are expected to be highly
significant, and if no physical harm or severe distress is caused to
research participants. The codes also require that research participants be allowed to withdraw from the experiment at any time and
that experimenters debrief the participants as soon and as sensitively as possible after the experiment, by giving them all the relevant information about the structure, purpose, and value of the
experiment (cf. the APA Ethical Principles of Psychologists and
Code of Conduct, 2002, article 8.07, and the BPS Ethical Principles
for Conducting Research with Human Participants, 1992).
Despite the many constraints placed by these professional codes
on the use of deception, some commentators have suggested that
research participants are not sufficiently protected from the harms
of deception and that the codes should be revised accordingly. For
instance, Ortmann and Hertwig (1997) argue for an absolute ban on
the use of deception in psychological experiments (cf. also Pittinger,
2002). In this section, we assess some of the most influential ethical
arguments against the use of deception in psychology.
Harm
The debate about whether the use of methodological deception in
psychology is morally permissible often revolves around its potentially harmful effects. The most cited example is the experiment
conducted by Stanley Milgram on people’s tendency to obey
authority (Milgram, 1974).
In one version of this experiment, the participants were
recruited by being told that they were going to be part of a study
on memory and on how punishment affects learning. In the lab,
each participant was told that she had to play the role of
“teacher” and that another participant present in the room had
to play the role of “learner”. Unknown to the genuine participant,
the learner was in fact a confederate of the experimenter. The
teacher was supposed to ask questions to the learner and, in case
of incorrect answers, administer electric shocks of progressively
higher voltage by manipulating a simple electronic device.
Unknown to the participant, the learner didn’t in fact receive any
electric shock. The learner only pretended to be in pain when the
participant “administered” the electric shock. The learner’s manifestations of pain were proportional to the voltage of the electric shock that the participant believed she was administering. In general,
when the learner’s complaints became relatively strong, research
participants manifested their uneasiness with what they believed
was happening to the learner. Many of them asked the experimenter to stop the experiment. In response to these requests, the
experimenter demanded obedience, insisting that it was very
important for the teacher to follow the instructions accurately. In
the end, 65% of the participants inflicted (what they believed to
be) electric shocks of the highest voltage on their respective
learner, in spite of the learner’s pleas to stop.
Many commentators today consider the Milgram experiment
a paradigmatic example of the unethical use of deception in
psychological research. The participants were deceived about the
purpose and design of the experiment and about the role of the
other participants. Moreover, they were told to follow the instructions despite manifesting uneasiness. The participants were
debriefed, at a time in which debriefings were not required by the
professional codes of ethics, but after debriefing they had to deal
with the knowledge of the fact that they had been capable, under
the influence of authority, of inflicting considerable pain on others.
Milgram’s experiment is often taken to have caused significant
psychological harm to the research participants. However, the significance of the harm actually caused remains controversial. Elms
(1982), who worked behind the scenes of the experiment and
interviewed the participants after their experience, claims that
they suffered remarkably little, given what he had expected after witnessing their reactions during the experiment. According to him,
the experience had been distressing for them, but not more than
an emotionally involving movie or a disappointing job interview.
But was Elms right? Whether the experiment caused significant and
long-lasting psychological harm to the participants is an empirical
question, one that cannot be answered by appealing to untested
intuitions or casual observations. That said, there seem to be
many cases in which experiments involving deception don’t cause
any discomfort or harm to the research participants (Kimmel, 1998).
A sensible view is that, if the psychological harm inflicted on
the research participant goes beyond a certain threshold, then
the experiment is not permissible, independently of how great its
potential benefits to society are. When the threshold is surpassed,
the experiment becomes an act of injustice against the research
participant. On this proposal, one can decide whether to conduct
an experiment involving deception by using the following procedure. The first step is to ask whether the experiment is likely to
cause significant psychological harm to the participant. If it is,
then the experiment is not morally permissible. If it is not, then
one should ask whether the harm to the participant (if any) is outweighed by the potential benefits of the research. Only if it is can the experiment be justified. This view seems to be implicit in
the professional codes of ethics and arguably experiments conducted in accordance with the requirements set in those codes
don’t cause significant harm to research participants.
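To make the two-step structure of this procedure explicit, the following is a minimal illustrative sketch of ours, not drawn from any professional code. The numerical harm, benefit, and threshold values are placeholder assumptions standing in for substantive ethical judgments; the only point is that the harm test is applied before, and independently of, the weighing of benefits.

    # Illustrative sketch only: encodes the order of the two questions in the
    # decision procedure described above. "Harm", "benefit", and the threshold
    # are placeholders, not operational psychological measures.

    def deception_experiment_permissible(expected_harm, expected_benefit,
                                          significant_harm_threshold):
        # Step 1: the harm test comes first. If the experiment is likely to
        # cause significant psychological harm, it is impermissible regardless
        # of how great its potential benefits to society are.
        if expected_harm >= significant_harm_threshold:
            return False
        # Step 2: sub-threshold harm (if any) is weighed against the potential
        # benefits of the research; only if the benefits outweigh the harm can
        # the experiment be justified.
        return expected_benefit > expected_harm

    if __name__ == "__main__":
        # A mildly distressing but informative study passes the procedure;
        # a severely harmful one fails no matter how beneficial it would be.
        print(deception_experiment_permissible(0.2, 0.8, 0.5))   # True
        print(deception_experiment_permissible(0.9, 10.0, 0.5))  # False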
Autonomy
Another popular argument against the use of methodological
deception is that experimenters violate the personal autonomy of
the research participants by deceiving them, and autonomy
should never be violated. On this view, by omitting or lying about
the real purpose or structure of the experiment, researchers use
participants in ways the participants have not consented to.
Obviously, one can reject the view that autonomy can never be
violated. The thought is that, although the autonomy of an individual is valuable, it can be violated when the benefits to be derived
from its violation are more morally valuable for the individual or
for society than the preservation of the individual’s autonomy.
On this view, methodological deception can be regarded as morally
permissible when it brings about morally significant benefits. This
is an attractive position, but it leads to difficult questions about
what individual research participants owe to society and about
who gets to decide which outcomes, if any, are more valuable
than the preservation of personal autonomy. However, we shall
explore ways in which one can use methodological deception
without compromising the autonomy of research participants.
The use of deception in research is not necessarily incompatible
with the view that personal autonomy should never be violated.
The standard view is that the preservation of autonomy
requires that informed consent be obtained from research participants. Methodological deception requires that participants
remain unaware of or be misled about important details of the
research in which they take part. If deceptive methods and
informed consent are incompatible, the demands of the principle
of respect for personal autonomy could never be satisfactorily
met by the use of deceptive methods.[2] But arguably there are
forms of informed consent that can be reconciled with the use of
deceptive methods in psychological research.
In the biomedical case, there are circumstances in which the
potential participant is unable to give or deny consent concerning
her participation in a given experiment. This can happen when
the subject is unconscious or critically ill. In these cases, it is possible
to ask a legal representative or close relative to consent on behalf of the participant, after providing information about all the
important details of the research protocol. This is a form of indirect
informed consent, or informed consent by proxy. According to
some authors, deceptive methods don’t violate the autonomy of
research participants in so far as indirect informed consent is
obtained.
Clarke (1999) suggests a scenario of this sort. If a person is to participate in a psychological experiment and, because of the nature of the experiment, she cannot be given all the necessary information about it, she can nominate a person she trusts who is given all the information about the experiment. This person judges whether it is acceptable for the research participant to take part in the experiment and gives or denies consent on her behalf. If consent is given, then the use of deceptive methods doesn’t constitute a violation of the participant’s autonomy. Patry (2001) endorses a similar solution. This proposal, even if not equivalent to a ban on the use of deceptive methods in psychology, demands a radical modification of current codes of practice.

[2] Some argue that asking research participants for their informed consent is not always a necessary measure to ensure that their autonomy is respected. There is a lively debate in the bioethical literature on the relation between informed consent and the respect for autonomy that we have no time to explore here. It will suffice to say that, even in the context of biomedical research, whether informed consent is necessary for autonomy is an open question (O’Neill, 2003).
A less revisionist alternative, which we prefer, is to make it a
requirement that research participants are told from the start
that they might not be given all the correct information about the
purpose, design, or setting of the experiment. Many research committees already require that participants consent with the understanding that some deception might be involved, and such a requirement is in the spirit, although not in the letter, of the current
regulations. The APA code of ethics (2002) requires that participants have the option to withdraw at any time during the experiment and that at the end of the experiment they be debriefed
and acquire all the information that was withheld from them during
the experiment, including information about the way they were
deceived, the purpose of the deception and the importance of
the expected results. But it is not one of the conditions made
explicit in the current formulation of the APA code of ethics that
researchers inform the participants of the possibility of deception
from the start. The code just encourages psychologists to “explain
any deception that is an integral feature of the design and conduct
of an experiment to participants as early as is feasible, preferably
at the conclusion of their participation, but no later than at the
conclusion of the data collection, and permit participants to
withdraw their data”.
If experimenters include in their consent forms that some
information about the design or the conduct of the experiment
might be withheld until debriefing or that some of the information
initially provided might later turn out to be false, participants are
alerted to the possibility of the use of deceptive methods and
maintain control over their participation in the research. Within
this consent procedure, their initial consent is conditional on
their renewing their consent at the moment of debriefing, where
they have the option to withdraw their data. The autonomy of
research participants is thereby not violated.
According to Clarke and Patry, the standards that apply to
biomedical research should also apply to psychological research.
This is one of the reasons why they suggest the adoption of
informed consent by proxy. But in the biomedical case that consent
procedure is adopted only when the research participant lacks
capacity. This doesn’t apply to participants in psychological
experiments involving deception. Our preferred consent procedure maintains an important analogy with the biomedical case.
Before clinical trials, participants receive detailed information
about the purposes of the research and the risks involved in their
participation, but they are not told whether they will receive the
potential treatment or the placebo. This shows that, even in the
biomedical case, some important information is withheld from
the research participants. As long as the participants consent with
the understanding that this information is being withheld, this
does not constitute a violation of their autonomy. The same
applies to research participants in psychological experiments
involving deception.
Corruption
According to Elms (1982), researchers in psychology may
become morally corrupt as a result of using deceptive strategies
and public knowledge about the existence of such strategies may
undermine public trust in researchers in general. On this view, a
long-term consequence of the widespread use of deceptive methods
could be the public distrust of psychologists in particular and of
scientists in general. If the relation of trust between potential participants and experimenters is systematically violated, experimenters might acquire a bad reputation among the general public
and the number of people wishing to participate in psychological
research—or wishing to fund it through donations or taxation—
might diminish (Lawson, 2001).
These objections against the use of deception are not very convincing. The risk that experimenters become morally corrupt as a
result of using deceptive strategies is low, if their intention to
deceive is based merely on methodological grounds. In contrast
with other forms of human deception, experimenters are not
motivated by the desire to defraud or to gain an unfair advantage
over someone, and have no intention to harm. The purely
methodological function of deception in psychology makes it
unlikely that its use will have negative effects on the personalities
of the experimenters.
The risk that deception will give a bad name to psychology
is also low, provided that experiments are conducted according to the guidelines set out in the codes of practice. As long as
people in general and the participants in psychological
experiments in particular understand that deception is only an
indispensable methodological tool and not the result of some
“wicked” desire of the experimenter, and as long as they
understand that such methodological tools are used only when
no risk of significant harm is present and when important
results are at stake, no feeling of distrust towards the researchers
is likely to emerge.
For Methodological Deception
The use of deceptive methods is in many cases crucial to the
identification of behavioral dispositions that negatively affect the person who has such dispositions, the people who
interact with that person, and society as a whole. Knowledge of
the existence of these dispositions, of what elicits them, and of
how they work, can certainly be put to good use. We now want
to provide a more complete account of what kinds of moral
benefits may be generated by this knowledge. We shall start
with an example.
Discrimination in hiring practices
The study of prejudices and biases against individuals of a particular race, gender, age, sexual preference, or physical appearance
can help uncover aspects of human behavior that cause unfair
discrimination and that individuals and society can try to control
or change. Deceptive methods can be extremely useful in this
area. A particular case is the study of discrimination in the labor
market. For example, Pingitore et al. (1994) designed and
conducted an experiment to establish the existence of biases
generating discrimination against overweight individuals in job
interviews.
Introductory psychology students were shown videotapes of
mock job interviews. The job candidates were two professional
actors, a man and a woman. The independent variables were the
weight of the mock job candidates and the type of job for which
they were being interviewed. The actors were of normal weight
but were made to look moderately obese by make-up and theatrical prostheses in some of the videotapes. The jobs were a sales representative position involving contact with people and a systems analyst position with limited contact with people. The actors’ performances were designed to represent the possession of equal average abilities by the two candidates. Research participants
were asked to read job descriptions and CVs, watch the video, and
then rate the applicants. Pingitore and colleagues found that
there was a significant bias against hiring overweight applicants,
especially for female applicants. This study involved methodological
deception. For instance, the students were not told that they were
participating in a study about discrimination against overweight
job applicants and they were not told that they were watching
simulated interviews.
Discriminatory hiring practices are unjust and are a cost to
society. Knowledge about what causes them can become a tool for
avoiding or correcting these practices. This is an example of what
Saxe says when he claims that “in case of deception research [. . .]
lying enables one to conduct high-impact research that potentially serves the public good” (Saxe, 1991, p. 414). Biases against
overweight job applicants harm overweight job applicants by generating an unfair advantage in favor of normal-weight applicants,
harm companies by leading employers to hire less competent
normal-weight individuals over more competent overweight
individuals, and harm society as a whole. Employers who are
aware of the existence of this bias may be able to avoid its effects
and safeguard the interests of overweight job candidates, of their
company, and of society. They may be able to enhance the
efficiency of the hiring process and the prospects of success of
their company as well as to positively contribute to social justice.
Moreover, knowledge about the existence of the bias can be useful
in the creation of anti-discriminatory legislation and other corrective
mechanisms.
Research participants are members of society. Thus, by
benefiting society at large, knowledge about behavioral tendencies obtained through the use of deceptive methods also benefits
research participants. But there is also a more direct way in
which the deceived participant may benefit from taking part in
the experiment. During the experiment or, more likely, at the
moment of debriefing (when the participant learns about the
purpose and design of the experiment), the participant acquires
important information about her own behavioral tendencies.
This information may help the participant better control
her behavior and thereby better exercise her autonomy. In
this particular case, at the moment of debriefing, research participants acquire information about their own biases and become
better able to behave autonomously in the context of hiring
decisions.
In other words, the potential benefits of the experimental
results that can be obtained by means of methodological
deception can be characterized in terms of two types of
morally significant knowledge: the individual self-knowledge
acquired by research participants and the collective self-knowledge acquired by society. But what are the costs of the
acquisition of such knowledge to research participants?
According to Clarke:
Many participants in the Milgram obedience studies found out something
unexpected about themselves; that they were more prone to obey authority
figures than they might have supposed. While there may sometimes be
long-term benefits to individuals to be derived from gaining this information
about themselves, such self-discoveries can often be harmful rather than
beneficial. Subjects who make unexpected and unwelcome discoveries
about themselves can be subjected to lowered self-esteem and other negative
feelings. (Clarke, 1999, p. 154)
Acquiring knowledge about one’s own discriminatory tendencies can be distressing. As we said in Section 2, if the experiment is likely to result in significant psychological harm, then it
should not be performed — and this includes those cases where
the harm is generated by the self-knowledge the participant
acquires during the experiment or at debriefing. If there is no significant harm, then the interest in not being caused mild distress
by the acquisition of knowledge of one’s behavioral tendencies
needs to be weighed against other morally significant interests.[3]
For example, it is implausible to claim that people’s interest in
not suffering the mild distress caused by the acquisition of information about their own discriminatory tendencies is more morally
important than the interests set back by the discriminatory practices
themselves. The ethical significance of the interest that research
participants might have in remaining ignorant about discriminatory biases of which they are unaware is weighed against the ethical
significance of the interests that research participants and society
at large have in knowing about such biases. The outcome seems
to be that acquiring reliable information about the biases that
generate discriminatory practices is much more morally important
than preventing research participants from having a mildly distressing experience.
We shall now focus on the positive effects that the acquisition of individual self-knowledge might have on research participants.

[3] Some may want to argue that people have a right not to be exposed to knowledge about themselves that might be psychologically distressing, unless of course they want and intentionally seek to acquire such knowledge. On this view, research participants have a right not to be deceived by researchers aiming at uncovering discriminatory behavioral tendencies. This view seems implausible. Consider the following analogy. Acquiring the knowledge that humans are the product of a process of biological evolution that was not designed by an intelligent mind and that started 4 billion years ago might be very psychologically distressing for a person who believes that humans were created by a benevolent and omnipotent God a few thousand years ago. But in spite of this, no right of this person is violated by the teaching of evolutionary theory in schools and universities or by the publication of popular books that present evidence in favor of it. To many believers, the mere thought that they were not created by God but by a “blind” physical process is disheartening. They might think that, if there is no Creator, their life has no meaning. Therefore, the impact of evolutionary theory on someone’s self-esteem can be even more significant than that produced by a psychological experiment which reveals some unconscious dispositions. Of course, one is not “forced” to read books defending evolutionary theory, whereas one has no choice but to be exposed to unsought self-knowledge if one takes part in a psychological experiment involving a certain form of deception. But then again, if evolutionary theory is taught in school, one cannot really avoid being exposed to it either. These issues are very complex, and point to how difficult it is to determine the sphere of application of an alleged “right not to know”.

Promoting autonomy

We claimed that, by obtaining information about their tendencies through their participation in psychological experiments, research participants can gain better control over their autonomous choices and can take steps to limit the effects of psychological biases on their decisions and behavior.
One of the concerns of those who would like to see deception
banned from psychological research is that, by not fully informing
the participants about the nature and purpose of the experiment,
researchers violate the participants’ autonomy. But in Section 2
we argued that the personal autonomy of the participants can be
adequately safeguarded in psychological research. If participants
are alerted to the possible use of deception, can withdraw from experiments at any time, and are provided with careful and sensitive debriefing, their autonomy is not violated in a morally significant way, as they maintain control over their participation in
the research. Thus, the principle of respect for personal autonomy is honored by the (heavily constrained) use of deception in
psychological experiments.
Not only does properly conducted psychological research
honor the autonomy of the research participants. It can contribute to promoting it. Awareness of the experimental results about,
say, discriminatory tendencies affecting hiring practices is likely
to enhance participants’ autonomy by providing them with the
valuable means to initiate a self-reflective exercise about the reasons
for their decisions and the justification of their adopted selection
criteria. This reflective exercise can have as its consequences the
development of strategies that can improve the research participants’ future performance as decision makers. In so far as it is can
be generalized, the knowledge of psychological tendencies
revealed by experiments involving deception can contribute to
promoting not only the autonomy of research participants, but
also that of other individuals in society.
Promoting autonomy seems to be not only about offering
people the opportunity to make their own independent decisions,
but also about ensuring that those who have this opportunity are
aware of the relevant factors that might affect their decisions. On
some influential accounts of autonomy, there is more to autonomous decision-makers than the capacity for independent thought
and action. Autonomy involves the capacity of the agents to be
responsive to a wide range of reasons for and against behaving as
they do (Wolf, 1990) and only the agents who can change their
minds when they discover a good reason to do so can be said to
be truly autonomous (Dworkin, 1988). If unsought self-knowledge
contributes to making research participants aware of some of
their behavioral dispositions—e.g., of the fact that when they
make hiring decisions they are influenced by whether candidates
are overweight—it also gives them the opportunity to track the
reasons for their decisions and to assess the relevance of these
reasons in the light of some of their other beliefs and values.
The philosophical conception of autonomy as responsiveness to reasons for action is reflected in many societal practices.
Our society already takes the view that people should be subject
to potentially distressing experiences if they are to gain important
benefits through them. Education is an obvious example.
Although it does involve a number of stressful experiences, and
especially experiences that have the potential to lower one’s self-esteem, it also promotes a process of discovery and self-discovery that ultimately makes people able to take up their roles
as citizens and moral agents and to receive the benefits associated
with full participation in society.
Conclusion
After reviewing some of the most influential arguments against
the use of methodological deception in psychological research, and after developing some arguments in favor of it, we have concluded
that, when some conditions are met, the use of methodological
deception is morally permissible.
The main arguments against deceptive methods are that they
might cause harm to research participants and that they violate
research participants’ personal autonomy. The current professional codes of ethics are designed to prevent psychological
experiments involving deception from being conducted when
they are likely to cause severe psychological harm to the research
participants. Experiments conducted according to current regulations can in some circumstances bring about mild distress, but we
argued that the possibility of such distress doesn’t by itself show
that the experiments are not morally permissible. Whether they
are depends on the moral significance of the other interests
which would be negatively affected by not conducting these
experiments.
We believe that the constrained use of deception in psychological research does not necessarily violate the research
participants’ autonomy. If research participants are alerted to the
fact that some of the information initially provided about the
experiment can be misleading or incomplete and have the
opportunity to withdraw from the experiment at any time without
incurring any penalty, they maintain control over their participation in the research and their autonomy is not infringed.
Methodological deception can be used to achieve reliable
results about important areas of investigation, such as discrimination in the labor market. In these cases, there are moral reasons
in favor of conducting experiments where deceptive methods are
used. These moral reasons come from the beneficial effects that
the experimental results can produce for the research participants (through the acquisition of individual self-knowledge) and
for society in general (through the acquisition of collective self-knowledge). By raising awareness of behavioral tendencies that
give rise to, say, discrimination, social psychologists can help society
find ways to avoid injustices and they can help individuals gain
better control over their autonomous decisions and behavior by
being able to assess the reasons behind their choices. This type of
self-knowledge can contribute to promoting the exercise of
autonomy. At the present time, the use of deceptive methods is
the only effective way of acquiring this type of self-knowledge in
some sensitive contexts. If alternative and equally effective methods
of investigation become available, the ethical permissibility of
the constrained use of deception will have to be reviewed.
Acknowledgments
In the preparation of this article, we acknowledge the stimulus and support of
the EURECA project on delimiting the research concept and the research activities,
sponsored by the European Commission, DG-Research, as part of the Science and
Society Research Program—6th Framework. We are also grateful to Stephen
Clarke and Chris Herrera for many useful suggestions on how to improve the
article. The audience of the Research Seminar Series organized by the Centre
for the Study of Global Ethics at the University of Birmingham (UK), where an
earlier version of this article was presented, provided stimulating discussion.
References
American Psychological Association (APA) (2002). Ethical Principles of Psychologists
and Code of Conduct, http://www.apa.org/ethics/. Accessed 24 September, 2005.
Aronson, E. (1999). Dissonance, Hypocrisy and the Self-Concept. In: E. Harmon-Jones and J. Mills (eds.), Cognitive Dissonance. Washington, D.C.: American
Psychological Association, pp. 103–126.
Bok, S. (1999). Lying: Moral Choice in Public and Private Life. New York: Vintage.
British Psychological Society (1992). Ethical Principles for Conducting Research with
Human Participants, http://www.bps.org.uk/the-society/ethics-rules-chartercode-of-conduct. Accessed 24 September, 2005.
Clarke, S. (1999). Justifying deception in social science research, Journal of
Applied Philosophy, 16(2): 151–166.
Dworkin, G. (1988) The Theory and Practice of Autonomy. New York, NY: Cambridge
University Press.
Elms, A. (1982). Keeping deception honest: Justifying conditions for social scientific research stratagems. In: Beauchamp, T., Faden, R., Wallace, J.,
Walters, L. (eds.), Ethical Issues in Social Science. Baltimore: Johns Hopkins
University Press, pp. 232–245.
Gillespie, R. (1991). Manufacturing Knowledge: A History of the Hawthorne Experiments.
Cambridge: Cambridge University Press.
Herrera, C. (1999). Two arguments for covert methods in social research, British Journal of Sociology, 50(2): 331–343.
Kelman, H. (1967). Human use of human subjects: The problem of deception in social psychological experiments, Psychological Bulletin, 67(1): 1–11.
Kimmel, A. (2001). Ethical trends in marketing and psychological research, Ethics & Behavior, 11(2): 131–149.
Lawson, E. (2001). Informational and relational meanings of deception: Implications for deception methods in research, Ethics & Behavior, 11(2): 115–130.
Milgram, S. (1974). Obedience to Authority. New York: Harper & Row.
Nisbett, R. and Ross, L. (1980). Human Inference. Englewood Cliffs: Prentice Hall.
O’Neill, O. (2003). Some limits of informed consent, Journal of Medical Ethics, 29: 4–7.
Ortmann, A. and Hertwig, R. (1997). Is deception acceptable? American Psychologist,
52: 746–747.
Patry, P. (2001). Informed consent and deception in psychological research,
Kriterion, 14: 34–38.
Pingitore, R., Dugoni, B., Tindale, S., and Spring, B. (1994). Bias against overweight job applicants in a simulated employment interview, Journal of Applied
Psychology, 79(6): 909–917.
Pittinger, D. (2002). Deception in research: Distinctions and solutions from the
perspectives of utilitarianism, Ethics & Behavior, 12(2): 117–142.
Saxe, L. (1991). Lying: Thoughts of an applied social psychologist, American
Psychologist 46(4): 409–415.
Wolf, S. (1990). Freedom Within Reason. New York: Oxford University Press.