DESIGNING PSYCHOLOGICAL INVESTIGATIONS
To read up on designing psychological investigations, refer to pages 699–720 of Eysenck’s A2 Level
Psychology.
Ask yourself
• What is the difference between a quantitative and qualitative approach to data collection?
• What is the difference between reliability and validity?
• Can you remember which studies raised ethical issues from your studies at AS level?
What you need to know
THE SELECTION AND APPLICATION OF APPROPRIATE RESEARCH METHODS
• Laboratory experiments
• Field experiments
• Quasi-experiments
• Natural experiments
• Correlational analysis
• Observational techniques
• Self-report techniques
• Case studies

EXPERIMENTAL AND NON-EXPERIMENTAL DESIGNS
• Experimental research designs: independent groups design, matched participants design, repeated measures design
• Non-experimental research designs: observation, interviews, questionnaires, pilot studies

THE STRENGTHS AND WEAKNESSES OF DIFFERENT FORMS OF SAMPLE METHODS
• Random sampling
• Opportunity sampling
• Volunteer sampling

RELIABILITY: ASSESSMENT AND IMPROVEMENT
• Internal and external reliability
• Techniques to check reliability

VALIDITY: ASSESSMENT AND IMPROVEMENT
• Internal/experimental validity
• Checking validity

ETHICAL CONSIDERATIONS AND RESOLUTIONS IN PSYCHOLOGICAL RESEARCH
• The BPS guidelines
• Ethical issues and the steps taken to deal with them
THE SELECTION AND APPLICATION OF APPROPRIATE RESEARCH METHODS
Research methods take either a quantitative or qualitative approach, which depends on whether the data
collected is numerical or non-numerical. Thus, quantitative = numbers and qualitative = words. To decide
which method is the most suitable, careful consideration needs to be given to the strengths and weaknesses
of the research methods.
Laboratory experiments
The laboratory experiment takes place in a controlled environment and enables the experimenter to test the
effect of the IV (independent variable) on the DV (dependent variable). To establish a difference and so
detect cause and effect relationships, the IV is systematically varied between two conditions.
Advantages: Cause and effect can be established; objective and so less researcher bias.
Weaknesses: Lacks mundane realism; reductionism.
Field experiments
Field experiments take place in natural settings, e.g. the work environment. The experimenter has control of
the IV and so causal relationships can be established. Participants are often not aware they are being
researched.
Advantages: Mundane realism; cause and effect can be established.
Weaknesses: Confounding variables; ethical issues.
Quasi-experiments
Quasi-experiments take place when the experimenter cannot control the IV—it is said to be naturally
occurring. For example, experiments involving gender, age, class, or cultural differences would be classed
as quasi because the experimenter cannot manipulate these factors as the IV. However, the experimenter
does have control of the research setting.
Advantages: Can research phenomena that could not otherwise be investigated experimentally; controlled conditions.
Weaknesses: Cause and effect not established; reductionist.
Natural experiments
A natural experiment is a kind of quasi-experiment, but the researcher has no control over the IV or the
research setting.
Advantages: Can research phenomena that could not otherwise be investigated experimentally; mundane realism.
Weaknesses: Cause and effect not established; confounding variables.
Correlational analysis
Correlational analysis is a technique that measures the strength of the relationship between two variables.
The paired scores of the two variables are analysed to establish the strength and direction of the association,
e.g. the relationship between stress and illness. This can be illustrated visually through scattergrams and
numerically through correlation coefficients. These range from +1 to 0 to -1, where the sign shows the
direction and the number shows the strength of the association.
Advantages: Shows the direction and strength of relationships; can be used when an experiment cannot.
Weaknesses: Cause and effect not established; other factors may be involved.
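To make the idea of a correlation coefficient concrete, here is a minimal Python sketch of how paired scores might be analysed; the stress and illness scores below are hypothetical and are only there to show that the sign gives the direction and the size gives the strength of the association.

    from statistics import correlation  # requires Python 3.10+

    # Hypothetical paired scores for two co-variables (not real data)
    stress_scores = [12, 18, 25, 31, 40, 47]
    illness_scores = [2, 3, 5, 6, 8, 10]

    r = correlation(stress_scores, illness_scores)
    print(f"r = {r:+.2f}")  # ranges from -1 to +1; sign = direction, size = strength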
Observational techniques
Naturalistic observation
Naturalistic observation involves examining behaviour in a natural setting with minimal intrusion from the
researcher because it aims to observe people’s natural behaviour. Participants might be aware they are
being observed (overt observation), or might not (covert observation).
Advantages: Mundane realism; less biased by demand characteristics.
Weaknesses: Observer bias; describes behaviour but does not explain it.
Controlled observation
Controlled observation takes place when the researcher has control of the environment in which the
observation occurs.
Advantages: Less biased by confounding variables; richer data.
Weaknesses: Artificial; researcher and participant effects, e.g. the Hawthorne effect.
Self-report techniques
Interviews
As a form of self-report technique, interviews take many different forms: they usually take place face to
face and can yield rich, in-depth data.
Non-directive interviews
These are led by the participant (the interviewee), who is free to discuss whatever he or she chooses. The
interviewer guides the discussion by encouraging the interviewee to elaborate on responses. Such
interviews tend to be used in the treatment of mental disorders but have little relevance to psychological
research.
Informal interviews
The interviewer has a list of topics that are to be discussed. Informal interviews resemble non-directive
interviews in that the interviewee is allowed to discuss the topics in whatever way and order he or she
chooses, with the interviewer mainly encouraging more depth and detail.
Semi-structured or guided interviews
All interviewees are asked precisely the same questions in the same order. Thus, these interviews possess
more structure than informal interviews because the interviewer takes control of the issues to be discussed
and decides in what order they will be covered.
Clinical interviews
These are often used by clinical psychologists to assess patients with mental disorders. All interviewees are
asked the same questions but follow-up questions depend on the answers given. This gives the interviewer
flexibility to explore and follow-up on interesting answers.
Structured interviews
A standard set of questions is asked in exactly the same order and the interviewees have a restricted range
of answers to select from.
Advantages: Rich, detailed data; flexible due to the different formats.
Weaknesses: Interviewer bias; participant effects.
Questionnaires
Written questionnaires are a type of interview. They can be conducted face to face, via the telephone, or by
post. A questionnaire consists of a standard set of questions, which are either closed (fixed response, e.g.
rating scales) or open-ended (requiring detailed responses). Questionnaires are used to survey attitudes,
beliefs, and behaviour.
Advantages: Flexible, as both open and closed questions can be used; quick and economical.
Weaknesses: Researcher bias, e.g. leading questions; participant effects.
Case studies
A case study is the in-depth study of an individual or small group. Examples that you will have come
across during your studies include: case studies of abnormality (e.g. Little Albert and Anna O), case studies
of brain damage (e.g. HM), and case studies of privation (e.g. Genie).
Advantages: In-depth data; influences future theoretical developments.
Weaknesses: Generalisability; subjectivity.
EXPERIMENTAL AND NON-EXPERIMENTAL DESIGNS
To account for the application of research methods you need to be able to explain how different methods
are implemented.
Experimental research designs
There are three experimental designs that aim to control participant variation (i.e. individual differences
between the participants) that could interfere with the effect of the IV (independent variable) on the DV
(dependent variable). The three designs share a common characteristic of experiments: there are two
conditions, the control and experimental, and the IV is varied across these. The three designs are discussed
below.
1. Independent groups design
The independent groups design involves two or more groups of different participants. Thus, there are
different participants in each of the conditions, and each participant experiences only one condition.
Advantages: Avoids order effects; random allocation is possible.
Weaknesses: Participant variables, i.e. individual differences, because there are different participants in each group; a larger number of participants is needed.
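Because random allocation is one of the main strengths of the independent groups design, it may help to see how it could be done in practice. The Python sketch below is only illustrative: the participant labels and group sizes are hypothetical.

    import random

    # Hypothetical pool of 20 participants
    participants = [f"P{i:02d}" for i in range(1, 21)]
    random.shuffle(participants)  # randomise the order before splitting

    half = len(participants) // 2
    control_group = participants[:half]        # condition 1
    experimental_group = participants[half:]   # condition 2

    print("Control:", control_group)
    print("Experimental:", experimental_group)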
2. Matched participants design
The matched participant design is as it sounds—the participants in each condition are matched on certain
relevant variables. The participants experience only one condition. Thus, there are two groups, which are
matched, and each participant experiences a different condition. This involves matching the participants on
a one-to-one basis, not simply matching the groups as a whole.
Advantages: Minimises participant variables; order effects are avoided.
Weaknesses: Cannot eliminate participant variables; difficult to achieve a good match.
3. Repeated measures design
In the repeated measures design the same participants experience both conditions; thus, there is one group
of participants who take part in both conditions.
Advantages: Minimises participant variables; fewer participants are needed.
Weaknesses: Order effects, i.e. the effects of participating in two conditions, e.g. practice, fatigue, boredom; demand characteristics are easier to guess.
Deciding which design to use often comes down to choosing between independent
measures and repeated measures, because a matched participants design is often too
difficult to carry out economically. The decision then rests on which of the two key
weaknesses will least affect validity: participant variables (independent measures) or
order effects, including demand characteristics (repeated measures).
Non-experimental research designs
Observation design
Several factors need to be taken into account when performing non-experimental research. First,
researchers need to decide whether to conceal themselves (covert) or not (overt), and this will depend on
what is being investigated. A second consideration is whether to conduct a participant or a non-participant
observation. Participant observation is when the researcher becomes a member of the group under
observation in order to observe more natural behaviour. Reliability is a key design consideration as
observers must be consistent in their judgements if the data are to be reliable and valid. Behavioural
categories of what is being observed must be constructed so that observations are reliable (consistent), i.e.
are two observers categorising the same behaviour in the same (i.e. reliable) way? Reliability can be
assessed by having two or more observers and comparing their observations as a measure of inter-observer
reliability (sometimes known as inter-rater or inter-judge reliability).
Interviews
The first consideration is the format of the interview, which can be structured, semi-structured, or
unstructured; the format determines how far the interview is researcher led rather than participant led. A
key issue is the construction of good questions. This is complex because the questions must be clear and
unambiguous: if they communicate different meanings to different participants, then the answers will not be
comparable. To avoid leading the participant, the questions should also be free from bias and subjectivity.
Questionnaires
Questionnaire design involves deciding on the format of the questions.
Closed questions involve a fixed response, which the participant must choose from, for example, yes/no
answers or a Likert scale. This generates quantitative data, which are easier to score and analyse.
Open questions allow the participants to answer freely and so qualitative analysis is needed, which can be
more difficult and time consuming, but can also yield more meaningful data.
As with the interview, a key design issue is the construction of questions that are free from ambiguity and
bias. Another design issue is the construction of Likert scales for self-report ratings. A Likert scale consists
of a five-point scale indicating the level of agreement or disagreement with the statement in the question.
The scoring of either the positive or the negative statements needs to be reversed so that all the scores on
the questionnaire relate to each other in the same direction.
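A short, purely illustrative Python sketch of reverse scoring on a five-point Likert scale is given below; the item names and responses are hypothetical.

    # Responses on a 1-5 scale: 1 = strongly disagree ... 5 = strongly agree
    responses = {"item1": 4, "item2": 2, "item3": 5}
    negatively_worded = {"item2"}  # items whose scoring is reversed

    def item_score(item, value, scale_max=5):
        # On a 1-5 scale, a response of 2 to a reversed item scores 6 - 2 = 4
        return (scale_max + 1 - value) if item in negatively_worded else value

    total = sum(item_score(item, value) for item, value in responses.items())
    print(total)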
Pilot studies
Experimental and non-experimental research designs are often preceded by a pilot study, which allows for
a trial run of the materials so that questions can be checked for clarity and ambiguity, and adjusted if there
are problems before the main study. A pilot study also enables the researcher to check the experimental
procedure for design errors and timings. This saves time and money, as if there are issues with materials or
procedures these can be amended before the main study.
THE STRENGTHS AND WEAKNESSES OF DIFFERENT FORMS OF SAMPLE METHODS
Research is conducted on people, and the group of people that the researcher is interested in is called the
target population. However, as it is not usually possible to use all of the people in the target population, a
sample must be selected. Those selected are called participants for research purposes. Thus, research is
conducted on a sample but the researcher hopes that the findings will be true (valid) for the target
population. For this to be the case, the sample must be representative of the target population. If the sample
is representative then the findings can be generalised back to the target population. If not, the findings lack
population validity. Therefore, the key issue is the generalisability of the sample; this is based on two key
factors:
• type of sampling
• size of the sample.
Random sampling
Random methods mean every participant has an equal chance of being selected. They include methods such
as selecting names out of a hat, or everybody in the population being assigned a number and a computer or
random number table being used to generate the numbers that are selected for the sample.
Advantages: Less biased than samples in which the researcher selects the participants.
Weaknesses: Difficult to obtain a truly random sample; expensive and time consuming.
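The random number approach can be illustrated with a short Python sketch; the population size and sample size here are hypothetical.

    import random

    # Every member of the target population is assigned a number (1-500 here)
    target_population = list(range(1, 501))

    # Each member has an equal chance of being selected
    sample = random.sample(target_population, 30)
    print(sorted(sample))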
Opportunity sampling
Opportunity sampling involves selecting anybody who is available at the time of the study to take part. This
is a popular method and as much as 90% of the research discussed in psychology textbooks will have used
this method—participants are mainly undergraduates at American universities who were selected based on
their availability.
Advantages: Practical and economical.
Weaknesses: Usually drawn from a restricted target population; researcher bias in who is selected as available.
Volunteer sampling
Participants volunteer to take part in a research study, for instance by replying to an advertisement.
Advantages: Practical and economical.
Weaknesses: Volunteers are not representative of the general population.
RELIABILITY: ASSESSMENT AND IMPROVEMENT
Reliability is based on consistency. Research that produces the same results every time it is carried out is
reliable.
Internal reliability = consistency within the method
Measuring instruments such as rulers are consistent within the method of measurement, as the difference
between 0cm and 5cm is the same as that between 5cm and 10cm. However, Likert rating scales lack such
consistency, as the difference between 1 and 2 on the scale might not be perceived by participants to be the
same as the difference between 4 and 5. Unreliable measures reduce internal validity.
External reliability = consistency between uses of the method
External reliability refers to the consistency of psychological tests over time, i.e. the tests must be
consistent between uses of the measure, which can be checked if the test is taken once and then again on a
later occasion.
Techniques to check internal reliability
Inter-rater reliability (or inter-judge reliability)
Inter-rater reliability is used to test the consistency of observations. If the same behaviour is rated the same
way by two different observers then the observations are reliable, as there is consistency within the observation.
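As a rough sketch of how inter-rater reliability might be checked, the Python below compares two observers' categorisations of the same behaviour and reports the proportion of agreements; the behavioural categories are hypothetical and simple percentage agreement is only one possible measure.

    # Two observers categorise the same five observed behaviours
    observer_a = ["play", "aggression", "play", "withdrawal", "play"]
    observer_b = ["play", "aggression", "play", "play", "play"]

    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    agreement_rate = agreements / len(observer_a)
    print(f"Agreement: {agreement_rate:.0%}")  # higher agreement = more reliable categories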
Techniques to check external reliability
Test–retest reliability
Test–retest reliability involves testing once and then testing again at a later date, i.e. replicating the original
research. Meta-analyses draw on this when they compare the findings from different studies that have
tested the same hypothesis, e.g. Milgram’s study of obedience and its variations. Strong consistency
between the different findings (i.e. reliability) indicates validity.
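A minimal sketch of the test–retest idea, assuming the same participants are measured on two occasions, is shown below; the scores are hypothetical.

    from statistics import correlation  # requires Python 3.10+

    # Hypothetical scores from the same test taken on two occasions
    first_testing = [10, 14, 9, 20, 17, 12]
    later_retest = [11, 15, 9, 19, 16, 13]

    r = correlation(first_testing, later_retest)
    print(f"Test-retest r = {r:+.2f}")  # a high positive r indicates external reliability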
VALIDITY: ASSESSMENT AND IMPROVEMENT
Campbell and Stanley (1966, see A2 Level Psychology page 712) distinguished between internal and
external validity.
Internal validity/experimental validity
Does the research measure what it set out to? Is the effect genuine? Is the independent variable (IV) really
responsible for the effect on the dependent variable (DV)? To be valid, the research must measure what it
claims in the hypothesis, i.e. it must be the IV that causes the effect on the DV. If this happens, the research
has truth because the effect is genuine and is caused by the IV rather than by a confounding variable.
Coolican (1994, see A2 Level Psychology page 712) identified threats to internal validity, i.e. other factors
that could have caused the effect on the DV, such as confounding variables, unreliable measures, a lack of
standardisation, a lack of randomisation, demand characteristics, and participant reactivity.
Good research design increases internal validity because it accounts for the threats listed above.
Checking internal validity
If internal validity is high then replication should be possible; if it is low then replication will be difficult.
Thus, validity and reliability are interlinked: if the research has truth (validity) it should be consistent
(reliable) and so replication is possible. Reliability is also an indicator of validity.
Concurrent validity is another means of testing the internal validity of a new test. The scores from the new
test of unknown validity are compared against those of a test in which validity has already been established.
If the scores are similar then the new test is probably valid, i.e. a true measure.
External validity
Coolican (1994, see A2 Level Psychology page 713) identified four main aspects to external validity:
1. Populations: findings have population validity if they generalise to other
populations.
2. Locations: findings have ecological validity if they generalise to other
settings.
3. Measures or constructs: findings have construct validity if the measures
generalise to other measures of the same variable, e.g. does a measure of
recall of word lists generalise to everyday memory?
4. Times: findings have temporal validity if they generalise to other time
periods, e.g. do findings from the past generalise to the current context or do
current findings generalise to the past or future? This is difficult to achieve
as, to some extent, all research is dependent on era and context.
Checking external validity
A meta-analysis involves the comparison of findings from many studies that have investigated the same
hypothesis. Findings that are consistent (reliable) across populations, locations, and periods in time indicate
validity. Thus, if a study has validity then it is likely to replicate, and reliability in the meta-analysis is used
as an indicator of validity. So it would seem that you rarely have one without the other, apart from
consistently wrong findings!
Predictive validity is another means of checking external validity. It involves using the data from a study to
predict behaviour at some point in the future. If the prediction is correct, then this suggests that the original
data did generalise to a future context and so has external validity.
ETHICAL CONSIDERATIONS AND RESOLUTIONS IN PSYCHOLOGICAL RESEARCH
Ethical issues arise when ethical guidelines are breached. The need for ethical controls led to the
establishment of ethical guidelines, i.e. rules that can be used to judge the acceptability of research. Most
countries now have a psychological organisation that has devised its own code of conduct, such as the BPS
(British Psychological Society) guidelines.
The British Psychological Society guidelines for research with human participants
1. Introduction
Ethical guidelines are necessary to ensure psychological research is
acceptable.
2. General
The participants’ viewpoint of research should be considered and so
members of the target population from which the sample will be drawn
should be asked about the acceptability of the research.
3. Consent
Participants’ consent should be informed, i.e. they should have full knowledge
of the nature and purpose of the research. A briefing should fully inform
them about the study and advise them of their rights (withdrawal and
confidentiality). If the participant is a child (under 16 years) or impaired,
adult consent must be gained from the parent or from those in loco parentis.
If informed consent was not gained at the outset then the safeguards needed
for such a deception would be as detailed below in the deception guideline.
4. Deception
Deception of the participants should be avoided wherever possible.
Information should not be deliberately withheld and nor should the
participants be misled. Deception should only be used when alternative
procedures, which do not involve deception, have been fully considered and
rejected as unfeasible by independent advisors. Also, participants should be
fully informed at the earliest possible stage and should be consulted in
advance as to how deception would be received.
5. Debriefing
At the end of a study the researcher should provide detailed information
about the research and answer any questions the participants might have.
The researcher should also monitor the participants for unforeseen negative
effects and is responsible for providing active intervention if necessary.
6. Right to withdraw
Participants’ right to withdraw from the study must be clearly communicated
at the outset of the research. Also, participants have the right to withdraw
their consent retrospectively, in which case their data must be destroyed.
7. Confidentiality
In accordance with the Data Protection Act, information disclosed during the
research process is confidential and, if the research is published, the
anonymity of the participants should be protected. If either of these is likely
to be compromised, then the participants’ agreement must be sought in
advance.
8. Protection of participants from psychological harm
Participants should be protected from psychological harm, such as distress,
ridicule, or loss of self-esteem. The risk of harm during the research study
should be no greater than that experienced in everyday life. If there is the
potential for harm then independent approval must be sought, the
participants must be advised, and informed consent gained.
9. Observational research
Studies based on observation must respect the privacy and psychological
well-being of the individuals studied. Consent should be gained unless the
observation is in a public situation where one could expect to be observed by
strangers.
10. Giving advice
Research might reveal physical or psychological problems of which the
participant is unaware. It is the researcher’s responsibility to inform the
participant if it is felt that to not do so would endanger the participant’s
future well-being.
Ethical issues and the steps taken to deal with them
Deception: Ethical issues
Participants in Milgram’s research into obedience were deceived in several ways. Can you think of at least
three of these deceptions? Deception is an ethical issue because it is often considered necessary to avoid
demand characteristics, which would invalidate the findings, and so it might be used despite the potential
harm to the participants.
Resolution: Role-play
One way of avoiding the ethical problems associated with deception is the use of role-playing experiments.
This approach eliminates many of the ethical problems of deception studies, but there is a danger that the
behaviour displayed by role-playing participants is not the same as the behaviour would be if they had been
deceived, and so findings are not as valid. Consider Zimbardo’s (1973) Stanford prison experiment—do
you think the fact that this was a role play affected the validity of the findings?
Debrief
Debriefing is an important method for dealing with deception and other ethical issues because this is an
opportunity to tell participants the actual nature and purpose of the research and so helps resolve the
deception. The debrief should also include the right to withhold data if the participants are unhappy with
the deception.
Informed consent: Ethical issue
Studies that have involved deception lead to the related issue of informed consent. Participants might have
consented to the research but this is not informed consent if they have been deceived. Even in studies such
as Zimbardo’s, where participants were briefed in advance, it is difficult to be sure that the true nature of
the study was grasped, and therefore consent might not have been fully informed and participants might not
have consented had they known the true nature of the research.
Resolution: Seeking presumptive consent
Presumptive consent involves asking the opinion of members of the population from which the participants
in the research are to be drawn. Milgram did this before his experiments.
Prior general consent
Prior general consent (also known as partially informed consent) involves asking participants to take part in
research and revealing that the research involves some deception; thus participants have agreed in advance
that they consent to being deceived about the true nature of the research. Of course this is very vague—they
couldn’t really give informed consent because they lacked detailed information.
Right to withhold data and retrospective consent
Another means of offering informed consent is to do it afterwards. When the experiment is over, during
debriefing, participants should be offered the chance to withhold their data. However, this of course does
not undo any distress participants experienced during the study. Participants who exercise their right to
withhold data might have had experiences during the experiment that they would not have agreed to if they
had realised beforehand what was going to happen to them.
Protection of participants: Ethical issues
The key test of whether or not a participant has been harmed is to ask whether the risk of harm was greater
than in everyday life. Harm includes physical distress: for example, some of Milgram’s participants
experienced physical harm, as three had seizures and many perspired and bit their lips. Protection from
psychological harm covers psychological distress: for example, the studies by Milgram and Zimbardo
demonstrate a lack of protection because of the suffering of the participants: sweating, trembling, and
seizures in Milgram’s study, and crying, screaming, and depression in Zimbardo’s study. Participants may
also have suffered long-term harm if they experienced a loss of self-esteem as a result of the study.
Resolution: Confidentiality
Confidentiality and the right to privacy protect participants from psychological harm. Confidentiality
means that participants are protected in the write-up of the research as they will not be named and that their
data will only be known to the researcher. Confidentiality is reassuring, particularly when the data is of a
sensitive nature. Right to privacy is a concern when conducting observational research as the participant
should not be observed without informed consent unless it is a situation in which they could be observed by
strangers in everyday life.
Right to withdraw
Participants should be informed of their right to withdraw at the outset of the research so that if they are in
any way distressed they can leave the study, which minimises any physical or psychological harm.
Debriefing
Debriefing can be used to reduce any distress that might have been caused by the experiment. Participants
should leave the study in the same state of mind as when they arrived. Do you think this is always
achieved?
Evaluation of the BPS ethical guidelines
• Provides clear guidance. The strength of the BPS Code of Ethics is that it provides clear guidance.
• Has limited force. However, a weakness is that the code is not law and so has limited force. The penalty (disbarment from the BPS) is perhaps not severe enough. It also seems that breaches of the code can be justified and that decisions as to whether research is justifiable can be researcher biased.
• Local ethical committees. This has led to the increasingly widespread development of local ethical committees. This means the decision is no longer as researcher biased, but the committees are not without bias as they mostly include fellow psychologists, who might be biased in favour of research. Thus, committees should include non-psychologists and non-expert members of the public.
Cost–benefit analysis
The double-obligation dilemma
The cost–benefit analysis is a safeguard that should precede all research. It involves weighing up whether
the ends/findings (i.e. the benefits of the research in terms of increased understanding and applications)
justify the means/methods used to gain the data (i.e. the costs, such as harm to participants). A cost–benefit
analysis raises a double-obligation dilemma because researchers have an obligation both to their
participants and to society.
Evaluation of the cost–benefit analysis
• Difficult to predict outcomes. The cost–benefit analysis has a number of weaknesses, including the fact that it is difficult to predict outcomes (and so the potential costs and benefits) because the outcomes of research are not always clear at the outset.
• Difficult to assess. Another problem is that the assessment of costs and benefits is difficult, as it is hard to quantify costs and benefits, and such assessments may be open to researcher bias and value judgements. The cost–benefit analysis is accused of routinely favouring society over the participants.
So what does this mean?
There are many decisions to be made in the implementing of research, beginning with which research
method is appropriate based on the strengths and weaknesses of the method. Then there are design issues to
do with the implementing of the different methods, issues of sampling, reliability, and validity, and ethical
considerations.
Over to you
Please see the Example Research Question for this chapter.