VALIDITY
What is validity?
What are the types of validity?
How do you assess validity?
How do you improve validity?
What is validity?
Definition:
Validity refers to whether or not the investigation measures
what it is supposed to measure.
What are the types of validity?
Types of validity – internal and external
Internal – whether or not the study is really testing
what it is supposed to test.
External – to what extent can the findings of the study be
generalised across people, places and time periods.
How do you assess validity?
Internal:
Consider the extent to which we have control over variables.
Also, specifically for self-reports, observations or other tests there
are the following ways of checking validity:
• Face validity
• Concurrent validity
• Predictive validity
External:
• Consider the three aspects of external validity (people, places and
time periods) by looking at where the study took place, when it was
conducted and who the sample were.
• Also, you can look at replications of the study - has the study
been replicated with the same results in a different environment,
at a different time and on a different sample?
How do you improve validity?
Here are some ways to improve internal validity:
• Control for variables (e.g. demand characteristics,
investigator effects, participant variables)
• Standardisation
• Randomisation
To improve external validity:
Replicate the study in the same way but in a different
environment, at a different time and on a different sample
Past exam question and example answer
What is meant by ‘validity’? How could the psychologist
have assessed the validity of the questionnaire used to
measure the severity of symptoms? [4 marks]
E.g.: ‘Validity refers to whether or not the questionnaire
measures what it is supposed to measure (1 mark).
Concurrent validity would involve getting a doctor to
assess the symptoms (1 mark) and seeing how closely
the doctor’s ratings match the score on the questionnaire (1 mark). If the
two matched, the questionnaire would have high validity (1
mark).’
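The correlation step described in this answer could be carried out statistically. A minimal sketch in Python, assuming scipy is available; the scores and variable names are invented purely for illustration:

```python
# Concurrent validity: correlate questionnaire scores with an independent
# measure of the same thing (here, a doctor's ratings of symptom severity).
# The data below are invented purely for illustration.
from scipy.stats import pearsonr

questionnaire_scores = [12, 18, 7, 22, 15, 9, 20, 11]   # symptom severity from the questionnaire
doctor_ratings       = [11, 19, 6, 21, 14, 10, 22, 12]  # same participants rated by a doctor

r, p = pearsonr(questionnaire_scores, doctor_ratings)
print(f"Concurrent validity correlation: r = {r:.2f} (p = {p:.3f})")
# A strong positive correlation suggests the questionnaire has high concurrent validity.
```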
RELIABILITY
What is reliability?
What are the types of reliability?
How do you assess each type of reliability?
How do you improve reliability?
What is reliability?
If a study can be replicated in the same way and it gains
consistent results it can be said to be reliable.
Types of reliability
• Researcher:
Refers to the consistency of the researchers who are
gathering the data
E.g. in observations or self-reports this is called inter-rater reliability.
• Internal:
Refers to the consistency of the measures used within
the study.
• External:
Refers to the extent to which a measure is
consistent from one occasion to another.
Researcher reliability
• Assessing: Compare the observations of two
observers and conduct a correlation to see if the results
are similar (see the sketch after this list).
• Improving: Training, operationalisation and conducting a
pilot study.
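A minimal sketch of the inter-rater correlation check in Python, assuming scipy is available; the observation tallies are invented purely for illustration:

```python
# Inter-rater (researcher) reliability: correlate the tallies of two
# observers who coded the same behaviour. Data are invented for illustration.
from scipy.stats import pearsonr

observer_a = [5, 3, 8, 2, 6, 7, 4, 9]  # behaviour counts recorded by observer A
observer_b = [4, 3, 7, 2, 6, 8, 5, 9]  # the same intervals coded by observer B

r, _ = pearsonr(observer_a, observer_b)
print(f"Inter-rater reliability: r = {r:.2f}")
# A high positive correlation suggests the observers are applying
# the behaviour categories consistently.
```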
Internal reliability
• Assessing: Split-half method
• The observation categories or questions on a
questionnaire should be split in half randomly,
and the two sets of responses for each half
compared – they should indicate the same result.
This can be more formally assessed by carrying
out a correlation (see the sketch after this list).
• Improving:
• Look at particular questions that seem to be giving a
different result and change them until the split-half
method indicates an improved correlation.
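A minimal sketch of the split-half check in Python, assuming scipy is available; the item scores are invented purely for illustration:

```python
# Split-half (internal) reliability: split the questionnaire items in half
# at random and correlate participants' totals on each half.
# The item scores are invented for illustration.
import random
from scipy.stats import pearsonr

# Each row is one participant's scores on a 10-item questionnaire
responses = [
    [3, 4, 2, 5, 3, 4, 2, 5, 3, 4],
    [1, 2, 1, 2, 2, 1, 2, 1, 2, 2],
    [5, 5, 4, 5, 4, 5, 5, 4, 5, 4],
    [2, 3, 2, 3, 2, 3, 2, 3, 3, 2],
    [4, 4, 5, 3, 4, 4, 5, 3, 4, 4],
]

items = list(range(10))
random.shuffle(items)                  # randomly split the items...
half_a, half_b = items[:5], items[5:]  # ...into two halves

totals_a = [sum(person[i] for i in half_a) for person in responses]
totals_b = [sum(person[i] for i in half_b) for person in responses]

r, _ = pearsonr(totals_a, totals_b)
print(f"Split-half reliability: r = {r:.2f}")
# A strong positive correlation between the two halves suggests the
# measure is internally consistent.
```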
External reliability
• Assessing: Test-retest
• Giving the same participant the same questionnaire at
two different times and giving them no feedback after
their first answers; the time gap will need to be
sufficient for them not to remember how they
answered first time but not so long that they have
changed in a way relevant to the questionnaire. The
two sets of answers should be correlated (see the sketch after this list).
• Improving:
• Check individual questions from the test-retest that do
not positively correlate, then alter them and
do another test-retest until they positively correlate.
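A minimal sketch of the test-retest correlation in Python, assuming scipy is available; the scores are invented purely for illustration:

```python
# Test-retest (external) reliability: correlate the same participants'
# scores on the questionnaire taken on two separate occasions.
# Scores are invented for illustration.
from scipy.stats import pearsonr

first_attempt  = [14, 22, 9, 17, 25, 12, 19, 8]   # scores at time 1
second_attempt = [15, 21, 10, 16, 24, 13, 18, 9]  # same people, some weeks later

r, _ = pearsonr(first_attempt, second_attempt)
print(f"Test-retest reliability: r = {r:.2f}")
# A high positive correlation suggests the measure gives consistent results
# from one occasion to the next; questions that weaken the correlation
# are candidates for rewording.
```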