
PSYC 6213 - Week 1

Goodwin, K. A., & Goodwin, C. J. (2017). Research in psychology: Methods and designs (8th ed.). Hoboken,
NJ: John Wiley & Sons.
Chapter 1: Scientific Thinking in Psychology

Provides a solid foundation for understanding the information you will encounter in
other psychology courses in more specific topic areas
 Knowledge of research methods will make you a more informed and critical thinker
(better able to evaluate research and claims about psychology that appear to be based on research)
 Critical thinking: a form of precise thinking in which a person reasons about relevant
evidence to draw a sound or good conclusion
 We rely on authority as a source of knowledge
 Using reason
 Experiences in the world: empiricism  the process of learning things through direct
observation or experience, and reflection on those experiences
 Experiences are limited by our interpretations of them – bias
 Confirmation bias: a tendency to seek and pay special attention to information that
supports one’s beliefs while ignoring information that contradicts a belief
 Availability heuristic: occurs when we experience unusual or very memorable events
and then overestimate how often such events typically occur
 The most reliable way to develop a belief is through the method of science, which concerns
real things whose characters are entirely independent of our opinions about them
Science as a way of knowing
 Determinism: means that events, including psychological ones, have causes
 Discoverability: means that by using agreed-upon scientific methods, these causes can
be discovered with some degree of confidence
 There are multiple factors influencing behaviour
 Observing and drawing conclusions (experiences subjective to biases)
 Scientists' systematic observations involve:
o Precise definitions of the phenomena being measured
o Reliable and valid measuring tools that yield useful and interpretable data
o Generally accepted research methodologies
o System of logic for drawing conclusions and fitting those conclusions into general
theories
 A way of knowing whose procedures result in knowledge that can be publicly verified
 Objectivity: eliminating such human factors as expectations and bias
 Scientists cannot separate themselves from their already existing biases, attitudes,
beliefs
 Science is a self-correcting enterprise and its conclusions are not absolute
 Empirical questions: are those that can be answered through the systematic
observations and techniques that characterize the scientific methodology
 Theory: a set of statements that summarize what is known about some phenomena and
propose working explanations for those phenomena

Skeptical optimists: open to new ideas and optimistic about using scientific methods to
test these ideas, but at the same time tough-minded, and won't accept claims without
good evidence
Psychological science and pseudoscience
 Pseudoscience: applied to any field of inquiry that appears to use scientific methods and
tries hard to give that impression, but is actually based on inadequate, unscientific
methods and makes claims that are generally false or, at best, overly simplistic.
 Anecdotal evidence: instances that seem to provide evidence for some phenomenon
 A problem with anecdotal evidence: instances that don't fit a claim can simply be ignored
 Pseudoscience is characterized by: a false association with true science, misuse of the rules
of evidence by relying exclusively upon anecdotal data, a lack of specificity that avoids a
true test of the theory, and an oversimplification of complex processes
Goals of research psychology:
 Describe: identify regularly occurring sequences of events, including both stimulus or
environmental events and response or behavioural events
 Prediction: regular and predictable relationships exist for psychological phenomena
 Explain: to explain some behaviour is to know what caused it to happen; the concept of
causality is immensely complex
 Apply: refers to the various ways of applying the principles of behaviour learned
through research
A Passion for research in psychology:
 Eleanor Gibson  honoured for lifetime of research in developmental psychology
 B.F. Skinner  work on operant conditioning created an entire subculture within
experimental psychology
Chapter 2: Ethics in Psychology Research
Developing the code of ethics for psychological science:
 Nuremberg trials (those associated with various war atrocities)
o Emphasizing the importance of voluntary consent from individuals involved in
medical research
 Some ethical issues in the past were research participants not being treated well
 Stanley Milgram  studies which were questioned about research ethics
o Studying the tendency to obey authority (in the guise of a study on the effects of
physical punishment on learning)
o Got volunteers to obey commands from an authority figure (a member of the
research team)
o Participants were told to administer high-voltage shocks (no actual shocks were
delivered) to other volunteers attempting to complete a memory test
o High number of volunteers complied with the “orders” of the experimenter to
deliver the shock
 Many participants became quite distressed
o Research was criticized for putting volunteers under high levels of stress,
producing possible long-term effects on self-esteem, dignity, and distrust


 National Research Act in 1974  created the National Commission for the Protection of Human
Subjects of Biomedical and Behavioural Research
o Respect for persons
o Beneficence
o Justice
Now the ethics code for research includes 5 general principles
o Beneficence and nonmaleficence  weighing the benefits and the costs of the
research they conduct and seeking to achieve the greatest good in their research
with little harm done to others
o Fidelity and responsibility  researchers remain constantly aware of their responsibility
to society and are reminded to uphold the highest standards
o Integrity  honest in all aspects of the research
o Justice  treat everyone with fairness and maintain a level of expertise that
reduces the chance of their work showing any form of bias
o Respect for people's rights and dignity  vigorous in their efforts to safeguard
confidentiality and protect the rights of those volunteering as research
participants
Chapter 3: Developing ideas for research in psychology
 Basic research: studies topics such as perception, learning, cognition, and basic
neurological and physiological processes as they relate to psychological phenomena
 Applied research: direct and immediate relevance to the solution of real-world problems
 Laboratory research: the research has greater control; conditions of the study can be
specified more precisely, and participants can be selected and placed in the different
conditions more systematically
 Field research: environments matching situations we encounter in everyday life.
 Quantitative research: data are collected and presented in the form of numbers – average
scores, percentages, graphs, tables, etc.
 Qualitative research: includes studies that collect interview information, detailed case
studies, observational studies
 Operationism: defining concepts in objective, precise, and measurable terms
 Parsimonious: good theories include the minimum number of constructs and assumptions
needed to explain the phenomenon adequately and predict future research outcomes.
 Developing research from other research  extension of data and theory
Reviewing the literature:
 Use electronic databases to find similar research
 Read and reread the abstract

Chapter 4: Sampling, Measurement, and Hypothesis Testing
Evaluating Measures:
 Reliability: whether the results are repeatable when the behaviours are remeasured
 Reliability is essential in any measure; if there is a lot of measurement error, reliability is low, and vice versa

Reliability is assessed more formally in research that evaluates the adequacy of any type
of psychological test
 Reliability is essential in any measurement
 Without it, there is no way to determine what a score on any one particular measure
means
 Test-retest  on the SAT, if the first time a student received 1100, they're unlikely
to receive a score close to 1800 the second time.
 Validity: whether it measures what it is designed to measure
 Content validity: whether or not the actual content of the items on a test makes sense
in terms of the construct being measured
 Face validity: not actually a "valid" form of validity at all. Concerns whether the measure
seems valid to those who are taking it, and it is important in the sense that we want
those taking our tests and filling out our surveys to treat the task seriously
 Criterion validity: whether the measure is related to some behavioural outcome or
criterion that has been established by prior research
o Subdivided into two additional forms of validity
o predictive validity: whether the measure can accurately forecast some future
behaviour
o concurrent validity: whether the measure is meaningfully related to some other
measure of behaviour
o Ex. an intelligence test
 it should do a reasonably good job predicting how well a child will do in
school
 it should produce similar results to other known measures of intelligence
 predicting future grades (predictive validity)
 correlating with scores on other tests (concurrent validity)
 Construct validity: whether a test adequately measures some construct; it connects
directly with what is now a familiar concept to you
Scales of Measurement:
 Nominal Scales: the number we assign to events serves only to classify them into one
group or another
o Studies using these scales typically assign people to names of categories
 Ordinal Scales: sets of rankings, showing the relative standing of objects or individuals
o College transcripts
 Interval Scales: extend the idea of rank order to include the concept of equal intervals
between the ordered events
o Research using psychological tests of personality, attitude, and ability provides the
most common examples of studies involving interval scales
o Score of zero is simply another point on the scale
 Ratio Scales: the concepts of order and equal intervals are carried over from the ordinal and
interval scales, but, in addition, the ratio scale has a true zero point
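To make the interval/ratio distinction concrete, here is a small illustration (my own example, not from the textbook) of why the true zero point matters: ratios of scores are only meaningful on a ratio scale.

# Illustrative only: "twice as much" claims require a true zero point
celsius_a, celsius_b = 20, 40                                  # interval scale: zero is arbitrary
kelvin_a, kelvin_b = celsius_a + 273.15, celsius_b + 273.15    # ratio scale: true zero
print(celsius_b / celsius_a)   # 2.0, but 40 C is not "twice as hot" as 20 C
print(kelvin_b / kelvin_a)     # ~1.07, the physically meaningful ratio
weight_a, weight_b = 20, 40    # kilograms have a true zero, so 40 kg is twice 20 kg
print(weight_b / weight_a)     # 2.0, and this ratio is meaningful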
Statistical Analysis:
 Descriptive statistics: summarize the data collected from the sample of participants in
the study
 Inferential statistics: allow you to draw conclusions about your data that can be applied
to the wider population
 Mean: the average  a measure of central tendency
 Median: score in the exact middle of the set of scores
 Mode: score occurring most frequently in a set of scores
 Outliers: scores that are far removed from other scores in a data set
 Range: difference between the high and low scores
 Variance: represents how distributed the scores are, relative to the mean
 Standard deviation: estimate of the average amount by which the scores in the sample
deviate from the mean score
 Effect size: provides an estimate of the magnitude of the difference among sets of scores,
while taking into account the amount of variability in the scores
 Meta-analysis: uses effect-size analyses to combine the results from several
experiments that use the same variables, even though the variables are likely to have
different operational definitions
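A minimal sketch (plain Python, made-up scores) of how the descriptive statistics above are computed for two small hypothetical groups; the data and variable names are illustrative, not from the textbook.

# Illustrative only: two small, made-up samples of scores
import statistics as st

group_a = [72, 75, 75, 80, 95]   # hypothetical scores for one group
group_b = [70, 71, 74, 76, 79]   # hypothetical scores for a second group

mean_a = st.mean(group_a)               # central tendency: the average score
median_a = st.median(group_a)           # score in the exact middle when ordered
mode_a = st.mode(group_a)               # most frequently occurring score
range_a = max(group_a) - min(group_a)   # difference between high and low scores
var_a = st.variance(group_a)            # spread of the scores around the mean
sd_a = st.stdev(group_a)                # average deviation from the mean (sqrt of variance)

# Effect size (Cohen's d, equal-n pooled-SD version): magnitude of the group
# difference relative to the variability in the scores
pooled_sd = ((st.variance(group_a) + st.variance(group_b)) / 2) ** 0.5
cohens_d = (st.mean(group_a) - st.mean(group_b)) / pooled_sd

print(mean_a, median_a, mode_a, range_a, round(sd_a, 2), round(cohens_d, 2))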
Graham, P. (n.d.). The perils of obedience. Retrieved from http://www.paulgraham.com/perils.html

Student volunteer (Brandt) was aware of the ethics of taking part in a research study and
aware that anyone can leave a study of their own free will!
Griggs, R. A., Blyler, J., & Jackson, S. L. (2020). Using research ethics as a springboard for teaching Milgram’s
obedience study as a contentious classic. Scholarship of Teaching and Learning in Psychology, 6(4), 350–356.
https://doi.org/10.1037/stl0000182




Participants were not appropriately debriefed following experimental sessions
Apparently, the majority of participants didn't know that they hadn't actually shocked
anyone until reading about the research
Milgram most likely deliberately misrepresented the postexperimental debriefing in his
publications to protect not only his credibility as a responsible researcher but also the
ethical integrity and possible future of obedience experiments
The experimenter was supposed to use four prods to encourage participants to continue;
after the fourth, the experiment was supposed to be over  this wasn't always the case
Hilbig, B. E., Thielmann, I., & Böhm, R. (2022). Bending our ethics code: Avoidable deception and its
justification in psychological research. European Psychologist, 27(1), 62.



Deception of participants in psychological research is addressed by the ethics code,
explaining the conditions under which deception may be justifiable
Deception isn't to be used unless it is justified that nondeceptive alternatives are not feasible
(American Psychological Association, 2017, section 8.07)
Deception should be used only when a problem is significant and cannot be investigated in
any other way


Some argue that deception must be justifiable because it is so prevalent in both
research and everyday life
Use of deception as a last resort
Test-Retest reliability:
 Assuming scores would be consistent across time
Internal Consistency:
 Consistency of people's responses across the items on a multiple-item measure; the
responses are all supposed to reflect the same underlying construct
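A minimal sketch (Python, hypothetical scores) of how these two forms of reliability might be checked: test-retest reliability as the correlation between time-1 and time-2 scores, and internal consistency as Cronbach's alpha across items. The data and the helper function are illustrative assumptions, not from any study cited here.

# Illustrative only: made-up scores
import statistics as st

def pearson_r(x, y):
    # correlation between two sets of scores (e.g., time 1 vs. time 2)
    mx, my = st.mean(x), st.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ss_x = sum((a - mx) ** 2 for a in x)
    ss_y = sum((b - my) ** 2 for b in y)
    return cov / (ss_x * ss_y) ** 0.5

# Test-retest: the same people measured twice; a high r means scores are stable over time
time1 = [1100, 950, 1250, 1020, 1180]
time2 = [1120, 940, 1230, 1050, 1170]
print("test-retest r =", round(pearson_r(time1, time2), 2))

# Internal consistency (Cronbach's alpha): do items on one measure hang together?
# rows = respondents, columns = items meant to tap the same construct
responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
]
k = len(responses[0])                                           # number of items
item_vars = [st.variance([r[i] for r in responses]) for i in range(k)]
total_var = st.variance([sum(r) for r in responses])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print("Cronbach's alpha =", round(alpha, 2))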
Mac Giolla Phadraig, C., Ishak, N. S., Harten, M., Al Mutairi, W., Duane, B., Donnelly, S. E., & Nunn, J. (2021). The Oral
Status Survey Tool: construction, validity, reliability and feasibility among people with mild and moderate intellectual
disabilities. Journal of Intellectual Disability Research, 65(5), 437–451. https://doi.org/10.1111/jir.12820
Research on the oral health of people with intellectual disabilities and the contribution
to health disparities
Construction and evaluation of the Oral Status Survey Tool (OSST)
Used a non-clinical construction and content validation phase and a clinical phase to test
concurrent validity, reliability, and feasibility.
Data and tool used for planning of oral health services for people with IDs
Aim to ensure the tool produced a range of useful data
Content validity: refers to the degree to which elements of an assessment instrument
are relevant to and representative of the targeted construct for a particular assessment
purpose
o Ensures that a new tool consists of appropriate items to adequately cover the
specific domains
o Done by 8 dental experts
Content validity ratio (CVR): a numeric value ranging between -1 and 1 (a worked sketch follows this list)
Concurrent validity: measures how well a new test correlates with an established test
when both are captured at the same time
Other concepts examined: internal and external validity, content validity, and inter-rater reliability
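Worked sketch of a content validity ratio using Lawshe's commonly cited formula, CVR = (n_e - N/2) / (N/2); whether the OSST authors computed it exactly this way is an assumption on my part, and the ratings below are invented. The 8-expert panel size comes from the notes above.

# Illustrative only: Lawshe-style content validity ratio for one candidate item
def content_validity_ratio(n_essential, n_experts):
    # n_essential = experts rating the item "essential"; n_experts = panel size
    return (n_essential - n_experts / 2) / (n_experts / 2)

print(content_validity_ratio(7, 8))   # 0.75 -> near +1, most experts say essential
print(content_validity_ratio(4, 8))   # 0.0  -> exactly half the panel, borderline
print(content_validity_ratio(1, 8))   # -0.75 -> near -1, item likely dropped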