Monday, May 13, 2019
Research Methods:
- Collecting Data: Defining the concept under observation
  • An operational definition is the concept defined in an easily measurable way
    - Observation
    - Self-report (survey)
    - Standardized testing
  • Operational definitions of aggression (counted in the sketch below):
    - Number of times a child hits, kicks, or pushes another
    - Self-, parental, or teacher reports of the above
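A minimal sketch in Python of how such an operational definition could be turned into a number; the event codes and the observation record are made up for illustration.

    # Hypothetical sketch: turning the operational definition of aggression
    # ("number of times a child hits, kicks, or pushes another") into a count
    # from coded observation records. Codes and data are invented.
    observed_events = ["hit", "share", "kick", "talk", "push", "hit", "laugh"]

    aggressive_codes = {"hit", "kick", "push"}
    aggression_score = sum(event in aggressive_codes for event in observed_events)

    print(f"Aggressive acts observed: {aggression_score}")  # 4 for this toy record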
- Collecting Data: Naturalistic Observation
  • Naturalistic observation: watching behaviour in real-world settings without trying to manipulate the situation
  • Advantage of naturalistic observation studies: high external validity
    - Extent to which we can generalize findings to real-world settings; how representative of "real life" your observations are. Do your observations really reflect the behaviour you are hoping to observe?
  • Disadvantages of naturalistic observation studies:
    - Reactivity: when an individual's behaviour changes as a reaction to being observed
      • If by observing behaviour you change it, then the behaviour you are observing is not representative of that behaviour under "real-life" conditions
    - Low internal validity: extent to which we can draw cause-and-effect inferences from a study
      • Lack of control over the variables results in the researchers having to "wait for behaviour to unfold"
- Collecting Data: Case Studies
  • Case study: research design that examines one person, or a small number of people, in depth, often over an extended period of time
    - "Examines" = simple observation, interviews, administering surveys, etc.
  • Advantages of case studies:
    - 1) Can be helpful in providing existence proofs: demonstrations that a given psychological phenomenon can occur
    - 2) Provide an opportunity to study rare or unusual phenomena that might be impossible to create
  • Disadvantage of case studies: limited generalizability beyond that case
- Collecting Data: Surveys
  • Surveys sample reported behaviour or opinions from many people (coded numerically in the sketch below):
    - "How much do you enjoy this class?"
      • Not at all -> neutral -> very much
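A minimal sketch, assuming a made-up mapping of response options to numbers, of how Likert-style answers like these might be coded and summarized.

    # Hypothetical example: coding Likert-style survey responses as numbers.
    # The scale values and responses are assumptions for illustration only.
    responses = ["very much", "neutral", "not at all", "very much", "neutral"]

    scale = {"not at all": 1, "neutral": 2, "very much": 3}
    scores = [scale[r] for r in responses]

    mean_enjoyment = sum(scores) / len(scores)
    print(f"Mean enjoyment score: {mean_enjoyment:.2f}")  # 2.20 for this toy data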
  • Surveys are a very useful way of gathering a lot of data from many people in a short space of time. These days, surveys can be posted online, making data even easier to collect
  • One disadvantage of surveys is the difficulty of obtaining representative samples
    - You cannot survey "all people"; in practice, the sample narrows step by step:
      • Everyone alive on the planet
      • All Canadian undergraduates
      • All Ryerson undergraduates
      • All Ryerson undergraduates taking a psychology class
      • All the people in this room
      • All the people willing to fill out the questionnaire
  • So how do we attempt to make our survey samples more representative?
    - Random selection: a procedure that ensures that every person in a population has an equal chance of being chosen to participate (see the sketch below)
    - Larger samples are also generally more representative than smaller samples
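A minimal sketch of random selection in Python; the population list and sample size are hypothetical.

    import random

    # Minimal sketch of random selection, assuming we already have a list of
    # everyone in the population of interest (hypothetical IDs).
    population = [f"student_{i}" for i in range(1, 501)]  # e.g. 500 undergraduates

    # Every member has an equal chance of being chosen for the survey sample.
    sample = random.sample(population, k=50)
    print(sample[:5])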
  • Other issues with survey data
    - How many people in the room think that, theoretically, they could eat a chocolate-covered insect?
    - People don't necessarily answer with what their actual behaviours would be
    - Surveys rely on self-report
      • People may not answer truthfully (social desirability/positive impression management), or they may not answer accurately (memory is not perfect, nor is our ability to predict our own behaviour)
    - Some questions rely on subjective judgments of words like "very" vs. "somewhat"
      • E.g. "How aggressive is your child?" Very/Somewhat/Not at all
      • E.g. "In the last six months, how often has your child punched, kicked, or threatened another child?"
    - "Aggression" means different things to different people
  • What two factors do we need to consider when evaluating results from a survey?
    - Reliability: consistency of measurement (test results)
    - Validity: extent to which a measure assesses what it purports to measure
  • Interrater reliability: the extent to which different people who conduct an interview/administer a survey agree on the measurements made (see the sketch below)
  • Although a test must be reliable in order to be valid, a reliable test can be completely invalid
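One simple index of interrater reliability is percent agreement between two raters; a minimal sketch with made-up ratings (in practice, chance-corrected indices such as Cohen's kappa are often preferred).

    # Hypothetical sketch: two observers rate the same 10 children as
    # "aggressive" (1) or "not aggressive" (0). The ratings are invented.
    rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    percent_agreement = agreements / len(rater_a) * 100
    print(f"Interrater agreement: {percent_agreement:.0f}%")  # 90% here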
- Correlational Research
  • Correlational design: research design that examines the extent to which two variables are associated
    - Assesses the degree of the relationship between two variables
  • 1) Correlations can be positive, negative, or zero
    - When one variable goes up and the other variable goes up = positive correlation
    - When one variable increases and the other variable decreases = negative correlation
    - When one variable goes up and the other is not affected = no correlation
  • Correlation coefficients (r, the statistic psychologists use to measure correlations) range in value from -1.0 to +1.0 (computed in the sketch below)
    - Scatterplot: grouping of points on a two-dimensional graph in which each dot represents a single person's data
      • r = 1.0: as the number of hours spent watching violent TV increases, the level of aggression also increases
      • r = -1.0: as the number of hours spent watching violent TV increases, the level of aggression decreases
      • r = 0: no relation between the two variables
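A minimal sketch of computing r for hypothetical data (hours of violent TV watched vs. aggressive acts observed); NumPy's corrcoef returns the Pearson correlation.

    import numpy as np

    # Hypothetical data for eight children.
    hours_tv = np.array([0, 1, 2, 3, 4, 5, 6, 7])
    aggressive_acts = np.array([1, 2, 2, 4, 5, 5, 7, 8])

    r = np.corrcoef(hours_tv, aggressive_acts)[0, 1]
    print(f"r = {r:.2f}")  # close to +1.0: a strong positive correlation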
  • Correlation is not causation
    - The chicken-and-egg problem
      • Does watching violent TV make you violent? Or do you watch violent TV because you are a violent person?
    - The third-variable problem
    - You cannot make causal claims!
- Experimental Method
  • Experiment: research design characterized by 1) random assignment of participants to conditions and 2) manipulation of an independent variable
    - Random assignment: randomly sorting participants into groups (a minimal sketch follows this list)
      • Experimental group: in an experiment, the group of participants that receives the manipulation
      • Control group: in an experiment, the group of participants that doesn't receive the manipulation
    - The independent variable: the aspect of the situation that we manipulate. It must have two or more levels (i.e. we must be able to vary it) in order to create two or more conditions
    - The dependent (or outcome) variable: the aspect of the situation that we observe and measure
      • Manipulating the level of the independent variable should cause changes in the dependent variable
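A minimal sketch of random assignment, assuming a hypothetical list of participant IDs; shuffling and then splitting gives every participant an equal chance of landing in either condition.

    import random

    participants = [f"child_{i}" for i in range(1, 53)]  # e.g. 52 children (made up)

    random.shuffle(participants)
    midpoint = len(participants) // 2
    experimental_group = participants[:midpoint]  # will receive the manipulation
    control_group = participants[midpoint:]       # will not receive the manipulation

    print(len(experimental_group), len(control_group))  # 26 26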
  • Boyatzis, Matillo & Nesbit (1995): studied a group of 5- to 11-year-olds; the control group watched nothing and the experimental group watched Power Rangers. Both groups were then let into the playground, and the researchers observed the behaviour of the two groups
    - Independent variable: watching violent television
    - Dependent variable: aggression in the playground
      • Operational definition of aggression: yelling insults, threatening other children; hitting, shoving, kicking, or tripping other children; throwing objects at each other; grabbing objects from each other (exclusion: accidents)
    - The control group committed 6 aggressive acts per hour
    - The experimental group committed 40 aggressive acts per hour
    - The experimental group committed roughly 7 times as many aggressive acts as the control group
    - The control group's typical act of aggression: taking away another child's crayon
    - The experimental group's typical act of aggression: a flying karate kick
  • To be able to say that the manipulation of the independent variable caused the observed difference in the dependent variable, we need to be sure that the independent variable is the only thing that differs between the experimental and control groups
  • We need to ensure that there are no confounding variables in our experiment
    - Confound: any variable that differs between the experimental and control groups other than the independent variable
  • Is there a confounding variable (i.e. something else that differs, besides exposure to TV violence) in the MMPR study?
    - Experimental group: violence, excitement
    - Control group: no violence, no excitement
  • Another potential confound is that our two groups of children may have differed in their initial levels of aggression
    - Pre-tests can be used to equate groups to begin with
    - Random assignment: randomly assign individuals to each group
- The Experimental Method: Limitations
  • 1) The placebo effect: improvement resulting from the mere expectation of improvement
    - E.g. participants who receive drug treatments might improve just because they were expecting improvement from receiving treatment
  • How do we control for the placebo effect?
    - Administer a placebo (sugar pill) to the control group
    - Ensure patients are blind to the condition they are in (i.e. experimental or control)
  • 2) Experimenter expectancy effect: phenomenon in which researchers' hypotheses lead them to unintentionally bias the outcome of a study
    - In a single-blind study, the participants are "blind" to the condition that they are in. They do not know whether they are in the experimental group or the control group. They may even be unaware of the true purpose of the study
    - However, the experimenter also has expectancies about the behaviour of the participants
    - An experimenter who knows which group participants are in may bias the results, either intentionally or unintentionally
  • Therefore, in a double-blind study, neither the participants nor the experimenter knows who is in which condition
  • Obviously, we need some record of who has had the experimental and who has had the control treatment, but the experimenter ought not to be aware of who is who when delivering the treatment and measuring the dependent variable (a minimal sketch of this idea follows)
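A minimal sketch of how such a record might be kept so that the experimenter stays blind; the condition codes, participant IDs, and split are invented for illustration.

    import random

    # Conditions are stored under neutral codes; only a third party (not the
    # experimenter running the sessions) holds the key linking codes to conditions.
    condition_key = {"A": "drug", "B": "placebo"}  # held by the third party

    participants = [f"patient_{i}" for i in range(1, 21)]
    random.shuffle(participants)
    assignment = {p: ("A" if i % 2 == 0 else "B")  # experimenter sees only A/B
                  for i, p in enumerate(participants)}

    print(assignment["patient_1"])  # prints "A" or "B", never "drug"/"placebo"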
  • Rosenthal's "bright" and "dull" rats
    - Twelve researchers were given five rats each and told to train them to run a maze: six were told the rats were bred to be "maze bright" and the other six were told that the rats were "maze dull"
    - In reality, the rats were all randomly assigned
    - The "maze bright" rats learned to run the maze more quickly, with fewer errors
  • Demand characteristics: cues that participants pick up from a study that allow them to generate guesses regarding the researcher's hypotheses
  • Generalizability: extent to which your findings can be generalized, or extended/applied, beyond that specific study
    - Excluding expectancies and setting things up so that only the independent variable differs between the experimental and control groups
    - One common concern is the use of undergraduates in experimental research
      • To what extent are undergraduates representative of the broader demographic?
- Ethics in Research
  • All research involving human participants is required to undergo review by a research ethics board (REB) -> to ensure the protection of participants
  • General guidelines
    - Informed consent: informing research participants of what is involved in a study before asking them to participate
    - Debriefing: the process at the end of the research session where participants are informed about the study (purpose, hypotheses, etc.)