EXPERIMENTS: PART 1

Overview
- Experimental versus observational research
- Variables
- Designs
  - Between-group
  - Within-subject
  - Similarities and differences
  - Mixed-model

Background on Experiments
- An experiment is a study in which a researcher systematically manipulates one variable in order to examine its effect(s) on one or more other variables
- Two components:
  - Includes two or more conditions
  - Participants are randomly assigned to conditions by the researcher
    - Random = equal odds of being in any particular condition
- Examples:
  - People with GAD randomly assigned to three treatments so the researchers can examine which one best reduces anxiety
  - Students assigned to a "mortality salience" or control condition so the researchers can examine the impact on "war support"

Variables
- Independent variable (IV)
  - Manipulated by the researcher
  - Typically categorical
  - Also called a "factor" that has "levels"
    - Factor = type of anxiety treatment
    - Level = CBT (or Psychodynamic, or Control)
- Dependent variable (DV)
  - Outcome variable that is presumably influenced by (depends on the effects of) the independent variable
  - Examples: behavior frequencies, mood, attitudes, symptoms
  - Typically continuous
- Confounds (extraneous variables, "third" variables)
  - Occur when there are unwanted differences (age, gender, researchers, environments, etc.) across experimental conditions
  - Plan:
    - Think of potential confounds up front
    - Control for them methodologically
    - Measure them to examine whether they have an effect
    - Control for them statistically

Experimental Designs
- Three main designs:
  - Between-group design: also called a "between-subjects design" or, if clinically focused, a "randomized controlled trial"
  - Within-subject design: also called a "repeated-measures design"
  - Mixed-model design: combines both of the above

Between-group Design
- IV: 2 or more randomly assigned groups of people
- DV: usually a continuous variable

Within-subject Design
- Any time a study assesses participants on the DV on more than one occasion
- Example: participants go through more than one experimental condition
  [Slide diagram: each participant completes both the Control and the Pill conditions]

Similarities
- Both designs use the same types of analyses
  - p-values obtained from t-tests (if two conditions) or F-tests/ANOVA (if more than two conditions): is the result statistically significant, reliable, trustworthy?
  - Cohen's d used to compute effect size: tells the number of standard deviations by which two groups differ (kind of like r, but on a scale from -∞ to +∞)

  Effect    r      r²      d
  Small     ≥ .1   ≥ .01   ≥ 0.2
  Medium    ≥ .3   ≥ .09   ≥ 0.5
  Large     ≥ .5   ≥ .25   ≥ 0.8

Cohen's d Calculator
- http://www.psychmike.com/calculators.php
- Usually use the first formula, which requires M, SD, and n for each group
- Can also calculate by hand with a simple formula, but it doesn't account for differences in sample size across conditions, so it is less accurate:
  d = (M1 – M2) / s = (mean difference) / (standard deviation),
  where s = the average standard deviation across groups

Calculation
- Example: Does athletic involvement improve physical health?

  SPSS report: 54. Physical Health by 7. High School Athlete
                    No        Yes       Total
  Mean              6.4720    6.7543    6.6367
  N                 125       175       300
  Std. Deviation    1.87331   1.94232   1.91578

- M1 = 6.47, M2 = 6.75, s = (1.87 + 1.94) / 2 ≈ 1.91
- d = (6.47 – 6.75) / 1.91 = –0.28 / 1.91 = –0.15, a weak effect!
- The +/– sign is arbitrary (it only reflects which group was entered first), so it is usually dropped
- Real-world example: a 2014 article in The Lancet (impact factor: 45.2); take-home point from the abstract [shown on slide]
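To make the slide's arithmetic concrete, here is a minimal Python sketch of the same calculation. The function names are my own (they are not from the calculator linked above); it shows both the simple averaged-SD formula used on the slide and a pooled-SD version that weights each group's SD by its sample size.

```python
# Cohen's d for the athletic-involvement example, two ways.
# The means, SDs, and ns come from the SPSS report above.

def cohens_d_simple(m1, m2, sd1, sd2):
    """Mean difference divided by the plain average of the two SDs (the hand formula)."""
    s = (sd1 + sd2) / 2
    return (m1 - m2) / s

def cohens_d_pooled(m1, m2, sd1, sd2, n1, n2):
    """Mean difference divided by the pooled SD, which accounts for unequal group sizes."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / pooled_var ** 0.5

# Non-athletes (n = 125) vs. athletes (n = 175)
print(round(cohens_d_simple(6.47, 6.75, 1.87, 1.94), 2))            # -0.15
print(round(cohens_d_pooled(6.47, 6.75, 1.87, 1.94, 125, 175), 2))  # -0.15
```

Here the two formulas agree because the group sizes and SDs are similar; with very unequal groups the pooled version is the safer choice, which is why the online calculator's first formula (using M, SD, and n) is preferred.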
Differences
- A between-group design is required when it is impossible or impractical to put participants through more than one condition
- A within-subject design is more powerful: it is more likely to yield a significant p-value and bigger effect sizes. Why? It allows each participant to serve as their own control, canceling out a lot of cross-participant variability (see the sketch at the end of this section)
- A between-group design therefore requires more people
- A within-subject design is prone to ordering effects (the order of conditions can affect results), such as progressive effects or carryover effects
  - Solution: counterbalancing

Mixed-model Design
- Many different types, but all require:
  - Random assignment of people to different groups
  - Repeated measurement of the dependent variable over time
- Offers the benefits of both designs
- Example: pre-post between-group design
  - Experimental group: pretest → treatment → posttest
  - Control group: pretest → posttest
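To see why "serving as your own control" buys statistical power, here is a small illustrative simulation (my own, not from the slides). It generates scores with large person-to-person differences and a modest treatment effect, then analyzes the same data two ways: as if they came from two independent groups (between-group style) and as paired observations (within-subject style). The paired analysis removes the stable individual differences, so it typically returns a much smaller p-value. Names such as person_baseline are hypothetical.

```python
# Illustration: the same simulated data analyzed between-group vs. within-subject.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30
person_baseline = rng.normal(50, 10, n)   # large individual differences
effect = 3.0                              # modest true treatment effect

control = person_baseline + rng.normal(0, 2, n)
treatment = person_baseline + effect + rng.normal(0, 2, n)

# Between-group style: ignores that the same people produced both scores
print("independent t-test:", stats.ttest_ind(treatment, control))

# Within-subject style: each participant serves as their own control
print("paired t-test:     ", stats.ttest_rel(treatment, control))
```

The trade-off, as noted above, is that repeated measurement invites ordering effects, which counterbalancing (varying the order of conditions across participants) is meant to address.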