CRIM 430 Lecture 4: Experimental and Non-Experimental Research

Research Designs
- Experimental design (the classical experiment)
  - Experimental group (receives the stimulus) and control group (does not receive the stimulus)
  - Independent variable = the experimental stimulus; dependent variable = the effect of the stimulus
  - Pretesting (prior to the stimulus) and posttesting (following implementation of the stimulus)
- Non-experimental design
  - Does not include a comparison group
  - Contains an independent and a dependent variable
  - May include a stimulus
  - May not include pre/posttesting

The Classic Experiment
- A sample is placed into a treatment group and a control group using random assignment
- Pretest: includes measures of the dependent variable in the present and past
- Treatment group receives the stimulus; control group receives no stimulus
- Posttest: includes measures of the dependent variable since pretesting
- Compare results across the two groups (a simulation sketch appears at the end of these design notes)

Non-Experimental Research Design
- With a stimulus: the sample forms a single treatment group; administer a pretest, introduce the stimulus, administer a posttest, then compare pretest and posttest results
- Without a stimulus: measure the dependent and independent variables in the sample and examine the relationship between the variables; can use a retrospective or prospective approach

Quasi-Experimental Designs
- Classical experiments are not always possible
- Quasi-experimental designs provide alternatives to the classical/experimental design
- Three main types: nonequivalent group, cohort, and time-series designs

Quasi-Experimental: Nonequivalent Group Design
- Same as experimental designs except that groups are not selected randomly
  - Group placement by convenience
  - Group placement by first come, first served
  - Group placement by matching cases on particular characteristics (e.g., gender, age)
- Groups are not considered statistically equivalent; thus, results are subject to bias and inaccuracies
- Groups = treatment/experimental group and comparison group

Quasi-Experimental: Cohort Designs
- Two different cohorts form the experimental group and the comparison group
- Only one of the cohorts receives the stimulus
- Assumption: factors influencing the creation of one cohort are not significantly different from those influencing the second cohort (within limitations)

Quasi-Experimental: Time-Series Designs
- Examine a series of observations on some variable over time
- Interrupted time series: observations are compared before and after an intervention is introduced (see the second sketch below)
- Can be used with or without a comparison group
- Interpretation must be done carefully, after adequate time has passed and with careful consideration of patterns in the data

Validity
- Validity is critical to assessing whether a study is strong or weak
- Validity = accuracy
- Internal validity: the extent to which observed changes in the dependent variable can be attributed to the stimulus rather than to some other factor; it is threatened when conclusions drawn from experimental results do not accurately reflect what actually occurred in the study

Types of Validity, Cont'd.
- Construct validity: extent to which the measures we use accurately capture the real-world concepts they are intended to represent
- External validity: extent to which research findings from one study apply to other settings (e.g., different cities, populations, etc.)
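To make the logic of the classic experiment concrete, here is a minimal Python sketch using entirely hypothetical data: subjects are randomly assigned to treatment and control groups, both groups are pretested and posttested, and the average change in each group is compared. The sample size, the scores, and the assumed 5-point stimulus effect are illustrative assumptions, not figures from the lecture.

```python
import random
from statistics import mean

random.seed(42)

# Hypothetical pretest scores on some dependent variable (e.g., a risk score)
sample = [{"id": i, "pretest": random.gauss(50, 10)} for i in range(200)]

# Random assignment: shuffle the sample, then split it into two equal groups
random.shuffle(sample)
treatment, control = sample[:100], sample[100:]

# Stimulus and posttest: in this toy simulation the stimulus lowers scores by
# about 5 points in the treatment group; both groups also drift a little at random
for subject in treatment:
    subject["posttest"] = subject["pretest"] - 5 + random.gauss(0, 3)
for subject in control:
    subject["posttest"] = subject["pretest"] + random.gauss(0, 3)

# Compare results: average change (posttest minus pretest) in each group
def avg_change(group):
    return mean(s["posttest"] - s["pretest"] for s in group)

print(f"Treatment group change: {avg_change(treatment):+.2f}")
print(f"Control group change:   {avg_change(control):+.2f}")
print(f"Estimated stimulus effect: {avg_change(treatment) - avg_change(control):+.2f}")
```

Because assignment is random, any pre-existing differences between the groups should average out, so the gap between the two change scores can be read as the effect of the stimulus.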
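A companion sketch of an interrupted time-series comparison: a single series of observations is split at the intervention point and the average level before and after is compared. The monthly burglary counts and the intervention month are made-up values used only to illustrate the pre/post comparison.

```python
from statistics import mean

# Hypothetical monthly counts of reported burglaries; a new patrol strategy
# (the intervention) is introduced after month 12
monthly_counts = [88, 91, 85, 90, 94, 89, 92, 87, 90, 93, 88, 91,   # pre-intervention
                  84, 80, 78, 75, 77, 74, 73, 76, 72, 70, 71, 69]   # post-intervention
intervention_month = 12

pre = monthly_counts[:intervention_month]
post = monthly_counts[intervention_month:]

# Simplest comparison: average level before vs. after the intervention.
# A careful analysis would also examine trends and seasonal patterns, and
# ideally a comparison series, before attributing the drop to the stimulus.
print(f"Mean before intervention: {mean(pre):.1f}")
print(f"Mean after intervention:  {mean(post):.1f}")
print(f"Change in average level:  {mean(post) - mean(pre):+.1f}")
```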
Threats to Internal Validity
- To increase the validity of a study, it is best to use an experimental design
- Experimental designs reduce the likelihood that the validity of a study will be threatened
- There are twelve primary threats that must be considered when evaluating the quality of a study

12 Threats to Validity
- History: historical events that occur during the course of a study and potentially impact study results
- Maturation: change within the subjects themselves that potentially impacts study results
- Testing: the potential impact of testing and retesting in and of itself
- Instrumentation: using different measures of the dependent variable at pretest and posttest, or changes in data collection over time (e.g., record keeping)

12 Threats, Cont'd.
- Statistical regression: subjects who start at the extreme ends of the spectrum are highly likely to fluctuate in behavior naturally; effects are then erroneously attributed to the stimulus rather than to normal behavior patterns
- Selection biases: judgmental selection of respondents, e.g., "creaming" (picking the cream of the crop)
- Experimental mortality: subjects drop out of the sample before the study is over
- Causal time order: confusion or ambiguity over whether the stimulus came before the dependent variable

12 Threats, Cont'd.
- Diffusion or imitation of treatments: treatment and control group subjects are in communication and potentially influence each other's behavior
- Compensatory treatment: the control group attempts to circumvent what it is being denied (i.e., the stimulus)
- Compensatory rivalry: the control group works harder than it otherwise would in order to keep pace with the treatment group
- Demoralization: control group subjects give up because they do not have access to the stimulus

Validity and Research Design
- Threats to validity can impact all types of research designs
- No design is perfect, because human behavior is very complex
- Designs fall along a continuum from stronger to weaker and from higher to lower validity:
  - Experimental designs (stronger; higher validity)
  - Quasi-experimental designs
  - Non-experimental designs (weaker; lower validity)