 The Experiment
 Chapter 7
 Doing Experiments In Everyday Life
 Experiments in psychology use the same logic that guides experiments in biology or
engineering.
 Experimental research is strongest for testing causal relationships.
 Experiments most clearly satisfy the three conditions needed to demonstrate causality—
temporal order, association, and no alternative explanations.
 Doing Experiments In Everyday Life
 You can do two types of comparisons (both are sketched in the example after these lists):
1. Before-and-after comparison – single group – repeated measures
2. Side-by-side comparison – multiple groups – independent groups
 You do three things in an experiment:
1. Start with a cause-effect hypothesis,
2. Modify a situation or introduce a change,
3. Compare outcomes with and without the modification.
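To make the two comparison strategies concrete, here is a minimal Python sketch; the scores are invented purely for illustration and are not from the text:

    # Hypothetical scores, invented for illustration only.

    # 1. Before-and-after comparison: one group, measured twice (repeated measures).
    before = [4, 5, 6, 5]              # scores before the change is introduced
    after = [6, 7, 7, 6]               # scores from the same people after the change
    change = sum(after) / len(after) - sum(before) / len(before)

    # 2. Side-by-side comparison: two independent groups, each measured once.
    treated = [6, 7, 7, 6]             # group that received the modification
    untreated = [4, 5, 6, 5]           # group that did not
    difference = sum(treated) / len(treated) - sum(untreated) / len(untreated)

    print(change, difference)          # both estimate the effect of the modification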
 What Questions Can You Answer With the Experimental Method?
 Research questions most appropriate for an experiment fit its strengths and limitations.
These include:
1. a clear and simple logic,
2. the ability to isolate a causal mechanism,
3. a focus on only two or three variables and a narrow scope,
4. practical and ethical limits on the situations you can impose on human
participants.
 Do You Speak The Language Of Experimental Design?
 Steps in Conducting an Experiment
1. Begin with a straightforward hypothesis appropriate for experimental research.
2. Decide on an experimental design to test the hypothesis within practical
limitations.
3. Decide how to introduce the independent variable.
4. Develop a valid and reliable measure of the dependent variable.
5. Set up an experimental setting and conduct a pilot test of the variables.
6. Locate appropriate participants.
 Do You Speak The Language Of Experimental Design?
 Steps in Conducting an Experiment (cont.)
7. Randomly assign participants to groups and give careful instructions.
8. Gather data for the pretest measure of the dependent variable.
9. Introduce the independent variable to the experimental group only and monitor
all groups.
10. Gather data for the posttest measure of the dependent variable.
11. Debrief the participants.
12. Examine the data collected and compare the groups statistically to determine
whether the data support the hypothesis (see the sketch below).
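As a minimal illustration of the final step, the sketch below compares posttest scores for an experimental and a control group with an independent-samples t test; the data are hypothetical and the example assumes the SciPy library is available:

    from scipy import stats

    # Hypothetical posttest scores on the dependent variable (invented for illustration).
    experimental = [7, 8, 6, 9, 7, 8]
    control = [5, 6, 5, 7, 6, 5]

    # Independent-samples t test: is the difference in group means larger than chance allows?
    t, p = stats.ttest_ind(experimental, control)
    print(f"t = {t:.2f}, p = {p:.3f}")  # a small p value is consistent with the hypothesis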
 Do You Speak The Language Of Experimental Design?
 Managing Experiments
1. Isolation of effects of independent variable
2. Elimination of alternative explanations
3. Confederates = people who work for an experimenter and mislead participants
by pretending to be another participant or an uninvolved bystander.
 Why Assign People Randomly?
 The purpose of random assignment is to create equivalent groups.
1. Random Assignment = Sorting research participants into two or more groups using a
mathematically random process (a minimal sketch appears below).
 Matching versus Random Assignment
1. True matching on more than one or two characteristics is nearly impossible.
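A minimal sketch of random assignment; the participant IDs are hypothetical, and Python's random module supplies the mathematically random process:

    import random

    # Hypothetical roster of recruited participants.
    participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

    random.shuffle(participants)               # random ordering, so groups are equivalent on average
    half = len(participants) // 2
    experimental_group = participants[:half]   # will receive the independent variable
    control_group = participants[half:]        # will not (or receives a very low level of it)

    print(experimental_group)
    print(control_group)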
 Do You Speak The Language Of Experimental Design?
 A true experiment includes:
1. Independent variable
2. Dependent variable
3. Posttest
 Posttest = A measure of the dependent variable after the independent
variable has been introduced in an experiment.
 Posttest-Only Design
 Do You Speak The Language Of Experimental Design?
 A true experiment includes (cont):
Pretest (in some designs)
 Pretest = A measure of the dependent variable prior to introducing the
independent variable in an experiment.
 Lets you verify that groups are equivalent before the experiment starts
 Experimental mortality (dropout) can be assessed
 Equivalence of small sample groups can be assessed
 Pretest scores can be used to select participants for the experiment
 Disadvantages: time consuming, awkward to administer, and may sensitize
participants (lowering external validity and inducing demand characteristics)
 Do You Speak The Language Of Experimental Design?
 A true experiment includes (cont):
Experimental group (in some designs)
 Experimental group = In an experiment with multiple groups, a group of
participants that receives the independent variable or a high level of it.
Control group (in some designs)
 Control Group = In an experiment with multiple groups, a group of
participants that does not receive the independent variable or a very low
level of it.
Random assignment to groups
 Do You Speak The Language Of Experimental Design?
 Types of Experimental Design
 Independent groups design = An experimental design in which you use two or more
groups and each group receives a different level of the independent variable.
 Repeated Measures Design = An experimental design with a single group of participants
that receives the different levels of the independent variable one after another.
 REPEATED MEASURES DESIGN
 Counterbalancing
 Complete counterbalancing
 Latin squares
 Time Interval Between Treatments
 Choosing Between Independent Groups and Repeated Measures Design
 COUNTERBALANCING
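Complete counterbalancing uses every possible order of the conditions, which is why it becomes impractical as the number of conditions grows (k conditions require k! orders). A minimal sketch with hypothetical condition labels:

    from itertools import permutations

    conditions = ["A", "B", "C"]                 # hypothetical treatment conditions
    orders = list(permutations(conditions))      # 3! = 6 possible orders

    # Assign each ordering to a different subset of participants.
    for i, order in enumerate(orders, start=1):
        print(f"Order {i}: {' -> '.join(order)}")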
 Repeated Measures Design
 Advantages and Disadvantages of Repeated Measures Design
 Advantages
 Fewer participants are needed
 Extremely sensitive to statistical differences (see the sketch below)
 Participant characteristics are identical across conditions because each person
serves as his or her own control
 Disadvantages
 Order effect
 Practice effect
 Fatigue effect
 Contrast effect
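The sensitivity advantage comes from each person serving as their own control: a paired (repeated measures) test removes variability due to individual differences. A minimal sketch with invented scores, assuming SciPy is available:

    from scipy import stats

    # Hypothetical scores from the same six people under two conditions.
    condition_a = [10, 14, 9, 16, 12, 11]
    condition_b = [12, 15, 11, 18, 13, 13]

    # Repeated measures: the paired test compares each person with themselves.
    t_rel, p_rel = stats.ttest_rel(condition_a, condition_b)

    # For contrast, treating the same scores as two independent groups ignores the pairing.
    t_ind, p_ind = stats.ttest_ind(condition_a, condition_b)

    print(f"paired p = {p_rel:.3f}, independent p = {p_ind:.3f}")  # the paired p is smaller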
 Do You Speak The Language Of Experimental Design?
 Types of Experimental Design
 Experimental design = How parts of an experiment are arranged, often in one of
the standard configurations.
 True Experimental Designs
 Classical experimental design = An experimental design that has all key elements
that strengthen its internal validity: random assignment, control and experimental
groups, and a pretest and posttest.
 LATIN SQUARE WITH FOUR CONDITIONS
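A minimal sketch of one way to build a 4 × 4 Latin square, by cyclic rotation, so that each condition appears exactly once in every row (presentation order) and every column (serial position). The condition labels are hypothetical, and a balanced Latin square, which also controls immediate carryover effects, uses a different row pattern:

    conditions = ["A", "B", "C", "D"]   # hypothetical condition labels
    n = len(conditions)

    # Row r is the condition list rotated r places to the left.
    square = [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

    for row in square:
        print(" -> ".join(row))         # A-B-C-D, B-C-D-A, C-D-A-B, D-A-B-C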
 Do You Speak The Language Of Experimental Design?
 Types of Experimental Design
 True Experimental Designs
 Factorial Design = An experimental design in which you examine the impact of
combinations of two or more independent variable conditions.
 Main Effects = The effect of a single independent variable on a dependent
variable.
 Interaction Effects = The effect of two or more independent variables in
combination on a dependent variable that is beyond, or different from, the effect
that each has alone (a worked 2 × 2 example follows).
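To make the distinction between main effects and an interaction concrete, here is a minimal worked sketch for a hypothetical 2 × 2 factorial design; all cell means are invented for illustration:

    # Hypothetical cell means on the dependent variable for a 2 x 2 design.
    means = {
        ("A_low", "B_low"): 4.0,
        ("A_low", "B_high"): 6.0,
        ("A_high", "B_low"): 5.0,
        ("A_high", "B_high"): 9.0,
    }

    # Main effect of A: average of the high-A cells minus average of the low-A cells.
    main_a = (means[("A_high", "B_low")] + means[("A_high", "B_high")]) / 2 \
        - (means[("A_low", "B_low")] + means[("A_low", "B_high")]) / 2

    # Main effect of B, averaged across the levels of A.
    main_b = (means[("A_low", "B_high")] + means[("A_high", "B_high")]) / 2 \
        - (means[("A_low", "B_low")] + means[("A_high", "B_low")]) / 2

    # Interaction: the effect of B is not the same at the two levels of A.
    effect_b_at_low_a = means[("A_low", "B_high")] - means[("A_low", "B_low")]
    effect_b_at_high_a = means[("A_high", "B_high")] - means[("A_high", "B_low")]
    interaction = effect_b_at_high_a - effect_b_at_low_a

    print(main_a, main_b, interaction)   # 2.0, 3.0, 2.0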
 Do You Speak The Language Of Experimental Design?
 Preexperimental designs = Experimental designs that lack one or more parts of the
classical experimental design.
 One-Shot Case Study Design (one-group posttest-only design)
 One-Group Pretest-Posttest Design
 Static Group Comparison (posttest-only nonequivalent group design)
 Do You Speak The Language Of Experimental Design?
 Quasi-experimental Designs = Experimental Designs that approximate the strengths of
the classical experimental design but do not contain all its parts.
 Interrupted Time Series.
 Equivalent Time Series.
 Do You Speak The Language Of Experimental Design?
 Design notation = A symbol system to express the parts of an experimental design
with the symbols X, O, and R:
 O = observation of the dependent variable
 X = introduction of the independent variable
 R = random assignment
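For example, one common way to write the classical experimental design in this notation (rows are groups, time runs left to right) is:
    R   O   X   O   (experimental group: pretest, treatment, posttest)
    R   O       O   (control group: pretest, no treatment, posttest)
The posttest-only design drops the first O in each row.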
 Do You Speak The Language Of Experimental Design?
 Experimental Validity Inside And Out
 Looking at an Experiment’s Internal Validity
 Internal Validity = The ability to state that the independent variable was the one
sure cause that produced a change in the dependent variable.
 Experimental Validity Inside And Out
 Threats to Internal Validity
 Selection Bias
 History
 Maturation
 Testing
 Experimental Mortality
 Contamination or Diffusion of Treatment
 Experimental Validity Inside And Out
 Threats to Internal Validity
 Experimenter Expectancy
 Double blind experiment = An experimental design to control
experimenter expectancy in which the researcher does not have direct
contact with participants. All contact is through assistants from whom
some details are withheld.
 Placebo = A false or non-effective treatment given in place of the independent
variable so that participants cannot tell whether they received the real treatment.
 Experimental Validity Inside And Out
 External Validity and Field Experiments
 External validity = An ability to generalize experimental findings to events and
settings beyond the experimental setting itself.
 Threats to external validity:
 Participants are not representative
 Artificial setting
 Artificial treatment
 Experimental Validity Inside And Out
 External Validity and Field Experiments
 Threats to external validity (cont)
 Reactivity = participants modifying their behavior because they are aware
that they are in a study.
• Hawthorne effect = a type of experimental reactivity in which
participants change due to their awareness of being in a study and
the attention they receive from researchers.
 Experimental Validity Inside And Out
 External Validity and Field Experiments
 Field experiment = An experiment that takes place in a natural setting and over
which experimenters have limited control.
 Less internal validity
 More external validity
 Experimental Validity Inside And Out
 Natural experiments = Events that were not initially planned to be experiments but
permitted measures and comparisons that allowed the use of an experimental logic.
 Also called ex post facto—after the fact
 Experimental Validity Inside And Out
 Practical techniques to carry out effective experiments
 Planning and Pilot Tests
 Instructions to Participants
 Post-experiment Interview
 Debrief = An interview or talk with participants after an experiment ends in
which you reveal any deception that was used and try to learn how they
understood the experimental situation.
 Making Comparisons And Looking At Experimental Results
 How To Be Ethical In Experiments
 Experimenters must carefully consult with IRBs.
 Deception is acceptable BUT dishonesty is NOT!
 Debriefing is a MUST.
 References:
 Cozby, P. C. (2006). Methods in behavioral research (9th ed.). Boston, MA: McGraw-Hill
Higher Education.
 Neuman, W. L. (2009). Understanding research. Boston, MA: Pearson.