
Experiment Basics: Variables
Psych 231: Research Methods in Psychology

Results




Mean: 80.7%
Median: 83%
Range: 51-98
If you want to go over your exam, set up a time to see me
Exam 1
Turn in your data sheets (pass to front)
 Turn in your consent forms

Class Experiment

You’ve got your theory.


What behavior you want to examine
Identified what things (variables) you think affect that behavior
So you want to do an experiment?


You’ve got your theory.
Next you need to derive predictions from
the theory.


These should be stated as hypotheses.
In terms of conceptual variables or constructs
• Conceptual variables are abstract theoretical entities

Consider our class experiment

Hypotheses:
• What you try to memorize & how you try to memorize it
will impact memory performance.
So you want to do an experiment?



You’ve got your theory.
Next you need to derive predictions from the
theory.
Now you need to design the experiment.

You need to operationalize your variables in terms of how
they will be:
• Manipulated
• Measured
• Controlled

Be aware of the underlying assumptions connecting your
constructs to your operational variables
• Be prepared to justify all of your choices
So you want to do an experiment?

Characteristics of psychological situations


Constants: have the same value for all individuals
in the situation
Variables: have potentially different values for
each individual in the situation
Variables in our experiment:
• Levels of processing
• Type of words
• Memory performance
• time for recall
• kind of filler task given
• pacing of reading the words on the list
•…
Constants vs. Variables

Conceptual vs. Operational


Conceptual variables (constructs) are abstract
theoretical entities
Operational variables are defined in terms within
the experiment. They are concrete so that they
can be measured or manipulated
Conceptual – Operational:
• How we memorize (levels of processing) – "Has an 'a'" vs. "Related to ISU" judgment
• Kinds of things – Words rated as abstract or concrete
• Memory – Memory test
Variables



Independent variables (explanatory)
Dependent variables (response)
Extraneous variables



Control variables
Random variables
Confound variables

Correlational designs have similar functions
Many kinds of Variables





The variables that are manipulated by the
experimenter (sometimes called factors)
Each IV must have at least two levels


Remember the point of an experiment is
comparison
Combination of all the levels of all of the IVs
results in the different conditions in an
experiment
Independent Variables
1 factor, 2 levels (Factor A): Condition 1, Condition 2
1 factor, 3 levels (Factor A): Cond 1, Cond 2, Cond 3
2 factors, 2 x 3 levels (Factor A x Factor B): Cond 1 – Cond 6 (see the sketch below)
Independent Variables
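As a rough illustration (not from the slides), crossing the levels of each factor enumerates the conditions of a design; the factor and level names below are generic placeholders matching the 2 x 3 example above.

```python
from itertools import product

# Generic placeholders mirroring the 2 x 3 example above
factor_a = ["A1", "A2"]            # Factor A: 2 levels
factor_b = ["B1", "B2", "B3"]      # Factor B: 3 levels

# Crossing every level of every IV yields the experimental conditions
conditions = list(product(factor_a, factor_b))
for i, (a, b) in enumerate(conditions, start=1):
    print(f"Cond {i}: Factor A = {a}, Factor B = {b}")
# 2 levels x 3 levels -> 6 conditions
```

Adding a level to either factor multiplies the number of conditions accordingly.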

Methods of manipulation

Straightforward
• Stimulus manipulation – different conditions use different stimuli (e.g., abstract vs. concrete words)
• Instructional manipulation – different groups are given different instructions (e.g., has an "a" vs. "ISU related")

Staged
• Event manipulation – manipulate characteristics of the context, setting, etc.

Subject (Participant) – there are (mostly pre-existing) differences between the subjects in the different conditions
• leads to a quasi-experiment
Manipulating your independent variable

Choosing the right levels of your independent
variable




Review the literature
Do a pilot experiment
Consider the costs, your resources, your limitations
Be realistic
• Pick levels found in the “real world”

Pay attention to the range of the levels
• Pick a large enough range to show the effect
• Aim for the middle of the range
Choosing your independent variable

These are things that you want to try to
avoid by careful selection of the levels of
your IV (may be issues for your DV as well).




Demand characteristics
Experimenter bias
Reactivity
Floor and ceiling effects
Identifying potential problems


Characteristics of the study that may give away the
purpose of the experiment
May influence how the participants behave in the study
 Examples:
• Experiment title: The effects of horror movies on mood
• Obvious manipulation: Ten psychology students looking
straight up
• Biased or leading questions: Don’t you think it’s bad to
murder unborn children?
Demand characteristics

Experimenter bias (expectancy effects)

The experimenter may influence the results
(intentionally and unintentionally)
• E.g., Clever Hans

One solution is to keep the experimenter (as well as
the participants) “blind” as to what conditions are
being tested
Experimenter Bias

Knowing that you are being measured

Just by being in an experimental setting, people don't always respond the way that they "normally" would.
• Cooperative
• Defensive
• Non-cooperative
Reactivity

A value below which a response cannot be
made


As a result, the effects of your IV (if there are indeed any) can't be seen.
Imagine a task that is so difficult that none of your participants can do it.
Floor effects

When the dependent variable reaches a level
that cannot be exceeded



So while there may be an effect of the IV, that effect can't be seen because everybody has "maxed out"
Imagine a task that is so easy that everybody scores 100%
To avoid floor and ceiling effects, you want to pick levels of your IV that result in middle-level performance on your DV
Ceiling effects



Independent variables (explanatory)
Dependent variables (response)
Extraneous variables



Control variables
Random variables
Confound variables
Variables


The variables that are measured by the
experimenter
They are “dependent” on the independent
variables (if there is a relationship between the IV
and DV as the hypothesis predicts).

Consider our class experiment


Conceptual level: Memory
Operational level: Recall test



Present a list of words; participants make a judgment for each word
15 sec. of filler (counting backwards by 3's)
Measure the accuracy of recall (a scoring sketch follows below)
Dependent Variables
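A minimal scoring sketch, assuming recall accuracy is simply the proportion of studied words a participant writes down; the word list and function name are hypothetical, not the actual class materials.

```python
# Hypothetical word list and scoring function for the recall test
studied_words = {"table", "anchor", "justice", "campus", "melody"}

def recall_accuracy(recalled, studied):
    """Proportion of the studied list that appears on the recall sheet."""
    hits = {word.strip().lower() for word in recalled} & studied
    return len(hits) / len(studied)

print(recall_accuracy(["Table", "campus", "banana"], studied_words))  # 2 of 5 recalled -> 0.4
```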

How to measure your construct:

Can the participant provide a self-report?
• Introspection – specially trained observers of their own thought processes; the method fell out of favor in the early 1900s
• Rating scales – strongly agree / agree / undecided / disagree / strongly disagree

Is the dependent variable directly observable?
• Choice/decision (sometimes timed)

Is the dependent variable indirectly observable?
• Physiological measures (e.g. GSR, heart rate)
• Behavioral measures (e.g. speed, accuracy)
Choosing your dependent variable


Scales of measurement
Errors in measurement
Measuring your dependent variables



Scales of measurement – the correspondence between the numbers and the properties that we're measuring

The scale that you use will (partially) determine what kinds of statistical analyses you can perform (a small illustration follows the scale summary below)
Measuring your dependent variables

Categorical variables


Nominal scale
Quantitative variables
Scales of measurement

Nominal Scale: Consists of a set of categories that have
different names.



Label and categorize observations,
Do not make any quantitative distinctions between
observations.
Example:
• Eye color:
blue, green, brown, hazel
Scales of measurement

Categorical variables



Nominal scale
Ordinal scale
Quantitative variables
Scales of measurement

Ordinal Scale: Consists of a set of categories that are
organized in an ordered sequence.

Rank observations in terms of size or magnitude.

Example:
• T-shirt size: Small, Med, Lrg, XL, XXL
Scales of measurement

Categorical variables



Nominal scale
Ordinal scale
Quantitative variables

Interval scale
Scales of measurement

Interval Scale: Consists of ordered categories where
all of the categories are intervals of exactly the same
size.

With an interval scale, equal differences between numbers on
the scale reflect equal differences in magnitude.

Ratios of magnitudes are not meaningful.
• Example: Fahrenheit temperature scale – 40º is not twice as hot as 20º
Scales of measurement

Categorical variables



Nominal scale
Ordinal scale
Quantitative variables


Interval scale
Ratio scale
Scales of measurement

Ratio scale: An interval scale with the additional feature
of an absolute zero point.

Ratios of numbers DO reflect ratios of magnitude.

It is easy to get ratio and interval scales confused
• Example: Measuring your height with playing cards
• Ratio scale: 8 cards high vs. 5 cards high; 0 cards high means 'no height'
• Interval scale: 8 cards high vs. 5 cards high; 0 cards high means 'as tall as the table'
Scales of measurement

Categorical variables



Nominal scale
Ordinal scale
Quantitative variables


Interval scale
Ratio scale
“Best” Scale?
• Given a choice, usually prefer the highest level of measurement possible
Scales of measurement
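To illustrate the earlier point that the scale of measurement (partially) determines the analyses you can run, here is a rough lookup; treat the groupings as a common rule of thumb rather than a definitive list from the slides.

```python
# Rule-of-thumb descriptive statistics that are meaningful at each scale
PERMISSIBLE_STATS = {
    "nominal":  ["mode", "frequency counts"],
    "ordinal":  ["mode", "median", "percentiles"],
    "interval": ["mode", "median", "mean", "differences between scores"],
    "ratio":    ["mode", "median", "mean", "differences", "ratios of scores"],
}

for scale, stats in PERMISSIBLE_STATS.items():
    print(f"{scale:>8}: {', '.join(stats)}")
```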


Scales of measurement
Errors in measurement

Reliability & Validity
Measuring your dependent variables
Example: Measuring intelligence?




Reliability & Validity
How do we measure the construct?
How good is our measure?
How does it compare to other measures of the construct?
Is it a self-consistent measure?

Reliability


If you measure the same thing twice (or have two
measures of the same thing) do you get the same
values?
Validity

Does your measure really measure what it is
supposed to measure?
• Does our measure really measure the construct?
• Is there bias in our measurement?
Errors in measurement
Reliability = consistency
Validity = measuring what is intended
(Target diagram: unreliable & invalid vs. reliable but invalid vs. reliable & valid)
Reliability & Validity

Observed score = true score + measurement error

A reliable measure will have a small amount of error
Multiple "kinds" of reliability
Reliability

Test-retest reliability

Test the same participants more than once
• Measurement from the same person at two different times
• Should be consistent across different administrations (a small simulation sketch follows)
(Figure: reliable vs. unreliable test-retest patterns)
Reliability
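A minimal simulation sketch of the ideas above (not from the slides): observed scores are modeled as true score plus measurement error, and test-retest reliability is the correlation between two simulated administrations; all numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
true_scores = rng.normal(100, 15, n)      # stable "true" score of each person

def administer(error_sd):
    """One administration: observed score = true score + measurement error."""
    return true_scores + rng.normal(0, error_sd, n)

for error_sd in (2, 15):
    time1, time2 = administer(error_sd), administer(error_sd)
    r = np.corrcoef(time1, time2)[0, 1]   # test-retest correlation
    print(f"error SD = {error_sd:>2}: test-retest r = {r:.2f}")
# Small measurement error -> consistent scores (high r); large error -> low r
```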

Internal consistency reliability


Multiple items testing the same construct
Extent to which scores on the items of a measure
correlate with each other
• Cronbach’s alpha (α)
• Split-half reliability
• Correlation of score on one half of the measure with
the other half (randomly determined)
Reliability
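A sketch of Cronbach's alpha from its standard formula, assuming a participants-by-items score matrix; the toy data below are made up for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = participants, columns = items of one measure."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: 5 participants answering 4 items intended to tap the same construct
scores = np.array([[4, 5, 4, 5],
                   [2, 2, 3, 2],
                   [5, 5, 5, 4],
                   [1, 2, 1, 2],
                   [3, 3, 4, 3]])
print(round(cronbach_alpha(scores), 2))   # values near 1 indicate high internal consistency
```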

Inter-rater reliability


At least 2 raters observe behavior
Extent to which raters agree in their observations
• Are the raters consistent?

Requires some training in judgment
Reliability
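One common index of agreement between two raters is Cohen's kappa (not named on the slide), which corrects raw percent agreement for chance; the raters' codes below are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' categorical codes."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codings of eight observed behaviors by two trained raters
r1 = ["coop", "coop", "defensive", "coop", "noncoop", "coop", "defensive", "coop"]
r2 = ["coop", "coop", "defensive", "noncoop", "noncoop", "coop", "coop", "coop"]
print(round(cohens_kappa(r1, r2), 2))   # 1.0 = perfect agreement, 0 = chance-level
```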

Does your measure really measure what it is
supposed to measure?

There are many “kinds” of validity
Validity
(Diagram: kinds of validity – construct, internal, external, face, criterion-oriented, predictive, concurrent, convergent, discriminant)
Many kinds of Validity

Usually requires multiple studies, a large body
of evidence that supports the claim that the
measure really tests the construct
Construct Validity

At the surface level, does it look as if the
measure is testing the construct?
"This guy seems smart to me, and he got a high score on my IQ measure."
Face Validity

Are experiments “real life” behavioral situations,
or does the process of control put too much
limitation on the “way things really work?”
External Validity

Variable representativeness
• Relevant variables for the behavior studied along which the sample may vary
Subject representativeness
• Characteristics of the sample and target population along these relevant variables
Setting representativeness
• Ecological validity – are the properties of the research setting similar to those outside the lab?
External Validity

The precision of the results

Did the change in the DV result from the change in the IV, or did it come from something else?
Internal Validity

History – an event happens during the experiment

Maturation – participants get older (and other
changes)

Selection – nonrandom selection may lead to biases

Mortality – participants drop out or can’t continue

Testing – being in the study actually influences how
the participants respond
Threats to internal validity



Independent variables (explanatory)
Dependent variables (response)
Extraneous variables



Control variables
Random variables
Confound variables
Variables


Can you keep them constant?
Should you make them random variables?
Control your extraneous variable(s)

Control variables

Holding things constant - Controls for excessive
random variability
• 90 seconds for recall
• 15 seconds of counting backwards by 3’s
Extraneous Variables

Random variables – may freely vary, to spread variability equally across all experimental conditions

Randomization
• A procedure that assures that each level of an extraneous variable has an equal chance of occurring in all conditions of observation (a small sketch follows below).
• On average, the extraneous variable is not confounded with our manipulated variable.
• random order of word presentation
• time of day administered
• what they ate that day
• when they woke up
•…
Extraneous Variables
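A minimal sketch, assuming hypothetical word lists and participant IDs, of randomizing presentation order and assignment so that extraneous variables spread evenly across conditions.

```python
import random

random.seed(231)   # fixed seed only so the sketch is repeatable

# Hypothetical materials (not the actual class stimuli)
words = ["anchor", "justice", "table", "melody", "campus", "harbor"]
conditions = ["has an 'a'", "related to ISU"]
participants = [f"P{i:02d}" for i in range(1, 9)]

random.shuffle(participants)   # random assignment of people to conditions
assignment = {p: conditions[i % len(conditions)] for i, p in enumerate(participants)}

for person, condition in assignment.items():
    order = random.sample(words, k=len(words))   # fresh random word order per person
    print(person, condition, order)
```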

Confound variables

Other variables that haven't been accounted for (manipulated, measured, randomized, or controlled) and that can impact changes in the dependent variable(s)
Confound Variables


Pilot studies



A trial run-through
Don't plan to publish these results; just try out the methods
Manipulation checks


An attempt to directly measure whether the IV really affects the DV.
Look for correlations with other measures of the desired effects.
“Debugging your study”