I/O Psychology Research Methods
What is Science?
• Science: an approach that involves the understanding, prediction, and control of some phenomenon of interest.
• Scientific knowledge is:
  • Logical and concerned with understanding
  • Empirical
  • Communicable and precise
  • Probabilistic (disprove, NOT prove)
  • Objective / disinterestedness

Goals of Science
Example: We want to study absenteeism in an organization.
• Description: What is the current state of affairs?
• Prediction: What will happen in the future?
• Explanation: What is the cause of the phenomenon we're interested in?

What is “research”?
• Systematic study of phenomena according to scientific principles.
• A set of procedures used to obtain empirical and verifiable information, from which we then draw informed, educated conclusions.

The Empirical Research Process
1. Statement of the Problem
2. Design of the Research Study
3. Measurement of Variables
4. Analysis of Data
5. Interpretation/Conclusions
Step 1: Statement of the Problem
• Theory: a statement that explains the relationship among phenomena; gives us a framework within which to conduct research.
  • "There is nothing quite so practical as a good theory." – Kurt Lewin
• Two approaches:
  • Inductive – theory building; use data to derive theory.
  • Deductive – theory testing; start with theory and collect data to test that theory.
Step 1: Statement of the Problem
• Hypothesis
  • A testable statement about the status of a variable or the relationship among multiple variables
  • Must be falsifiable!
Step 1: Statement of the Problem
• Types of variables
  • Independent variables (IV): variables that are manipulated by the researcher.
  • Dependent variables (DV): the outcomes of interest.
  • Predictors and criteria: the analogous terms when variables are measured rather than manipulated.
  • Confounding variables: uncontrolled extraneous variables that permit alternative explanations for the results of a study.
Moderator Variable
• A special type of IV that influences the relationship between two other variables
  • [Diagram: X → Y, with M acting on the X–Y relationship]
• Example (see the sketch below):
  • Gender & hiring rate
  • M = type of job
  • The relationship between gender and hiring rate may change depending on the type of job individuals are applying for.
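A moderation hypothesis like this one is commonly tested by adding an interaction term (X × M) to a regression and checking whether its coefficient differs from zero. Below is a minimal sketch in Python, assuming numpy is available; the simulated data and coefficient values are invented for illustration.

```python
import numpy as np

# Simulate data in which M moderates the X-Y relationship
# (the 0.5 * x * m term). All values here are invented.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)   # IV (e.g., gender, numerically coded)
m = rng.normal(size=n)   # moderator (e.g., type of job, coded)
y = 0.4 * x + 0.2 * m + 0.5 * x * m + rng.normal(size=n)  # DV (e.g., hiring)

# Regress Y on X, M, and the X*M interaction; a nonzero interaction
# coefficient indicates moderation.
design = np.column_stack([np.ones(n), x, m, x * m])
b, *_ = np.linalg.lstsq(design, y, rcond=None)
print(f"intercept={b[0]:.2f}, X={b[1]:.2f}, M={b[2]:.2f}, X*M={b[3]:.2f}")
```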
Mediator Variable
• A special type of IV that accounts for the relation between the IV and the DV.
• Mediation implies a causal relation in which an IV causes a mediator, which causes a DV.
  • [Diagram: IV → MED → DV]
• Example (see the sketch below):
  • IV = negative feedback
  • MED = negative thoughts
  • DV = willingness to participate
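Mediation is often assessed by estimating the IV → MED path (a) and the MED → DV path controlling for the IV (b); the product a × b is the indirect effect. A minimal sketch, assuming numpy and using invented data that mirror the feedback example:

```python
import numpy as np

# Simulated data following the causal chain IV -> MED -> DV.
rng = np.random.default_rng(1)
n = 200
iv = rng.normal(size=n)                          # negative feedback
med = 0.6 * iv + rng.normal(size=n)              # negative thoughts
dv = 0.5 * med + 0.1 * iv + rng.normal(size=n)   # willingness to participate

def coefs(design, outcome):
    """Least-squares regression coefficients of outcome on design columns."""
    b, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return b

ones = np.ones(n)
a = coefs(np.column_stack([ones, iv]), med)[1]        # path a: IV -> MED
b = coefs(np.column_stack([ones, iv, med]), dv)[2]    # path b: MED -> DV, controlling IV
print(f"a={a:.2f}, b={b:.2f}, indirect effect a*b={a * b:.2f}")
```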
Moderator vs. Mediator
• A moderator variable is one that influences the strength of a relationship between two other variables.
• A mediator variable is one that explains the relationship between the two other variables.
Example
You are an I/O psychologist working for an insurance company. You want to assess which of two training methods is more effective for training new secretaries. You give one group of secretaries on-the-job training and a booklet to study at home. You give the second group on-the-job training and have them watch a 30-minute video.
Step 2: Research Design
• A research design is the structure or architecture for the study.
  • A plan for how to treat variables that can influence results, so as to rule out alternative interpretations.
• Primary research methods:
  • Experimental (laboratory vs. field research)
  • Quasi-experimental
  • Non-experimental (observational, survey)
Step 2: Research Design
• Secondary research methods
  • Meta-analysis: a statistical method for combining/analyzing the results from many studies to draw a general conclusion about relationships among variables (p. 61); see the sketch after this list.
• Qualitative research methods
  • Rely on observation, interviews, case studies, and analysis of diaries to produce narrative descriptions of events or processes.
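As a bare-bones illustration of the meta-analytic idea, the sketch below averages correlations from several studies, weighting each by its sample size (the starting point of Hunter–Schmidt style meta-analysis; full meta-analyses also correct for artifacts such as unreliability and range restriction). The study results and sample sizes are invented.

```python
import numpy as np

r = np.array([0.25, 0.32, 0.18, 0.40])   # correlation reported by each study
n = np.array([120, 85, 200, 60])         # sample size of each study

# Sample-size-weighted mean correlation across studies.
r_bar = np.sum(n * r) / np.sum(n)
print(f"Weighted mean r across {len(r)} studies: {r_bar:.3f}")
```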
Evaluating Research Design
• Internal validity (control)
  • Does X cause Y?
  • Lab studies eliminate distracting variables through experimental control.
  • Using statistical techniques to control for the influence of certain variables is statistical control.
• External validity (generalizability)
  • Does the relation of X and Y hold in other settings and with other participants and stimuli?
Threats to Internal Validity
• History
• Instrumentation
• Selection
• Maturation
• Mortality/attrition
• Testing
• Experimenter bias
• Awareness of being a subject
Step 3: Measurement
• Goal: quantify the IV and DV
• Psychological measurement: the process of quantifying variables (called constructs)
  • "The process of assigning numerical values to represent individual differences, that is, variations among individuals on the attribute of interest"
• A "measure" is any mechanism, procedure, tool, etc., that purports to translate attribute differences into numerical values
Step 3: Measurement
• Two classes of measured variables:
  • Categorical (or qualitative): differ in type but not amount
  • Continuous (or quantitative): differ in amount
Step 4: Data Analysis
• Statistics are what we use to summarize relationships among variables and to estimate the odds that they reflect more than mere chance.
  • Descriptive statistics: summarize, organize, and describe a sample of data.
  • Inferential statistics: used to make inferences from sample data to a larger population.
Distributions
[Figure slide: examples of frequency distributions]
Descriptive Statistics
• Measures of central tendency
  • Mean, median, mode
• Measures of variability
  • Range, variance, SD (see the sketch below)
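A minimal sketch of these descriptive statistics, computed with numpy on an invented sample of absenteeism scores (days absent):

```python
import numpy as np

days_absent = np.array([2, 0, 5, 3, 2, 8, 1, 2, 4, 3])

# Central tendency
mean = days_absent.mean()
median = np.median(days_absent)
values, counts = np.unique(days_absent, return_counts=True)
mode = values[counts.argmax()]          # most frequent score

# Variability
value_range = days_absent.max() - days_absent.min()
variance = days_absent.var(ddof=1)      # sample variance
sd = days_absent.std(ddof=1)            # sample standard deviation

print(f"mean={mean:.2f}, median={median}, mode={mode}")
print(f"range={value_range}, variance={variance:.2f}, SD={sd:.2f}")
```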
Differences in Variance
[Figure: frequency distributions illustrating low, normal, and high variance]
Inferential Statistics
• Compares a hypothesis to an alternative
• Statistical significance: the likelihood that the observed difference would be obtained if the null hypothesis were true
• Statistical power: the likelihood of finding a statistically significant difference when a true difference exists (see the sketch below)
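The sketch below illustrates both ideas with the two-training-methods example: a t-test on one simulated study gives a p-value (significance), and repeating the simulated study many times estimates power as the fraction of studies that detect the true difference. It assumes numpy and scipy; the group means, SDs, and sample sizes are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# One simulated study: booklet group vs. video group performance scores.
booklet = rng.normal(loc=70, scale=10, size=30)
video = rng.normal(loc=75, scale=10, size=30)
t, p = stats.ttest_ind(booklet, video)
print(f"t={t:.2f}, p={p:.3f}  (significant at .05: {p < 0.05})")

# Power: fraction of simulated studies that detect the true 5-point difference.
hits = 0
for _ in range(2000):
    g1 = rng.normal(70, 10, 30)
    g2 = rng.normal(75, 10, 30)
    if stats.ttest_ind(g1, g2).pvalue < 0.05:
        hits += 1
print(f"Estimated power: {hits / 2000:.2f}")
```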
Correlation
• Used to assess the relationship between two variables
• Represented by the correlation coefficient "r" (see the sketch below)
• r can take on values from –1 to +1
  • Size denotes the magnitude of the relationship
  • 0 means no relationship
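A minimal sketch computing r directly from its definition (the sum of cross-products of deviation scores, divided by the square root of the product of the sums of squares), on invented satisfaction and absence scores:

```python
import numpy as np

satisfaction = np.array([4, 7, 5, 8, 6, 3, 9, 5, 7, 6], dtype=float)
absences     = np.array([6, 2, 5, 1, 3, 8, 1, 4, 2, 3], dtype=float)

# Deviation scores from each mean.
dx = satisfaction - satisfaction.mean()
dy = absences - absences.mean()

r = (dx * dy).sum() / np.sqrt((dx ** 2).sum() * (dy ** 2).sum())
print(f"r = {r:.2f}")   # negative: more satisfaction, fewer absences
```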
Correlation and Regression
• Correlation
  • Scatterplot
  • Regression line
• Linear vs. non-linear
• Multiple correlation
• Correlation and causation
Prediction of the DV with One IV
• Correlations allow us to make predictions (see the sketch below)
• [Scatterplot: regression line used to predict a DV score from an IV score]
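A minimal sketch of prediction from one IV: fit the least-squares line ŷ = b0 + b1·x using the correlation-based formulas b1 = r·(SDy/SDx) and b0 = ȳ − b1·x̄. The test scores and performance ratings here are invented.

```python
import numpy as np

x = np.array([55, 60, 68, 72, 80, 86, 90, 95], dtype=float)  # IV: test score
y = np.array([2.1, 2.5, 3.0, 3.2, 3.8, 4.0, 4.4, 4.7])       # DV: performance

r = np.corrcoef(x, y)[0, 1]
b1 = r * (y.std(ddof=1) / x.std(ddof=1))   # slope
b0 = y.mean() - b1 * x.mean()              # intercept

new_score = 75.0
print(f"Predicted DV for IV={new_score}: {b0 + b1 * new_score:.2f}")
```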
Interpretation: Evaluating Measures
• How do you determine the usefulness of the information gathered from our measures?
• The answer:
  • Reliability evidence
  • Validity evidence
Interpretation: Evaluating Measures
• Reliability: the consistency or stability of a measure.
• A measure should yield a similar score each time it is given.
• We can get a reliable measure by reducing errors of measurement: any factor that affects obtained scores but is not related to the thing we want to measure.
  • Errors of measurement: random factors, practice effects, etc.
Evaluating Measures: Reliability
• Test-retest (index of stability)
  • Method: give the same test on two occasions and correlate the two sets of scores (coefficient of stability); see the sketch below.
  • Error: anything that differentially influences scores across time for the same test
  • Issue: how long should the time interval be?
  • Limitations:
    • Not good for tests that are supposed to assess change
    • Not good for tests of things that change quickly (e.g., mood)
    • Difficult and expensive to retest
    • Memory/practice effects are likely
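A minimal sketch of the coefficient of stability, assuming numpy: correlate invented scores from the same test given to the same people on two occasions.

```python
import numpy as np

# Scores for eight people at time 1 and time 2 (invented).
time1 = np.array([12, 15, 9, 20, 17, 14, 11, 18], dtype=float)
time2 = np.array([13, 14, 10, 19, 18, 13, 12, 17], dtype=float)

stability = np.corrcoef(time1, time2)[0, 1]
print(f"Coefficient of stability: {stability:.2f}")
```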
Evaluating Measures: Reliability
• Equivalent forms (index of equivalence)
  • Method: give two versions of a test and correlate the scores (coefficient of equivalence)
  • Reflects the extent to which the two different versions measure the same concept in the same way
  • Issues: are the tests really parallel? how long should the interval be?
  • Limitations:
    • Difficult and expensive
    • Testing time
    • Unique estimate for each interval
Evaluating Measures: Reliability
• Internal consistency reliability
  • Method: take a single test and look at how well the items on the test relate to each other
    • Split-half: similar to alternate forms (e.g., odd vs. even items)
    • Cronbach's alpha: mathematically equivalent to the average of all possible split-half estimates (see the sketch below)
  • Limitations:
    • Only usable for multiple-item tests
    • Some "tests" are not designed to be homogeneous
    • Doesn't assess stability over time
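A minimal sketch of Cronbach's alpha computed from its formula, alpha = (k/(k−1)) · (1 − Σ item variances / variance of total scores), on an invented person-by-item score matrix:

```python
import numpy as np

# Rows are people, columns are the k items of one test (invented data).
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
], dtype=float)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha for {k} items: {alpha:.2f}")
```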
Evaluating Measures: Reliability
• Inter-rater reliability
  • Method: two different raters rate the same targets, and the ratings are correlated
    • The correlation reflects the consistency among the ratings
    • Issue: reliability doesn't imply accuracy
  • Limitations:
    • Need informed, trained raters
    • Ratings are not a good way to measure many attributes
Interpretation: Evaluating Measures
• Validity:
  • The accuracy of inferences made on the basis of data
  • Whether a measure accurately and completely represents what was intended to be measured
• Validity is not a property of the test
  • It is a property of the inferences we make from the test scores
Evaluating Measures: Validity
• Criterion-related
  • Predictive
  • Concurrent
• Content-related
• Construct-related
• Reliability is a necessary but not sufficient condition for validity
Content Validity
• The extent to which a predictor provides a representative sample of the thing we're measuring
• Example: first exam
  • Content: history, research methods, criterion theory, job analysis, measurement in selection
  • Evidence: SME evaluation
Criterion-Related Validity
• The extent to which a predictor relates to a criterion
• Evidence
  • Correlation (called the validity coefficient)
    • A good validity coefficient is around .3 to .4
  • Concurrent validity
  • Predictive validity
Construct Validity
• The extent to which a test is an accurate representation of the construct it is trying to measure
• Construct validity results from the slow accumulation of evidence (multiple methods)
• Evidence:
  • Content validity and criterion-related validity can provide support for construct validity
  • Convergent validity
  • Divergent (discriminant) validity
Step 5: Conclusions From Research
• You are making inferences!
• What if your inferences seem "wrong"?
  • The theory is wrong?
  • The information (data) is bad?
    • Bad measurement?
    • Bad research design?
    • Bad sample?
  • The analysis was wrong?
Step 5: Conclusions From Research
• Cumulative process
• Dissemination
  • Conference presentations & journal publications
• Boundary conditions
• Generalizability
• Causation
• Serendipity
Research Ethics
• Informed consent
• Welfare of subjects
• Conflicting obligations to the organization and to the participants