March 26—Quantitative analysis

QUANTITATIVE ANALYSIS
Reporting Standards in Psych



JARS and MARS
- Impetus for them
- Covers more than statistics
- What are threats to conclusion validity? How can you deal with them?
  - Low reliability of measures
  - Problems with your manipulation
  - “Noise” in the setting (internet data)
  - Heterogeneity of participants
  - Low power (includes those listed above)
  - Multiple comparisons (fishing and error rate)
  - Violations of assumptions of your tests
  - Type I vs. Type II error
Steps
- Even before that: design it well to start with, and pilot test
- What should be the first thing you do when you begin collecting data?
  - Check it as you go along
- Then what?
  - Clean the data
- What should you do next?
  - Examine descriptives
- And only then:
  - Do inferential stats
Initial checks
- What do you want to look for as you are collecting data?
  - Make sure people are answering questions correctly
  - Check for ceiling or floor effects
  - Check for any other problems or misunderstandings
- How can these issues be addressed before/during collection?
  - Pilot tests
  - Talk with participants after the study
  - Have a comment box
What checks should occur during data entry/cleaning?
- Do double entry or random checks
- Do frequencies to look for duplicates or impossible scores
- Check IP addresses and time stamps
- Declare missing values (and make sure the missing-value code can't be a real value)
- Add value labels and label all variables
- Do recodes in syntax or into new variables
- Do computes in syntax and describe them with value labels
- Make a participant variable and variables for when/where collected, by whom, time started and ended, etc.
- Clearly label your datafiles so you know which is which (make a codefile with info on the study and data)
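The slides describe these checks in SPSS terms (frequencies, recodes in syntax, value labels). Purely as an illustration, here is a minimal sketch of the same ideas in Python/pandas; the variable names and data are made up.

```python
import numpy as np
import pandas as pd

# Tiny fake dataset standing in for a raw datafile
df = pd.DataFrame({
    "participant_id": [1, 2, 2, 3],
    "age": [19, 21, 21, 999],          # 999 is an impossible value to catch
    "satisfaction": [4, 99, 3, 5],     # 99 = "no response" code
    "political_7pt": [2, 4, 6, 7],
})

# Frequencies: look for duplicates and impossible scores
print(df["participant_id"].duplicated().sum(), "duplicate ID(s)")
print(df["age"].value_counts().sort_index())

# Declare missing values: recode the 99 code to true missing
# (after confirming 99 could never be a real score)
df["satisfaction"] = df["satisfaction"].replace(99, np.nan)

# Recode in code (not by hand) and into a new variable, not over the original
df["political_3cat"] = pd.cut(df["political_7pt"], bins=[0, 3, 4, 7],
                              labels=["liberal", "moderate", "conservative"])
print(df)
```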
Other common data management problems
- When you recode categories, make sure your recodes make sense (e.g., political orientation)
- Do reliability item analyses to check for poor items and to see if there are ones that should have been recoded and weren't (see the sketch after this list)
- Figure out how to appropriately deal with missing values
  - Little's MCAR test (not available in our version)
  - Imputation
  - Re-read our earlier article on missing values!
  - What is the default in SPSS and why is that bad?
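Not part of the slides, but as a rough sketch of the item analysis behind Cronbach's alpha and alpha-if-item-deleted (simulated items, hypothetical names):

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per scale item, one row per participant (no missing data)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Simulated 4-item scale tapping one underlying construct
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
scale = pd.DataFrame({f"item{i}": latent + rng.normal(scale=1.0, size=200)
                      for i in range(1, 5)})

print("alpha =", round(cronbach_alpha(scale), 3))

# A poor item, or one that should have been reverse-scored and wasn't, typically
# shows up as an item whose removal raises alpha
for col in scale.columns:
    print(col, "alpha if deleted:", round(cronbach_alpha(scale.drop(columns=col)), 3))
```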

What else should you do before you begin analyzing your data?
- Get to know your data!!!
- Do frequencies and look at the distributions and ranges
- Look at the means and SDs and think about what those really mean (measures of central tendency and variability)
  - Std deviation vs. variance vs. std error
- Do crosstabs (what test would go with these?)
- Do scatterplots and other graphs
  - Histogram
  - Stem and leaf
  - Boxplot
  - Check out Edward Tufte's books
- Look for outliers: why do you get them and how can you find them?
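As a rough companion to this list (again not course material; the data below are simulated), the same "get to know your data" steps in Python:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dat = pd.DataFrame({
    "condition": rng.choice(["control", "treatment"], 150),
    "mood": np.round(rng.normal(5, 1.5, 150), 1),
})

print(dat["mood"].describe())                          # mean, SD, quartiles, range
print("skew:", round(dat["mood"].skew(), 2),
      "kurtosis:", round(dat["mood"].kurt(), 2))
print(pd.crosstab(dat["condition"], dat["mood"] > 5))  # a quick crosstab

# One common outlier screen (not the only one): standardized scores beyond |3|
z = (dat["mood"] - dat["mood"].mean()) / dat["mood"].std(ddof=1)
print(dat[z.abs() > 3])

# dat["mood"].plot(kind="box")   # boxplot (requires matplotlib); .hist() gives a histogram
```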
Check for violations of assumptions
- Normality
  - Univariate and multivariate
  - Skew and kurtosis
- Linearity
  - Multivariate
  - Scatterplots
  - Graphing residuals
- Homoscedasticity
  - Homogeneity of variance at univariate level (Levene's test)
  - Box's M at multivariate level
  - Scatterplots
- Use Analyze/Descriptives/Explore or the options within GLM
- Independence (more on that later)
- May need to transform variables
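Purely as an illustration of a couple of these checks (simulated data; the course does the equivalent via Explore and the GLM options in SPSS):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(5.0, 1.0, 60)
treatment = rng.normal(5.6, 1.8, 60)   # deliberately unequal spread

# Normality: skew, kurtosis, and a Shapiro-Wilk test per group
for name, g in [("control", control), ("treatment", treatment)]:
    print(name, "skew:", round(stats.skew(g), 2),
          "kurtosis:", round(stats.kurtosis(g), 2),
          "Shapiro-Wilk p:", round(stats.shapiro(g).pvalue, 3))

# Homogeneity of variance: Levene's test (a significant p flags unequal variances)
print("Levene:", stats.levene(control, treatment))
```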
Relationship between…
- Sample size
- Effect size
  - Partial eta² vs. d
- Alpha level (think about multiple tests)
- Power (what level do you want?)
- df
- Confidence intervals
- What should you report?
- One- vs. two-tailed tests
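A small worked example of the trade-offs among these quantities (not from the slides), using statsmodels' power routines for an independent-samples t-test:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# n per group needed to detect d = .50 at alpha = .05 (two-tailed) with power = .80
print(analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8))   # ~64 per group

# The same effect with a one-tailed test in the predicted direction needs fewer people
print(analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                           alternative="larger"))                      # ~51 per group
```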
What things bias correlations?
- Sample size
- Outliers
- Truncated range
- See page 280 in the book (Anscombe's Quartet)
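A quick simulated demonstration of two of these biases (this is not the Anscombe example from the book):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 100)
y = 0.5 * x + rng.normal(0, 1, 100)

r_full = stats.pearsonr(x, y)[0]

# Truncated range: keep only the upper half of x and the correlation shrinks
keep = x > np.median(x)
r_truncated = stats.pearsonr(x[keep], y[keep])[0]

# A single extreme point sitting far out on the diagonal inflates the correlation
x_out, y_out = np.append(x, 10.0), np.append(y, 10.0)
r_outlier = stats.pearsonr(x_out, y_out)[0]

print(round(r_full, 2), round(r_truncated, 2), round(r_outlier, 2))
```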
When to use which test
- How do you test relationships vs. levels?
- GLM:
  - t-test
  - ANOVA
  - ANCOVA
  - MANOVA
  - Regression (what does it mean that it's least squares?)
  - Factor analysis (exploratory vs. confirmatory)
  - Multidimensional scaling
  - Discriminant function analysis

ANCOVAs
- Why can't you use ANCOVA with nonmanipulated IVs and unadjusted DVs?
- Adjusted pretest = mean + reliability × (value – mean) (worked example below)
- Which reliability estimate should you use?
- Other options:
  - Matching
  - Propensity score analysis
- Homogeneity of regression assumption
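A made-up worked example of that adjustment: with a group pretest mean of 50, an observed pretest of 70, and a reliability of .80, the adjusted pretest is 50 + .80 × (70 – 50) = 66; the extreme score is pulled back toward the group mean to correct for unreliability.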
More on inferential stats
- What is the general linear model?
- Terms and how they are used:
  - Dummy variables (ethnicity in regression)
  - Regression line
  - Residuals
  - Standard errors
  - Model specification
  - Least squares (illustrated in the sketch below)
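Not part of the slides, but a tiny numeric illustration of "least squares" and "residuals": the fitted regression line is the one whose residuals (observed minus predicted) have the smallest possible sum of squares.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0, 1, 50)
y = 2.0 + 0.7 * x + rng.normal(0, 1, 50)

# Fit intercept and slope by ordinary least squares
X = np.column_stack([np.ones_like(x), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
residuals = y - X @ b

print("intercept, slope:", np.round(b, 2))
print("sum of squared residuals:", round(np.sum(residuals ** 2), 2))

# Any other line does worse on this criterion, e.g. a slope nudged up by .1
b_alt = b + np.array([0.0, 0.1])
print("nudged line's SSR:", round(np.sum((y - X @ b_alt) ** 2), 2))
```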
Analysis terms
- ANOVA factors: random vs. fixed
- Methods of regression: stepwise, hierarchical, enter
- Types of regression: linear, logistic
- Building interaction terms in regression (see the sketch after this list)
- Multicollinearity
- Factor analysis (eigenvalues, exploratory vs. confirmatory, scree plot)
- Sample size
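A minimal sketch (not course material) of dummy variables and an interaction term in a regression, using statsmodels' formula interface; all variables and data are simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
d = pd.DataFrame({
    "ethnicity": rng.choice(["A", "B", "C"], n),   # categorical predictor
    "stress": rng.normal(0, 1, n),
    "support": rng.normal(0, 1, n),
})
d["wellbeing"] = (5 - 0.5 * d["stress"] + 0.3 * d["support"]
                  + 0.4 * d["stress"] * d["support"] + rng.normal(0, 1, n))

# C(ethnicity) is expanded into dummy (indicator) variables behind the scenes;
# stress * support adds both main effects plus their product, the interaction term
model = smf.ols("wellbeing ~ C(ethnicity) + stress * support", data=d).fit()
print(model.summary())
```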
Review
- Mediator
- Moderator
- Covariate
- Control variable
- How many effects are there in a 2 x 2 x 3 design?
- How many people for n = 10 with a between, between, within design?
- What are df and how/when do you report them?
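One way to work the two counting questions (the specific numbers here are only an example): a 2 x 2 x 3 design has 7 effects, namely 3 main effects, 3 two-way interactions, and 1 three-way interaction. If that same design has the two 2-level factors between subjects and the 3-level factor within subjects, then n = 10 per cell requires 2 x 2 x 10 = 40 participants, each measured 3 times; only the between-subjects cells multiply the number of people needed.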
Other types of analyses
- SEM, HLM
Issues of nonindependence
- Why might data not be independent? Examples of dyad and group analyses in different areas of psych?
- Why should we care? When is it a nuisance vs. important for its own sake?
- How can you determine the degree of nonindependence?
  - Is there a natural distinction in dyads? Then r
  - If not, or if groups, intraclass correlation
- What is an intraclass correlation? What does it tell you? How can you test its significance?
ICCs
- Calculation: www.uni.edu/harton/ICC.xlsx (see also the sketch after this slide)
- The ICC significance test has low power, so use a liberal test (α = .20)
- Effects of nonindependence depend on the direction of the ICC and the type of IV (table 17.5)
- Examples of:
  - Between IV
  - Within IV
  - Mixed IV (not design!)
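The course points to the spreadsheet above for the actual calculation; as a rough parallel only, a one-way ANOVA-based ICC(1) for equal-size groups can be computed like this (made-up ratings):

```python
import numpy as np

def icc1(scores: np.ndarray) -> float:
    """One-way ICC(1): rows = groups, columns = members (equal group sizes)."""
    g, k = scores.shape
    grand = scores.mean()
    ms_between = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (g - 1)
    ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Three 4-person groups whose members resemble each other -> large positive ICC
groups = np.array([[6, 7, 6, 7],
                   [3, 2, 3, 2],
                   [5, 5, 4, 6]], dtype=float)
print(round(icc1(groups), 2))
```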

Biases in ICC
- When the IV is between and the ICC is negative, the test is too conservative
- When the IV is between and the ICC is positive, the test is too liberal
- (Opposite with within; differs with mixed)
How to deal with nonindependence
- Between variables: 3 levels of variation: A, G/A, and S/G/A
  - Can use the group or individual level depending on the question, using a different F denominator
  - Regression for continuous data
- Within variables: 4 levels of variation: A, G, G x A, S/G x A
- Mixed variables: actor-partner interdependence model
  - Within-dyad regression (differences), then between-dyad regression (averages)
  - Regression coefficients used to estimate actor and partner effects

When multi-level models are needed
- The lower-level n is not the same for every upper level
- You're interested in interactions between lower- and upper-level variables
- The lower levels within upper levels are potentially nonindependent
- Analysis:
  - Regression for each upper-level group, then use upper-level predictors to predict the intercepts and slopes of the groups (weighted)
  - Doesn't work if the ICC is negative, and it is problematic for dyads/small groups (need group size > number of lower-level predictors + 2)
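The slides describe the slopes-as-outcomes logic (a regression per group, then upper-level predictors of those intercepts and slopes). As an illustration only, modern software fits the equivalent model directly; here is a minimal random-intercept sketch in Python/statsmodels with simulated students nested in classrooms (all names and numbers invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_groups, n_per = 30, 12
group = np.repeat(np.arange(n_groups), n_per)
group_effect = rng.normal(0, 0.8, n_groups)[group]   # nonindependence within classrooms
study_hours = rng.normal(0, 1, n_groups * n_per)
score = 70 + 3 * study_hours + group_effect + rng.normal(0, 2, n_groups * n_per)

d = pd.DataFrame({"classroom": group, "study_hours": study_hours, "score": score})

# Random-intercept model: score predicted by study_hours, intercepts vary by classroom
m = smf.mixedlm("score ~ study_hours", data=d, groups=d["classroom"]).fit()
print(m.summary())
```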

When there are multiple dyads in a group
- Round robin designs
- Block designs
- Kenny's social relations model (actor-partner interdependence model)
  - Components: John's rating of Ashley = average for the class, John's tendency to see people as productive, Ashley's tendency to be seen as productive, and John's unique view of Ashley (group mean, actor, partner, and relationship effects)
  - Can also test correlations between actor and partner effects or with self-reports
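A made-up illustration of that decomposition: if the class average rating is 5, John rates everyone about 1 point higher than average (actor effect +1), Ashley is rated about half a point higher than average by everyone (partner effect +0.5), and John sees Ashley as a bit more productive than even those tendencies predict (relationship effect +0.5), then John's rating of Ashley is 5 + 1 + 0.5 + 0.5 = 7.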
