RM_Regression_and_tables

Correlation, OLS (simple) regression,
logistic regression, reading tables
Some statistics used to test relationships

Procedure: Regression
  Level of measurement: All variables continuous
  Statistics and interpretation:
    r2, R2 – Proportion of change in the dependent variable accounted for by change in the independent variable. R2 denotes the cumulative effect of multiple independent variables.
    b – Unit change in the dependent variable caused by a one-unit change in the independent variable.

Procedure: Logistic regression
  Level of measurement: DV nominal & dichotomous, IV’s nominal or continuous
  Statistics and interpretation:
    b – Don’t try to interpret directly - it’s on a logarithmic scale.
    exp(B) – Odds that the DV will change if the IV changes one unit, or, if the IV is dichotomous, if it changes its state. Range 0 to infinity; 1 denotes even odds, or no relationship. Higher than 1 means a positive relationship, lower a negative relationship. Use percentage to describe likelihood of effect.

Procedure: Chi-Square
  Level of measurement: All variables categorical (nominal or ordinal)
  Statistic and interpretation:
    X2 – Reflects difference between observed and expected frequencies. Use table to determine if coefficient is sufficiently large to reject the null hypothesis.

Procedure: Difference between means
  Level of measurement: IV dichotomous, DV continuous
  Statistic and interpretation:
    t – Reflects magnitude of difference. Use table to determine if coefficient is sufficiently large to reject the null hypothesis.
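The chi-square statistic in the table above sums the squared differences between observed and expected frequencies. A minimal sketch of that computation, using made-up cell counts (the function name and data are for illustration only):

```python
# Illustrative sketch: chi-square computed by hand from observed and
# expected frequencies. The cell counts below are made up.

def chi_square(observed, expected):
    """Sum of (O - E)^2 / E across all cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical 2x2 table flattened into lists of four cell counts
observed = [30, 20, 10, 40]
expected = [20, 30, 20, 30]  # frequencies expected if the null is true

x2 = chi_square(observed, expected)
print(round(x2, 2))  # compare against a chi-square table to test the null
```

The larger the gap between observed and expected counts, the larger X2 grows, which is why a sufficiently large coefficient lets us reject the null hypothesis.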
Review – What are the odds?
• “Test” statistics help us evaluate whether there is a relationship between variables that goes beyond chance
• If there is, one can reject the null hypothesis of no relationship
• But in the social sciences, one cannot take more than five chances in one-hundred of incorrectly rejecting the null hypothesis
• Here is how we proceed:
– Computers automatically determine whether the test statistic’s coefficient (expressed numerically, such as .03) is of sufficient magnitude to reject the null hypothesis
– How large must a coefficient be? That varies. In any case, if a computer decides that it’s large enough, it automatically assigns one, two or three asterisks (*, **, ***).
– One asterisk is the minimal level required for rejecting the null hypothesis. It is known as < .05, meaning less than five chances in 100 that a coefficient of that magnitude (size) could be produced by chance.
– If the coefficient is so large that the probability is less than one in one-hundred that it was produced by chance, the computer assigns two asterisks (**)
– An even better result is three asterisks (***), where the probability that a coefficient was produced by chance is less than one in a thousand
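The asterisk convention above is mechanical enough to sketch in a few lines of code (the helper name is hypothetical; the thresholds are the ones the slides describe):

```python
# Sketch of the asterisk convention: thresholds .05, .01, .001.
# The function name `stars` is hypothetical, not from any package.

def stars(p):
    """Return the significance asterisks a stats package would print."""
    if p < .001:
        return "***"
    if p < .01:
        return "**"
    if p < .05:
        return "*"
    return ""  # not significant: cannot reject the null hypothesis

for p in (.03, .008, .0004, .2):
    print(p, stars(p) or "NS")
```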
CORRELATION
Correlation
Simple association between variables - used when all are continuous
• r: simple relationship between variables
– Coefficients range between -1 and +1 (0 = no relationship)
• R: multiple correlation – correlation among multiple variables (seldom used)
• Computers automatically test correlations for statistical significance
• To test hypotheses one must use regression (R2)
Correlation “matrix”
Displays relationships between variables
[Scatter plot: HEIGHT (inches, roughly 58–76) on the x-axis vs. WEIGHT (pounds, roughly 100–240) on the y-axis, with points rising from lower left to upper right]

Correlations (SPSS output):
                               HEIGHT    WEIGHT
HEIGHT  Pearson Correlation     1.000     .719**
        Sig. (2-tailed)           .       .000
        N                         26        26
WEIGHT  Pearson Correlation     .719**    1.000
        Sig. (2-tailed)         .000        .
        N                         26        26
**. Correlation is significant at the 0.01 level (2-tailed).
Sig. (2-tailed) means that the significance of the relationship was computed without specifying the direction of the effect. The relationship is positive - both variables rise and fall together.
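Pearson’s r, the coefficient in the matrix above, can be computed from scratch. A minimal sketch with made-up height/weight data (the numbers are not from the SPSS output):

```python
# Sketch: Pearson's r computed by hand for a hypothetical
# height/weight sample (data below are made up).
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

height = [60, 62, 64, 66, 68, 70, 72]         # inches (made-up data)
weight = [115, 126, 138, 152, 165, 180, 196]  # pounds (made-up data)
print(round(pearson_r(height, weight), 3))  # close to +1: strong positive
```

An r near +1 mirrors what the scatter plot shows: as height rises, weight rises with it.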
Correlation matrices
• Data analysis often begins with a correlation matrix
• Correlation matrices display the simple, “bivariate” relationships between every possible combination of continuous variables. Dependent variables are usually included.
• The same variables run in the same order down the left and across the top
– When a variable intersects with itself, “1.00” is inserted as a placeholder
[Correlation matrix from the study cited below; variables include Effort and Male]
Richard B. Felson and Jeremy Staff, “Explaining the Academic Performance-Delinquency Relationship,” Criminology (44:2, 2006)
REGRESSION
Regression (ordinary - known as “OLS”)
Using the r to test hypotheses. All variables must be continuous.
• r2 – coefficient of determination: proportion of change in the dependent variable accounted for by the change in the independent variable
• R2 – same, summary effect of multiple IV’s on the DV
• b or B. Unit change in the DV for each unit change in the IV. Unlike r’s, which are on a scale of -1 to +1, b’s and B’s are not “standardized,” so they cannot be compared
– Lowercase (b) refers to a sample
– Uppercase (B) refers to a population (no sampling)
– For our purposes it makes no difference whether b’s are lowercase or uppercase.
• SE – the standard error. All coefficients include an error component. The greater this error, the less likely that the b or B will be statistically significant.
Hypothesis: Age → Weight
R2 = .98  B = 7.87  SE = .271  sig = .000
– For each unit change in age (year) weight will change 7.87 units (pounds)
– Since the B is positive, age and weight go up & down together
– Probability that the null hypothesis is true is less than 1 in 1,000

Hypothesis: Age → Height
Age range: 0 - 20
R2 = .97  B = 2.096  SE = .088  sig = .000

Restricting the range of an explanatory variable
Age range: 20 - 33
R2 = .07  B = -.249  SE = .169  sig = .152
– B is negative: as one variable goes up, the other goes down (a tiny bit!)
– Non-significant (by age 20 we’re done growing)
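The slope (B) and r2 in the examples above come from ordinary least squares. A minimal sketch of that computation with made-up age/weight data (the numbers below are illustrative, not the slide’s actual dataset):

```python
# Sketch of simple (OLS) regression computed by hand: b is the slope
# (unit change in DV per unit change in IV) and r2 the proportion of
# change in the DV accounted for. The age/weight data are made up.

def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b = sxy / sxx                  # slope (B)
    a = my - b * mx                # intercept
    ss_tot = sum((v - my) ** 2 for v in y)
    ss_res = sum((v - (a + b * u)) ** 2 for u, v in zip(x, y))
    r2 = 1 - ss_res / ss_tot       # coefficient of determination
    return b, a, r2

age = [2, 5, 8, 11, 14, 17, 20]            # years (made-up data)
weight = [30, 45, 62, 85, 110, 135, 155]   # pounds (made-up data)
b, a, r2 = ols(age, weight)
print(round(b, 2), round(r2, 2))  # positive slope, r2 near 1
```

Restricting the range of age (say, to 20-33 as above) would flatten the slope and shrink r2, which is exactly what the third example shows.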
Another regression example
Hypothesis: observations of social disorder → perceptions of social disorder
(Table columns: Independent variables | B | SE | p)
Procedure:
1. Dependent variable is understood - it is “embedded” in the table (here it is “citizen perceptions of social disorder,” a continuous measure)
2. Independent variables normally run down the left column
3. Significant relationships (p < .05) are denoted two ways - with asterisks, and/or a p-value column
4. When assessing a relationship, note whether the B or b is positive (no sign) or negative (- sign).
Joshua C. Hinkle and Sue-Ming Yang, “A New Look Into Broken
Windows: What Shapes Individuals’ Perceptions of Social
Disorder?,” Journal of Criminal Justice (42: 2014, 26-35)
R2 has siblings with similar interpretations
R2 reports the percentage of the change in the dependent variable that is caused by the changes in the independent variables, taken together. (It’s their total, “model” effect.) Here R2 is used as originally designed, in ordinary (OLS) regression analysis.
The “real” R2 requires that all variables be continuous. In this study, which used logistic regression (DV is nominal, 0 or 1), the authors give the coefficients for two R2 stand-ins, which supposedly report the same thing.
Joshua C. Hinkle and Sue-Ming Yang, “A New Look Into Broken Windows: What Shapes Individuals’ Perceptions of Social Disorder?,” Journal of Criminal Justice (42: 2014, 26-35)
LOGISTIC REGRESSION
Logistic regression
Used when dependent variable is nominal (i.e., two mutually exclusive categories, 0/1)
and independent variables are nominal or continuous
Richard B. Felson, Jeffrey M.
Ackerman and Catherine A.
Gallagher, “Police
Intervention and the Repeat of
Domestic Assault,”
Criminology (43:3, 2005)
Dependent variable: Risk of a future assault (0,1)
• b is the logistic regression coefficient. It is given in log-odds units.
• Exp b, the “odds ratio,” is derived from the b. It reports the effect on the dependent variable (DV) of a one-unit change in the independent variable (IV).
– An Exp b of exactly 1 means no relationship: the odds are even (50/50) that as the IV changes one unit the DV will change one unit. In other words, the chances of correctly predicting an effect are no better than a coin toss.
– Exp b’s greater than 1 indicate a positive relationship, less than 1 a negative relationship
• Arrest decreases (negative b) the odds of repeat victimization by 22 percent (1 - .78 = .22), but the effect is non-significant (no asterisk)
• Not reported (positive b) increases the odds of repeat victimization by 89 percent (Exp b = 1.89), or 1.89 times, a statistically significant change
• Prior victimization increases the odds of repeat victimization 408 percent or 5.08 times, also statistically significant (it’s not 508 percent because Exp b’s begin at 1)
“Percent” v. “times”
100% larger = 2X (two times) larger; 200% larger = 3X (three times) larger.
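The percent/times conversion above is a one-line formula in each direction. A minimal sketch (the helper names are hypothetical):

```python
# Sketch of the "percent larger" vs. "times larger" conversion:
# 2X larger = 100% larger, 3X larger = 200% larger.

def percent_larger(times):
    """Convert 'N times larger' to 'percent larger'."""
    return (times - 1) * 100

def times_larger(percent):
    """Convert 'percent larger' back to 'N times larger'."""
    return percent / 100 + 1

print(percent_larger(2))   # 2X larger = 100% larger
print(percent_larger(3))   # 3X larger = 200% larger
print(times_larger(408))   # 408% larger = 5.08 times larger
```

This is why an Exp b of 5.08 means a 408 percent increase, not 508.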
OLS regression v. logistic regression
• OLS regression analysis predicting perception of social disorder (DV) - table columns: IV’s | B | S.E. | p. The DV is continuous.
• Logistic regression analysis predicting feeling unsafe (DV) - table columns: IV’s | B | Exp B | S.E. | p. The DV is nominal - 0 and 1.
Joshua C. Hinkle and Sue-Ming Yang, “A New Look Into Broken Windows: What Shapes Individuals’ Perceptions of Social Disorder?,” Journal of Criminal Justice (42: 2014, 26-35)
Practical exercise - logistic regression
Effects of broken homes on future youth behavior
Main independent variable: broken home
Dependent variable: conviction for crime of violence
Delphine Theobald, David P. Farrington and Alex R. Piquero, “Childhood Broken Homes and Adult Violence: An Analysis of Moderators and Mediators,” Journal of Criminal Justice (41:1, 2013)
• Use the column Exp(B) and percentages to describe the effects of significant variables
• Describe the levels of significance using words
• Youths from broken homes were 236 percent more likely to be convicted of a crime of violence. The effect was significant, with less than 1 chance in 100 that it was produced by chance.
• Youths with poor parental supervision were 128 percent more likely to be convicted of a violent crime. The effect was significant, with less than 5 chances in 100 that it was produced by chance.
Using logistic regression to analyze parking lot data
A different hypothesis: Car value → parking lot
• Independent variable: car value. Continuous, 1-5 scale.
• Dependent variable: parking lot. Nominal, faculty lot = 0, student lot = 1
• b = -1.385*  SE = .668  Sig.: .038  Exp(B) = .250
• Effect: Since b is negative, as car value increases one unit, the odds that the car is in the student lot (1) rather than the faculty lot (0) decrease by 75 percent. If car value goes down, lot type tends to move from 0 (faculty) to 1 (student). This effect is as hypothesized.
– When students estimated car values they probably did it in chunks of $5,000 or so. So that would constitute a “unit”.
• Calculation: 1.00 - Exp B (.25) = .75, or 75 percent. (Exp B is less than 1.00 because b is negative.)
• Probability that null hypothesis is true: less than 4 chances in 100 (one asterisk, actual significance .038). Because its probability is less than 5 in 100, the null hypothesis is rejected and the working hypothesis is confirmed.
“Poisson” logistic regression* –
effects of audience characteristics on substance use
Alcohol and cannabis use
at adolescent parties
Research questions
• What is the relationship
between the size of gatherings
and substance use?
• What is the relationship
between the presence of
peers and substance use?
• What is the relationship
between the behavior of peers
and substance use?
Owen Gallupe and Martin Bouchard, “Adolescent Parties
and Substance Use: A Situational Approach to Peer
Influence,” Journal of Criminal Justice (41: 2013, 162-171)
* “Poisson” best when comparing counts of things
“standardizing” makes the b’s comparable
Findings
• Higher levels of substance use
tend to occur in smaller
gatherings
• Less alcohol use in the
presence of close friends
• Except that alcohol/cannabis use is higher when friends are also using
• Peer behavior is the key
Logistic regression –
converting b to exp(b)
• Use an exponents calculator
– http://www.rapidtables.com/calc/math/Exponent_Calculator.htm
• For “number,” always enter the constant e, approximately 2.72
• For “exponent,” enter the b or B value, also known as the “log-odds”
• The result is the odds ratio, also known as exp(b)
• In the left example the b is 1.21, and the exp(b) is 3.36.
– Meaning, for each unit change in the IV, the odds of the DV increase 236 percent
• In the right example the b is -.610 (note the negative sign) and the exp(B) is .543
– Meaning, for each unit change in the IV, the odds of the DV decrease 46 percent (1.00 - .54)
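The same conversion can be done without a web calculator. A minimal sketch, using the slide’s two example coefficients (helper names are hypothetical; `math.exp` uses the exact value of e, so results differ slightly from the 2.72 shortcut):

```python
# Sketch of the b -> exp(b) conversion: exp(b) = e**b, where
# e is approximately 2.71828 (the "2.72" entered by hand above).
import math

def odds_ratio(b):
    """Convert a log-odds coefficient b to an odds ratio, exp(b)."""
    return math.exp(b)

def percent_change(b):
    """Percent change in the odds per one-unit change in the IV."""
    return (math.exp(b) - 1) * 100

print(round(odds_ratio(1.21), 2))    # about 3.35; odds rise ~236 percent
print(round(odds_ratio(-0.610), 3))  # 0.543; odds fall ~46 percent
```

A positive b always yields an odds ratio above 1 (odds increase); a negative b yields one below 1 (odds decrease).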
READING TABLES
Logistic regression –
Economic adversity → criminal cooperation
DV: “co-offending”
Regression coefficient.
Positive means IV and DV go
up and down together,
negative means as one rises
the other falls.
Different ways to measure
the main IV’s (each is a
separate independent
variable)
Additional, “control”
independent variables. Each
is measured on a scale or, if it
is a nominal variable (e.g.,
gender, M or F) is coded 0 or
1. The value displayed on the
table is usually “1”, while its
“reference” - the comparison
value - is usually “0”. Here
“male”=1 and its reference,
“female”=0. “White”=1 and
its reference, “non-white”=0.
Holly Nguyen and Jean Marie McGloin, “Does Economic
Adversity Breed Criminal Cooperation? Considering the
Motivation Behind Group Crime,” Criminology (51:4, 2013)
A “model” is a unique
combination of independent variables
Sometimes probabilities are given in a dedicated column; there may be no “asterisks,” or they may be in an unusual place
Asterisks are at
the end of variable
names
Shelley Johnson Listwan, Christopher
J. Sullivan, Robert Agnew, Francis T.
Cullen and Mark Colvin, “The Pains
of Imprisonment Revisited: The
Impact of Strain on Inmate
Recidivism,” Justice Quarterly (30:1,
2013)
Probability
that the null
hypothesis is
true / that the
coefficient
was generated
by chance:
* <.05
** <.01
*** <.001
Sometimes different dependent variables run across the top;
and, sometimes important statistics are left out!
Daniel P. Mears, Joshua C. Cochran, Brian J. Stults, Sarah J. Greenman, Avinash S. Bhati and Mark Greenwald, “The ‘True’ Juvenile Offender: Age Effects and Juvenile Court Sanctioning,” Criminology (52:2, 2014)
No Exp b odds ratios! Authors who use
logistic regression often do not include a
column for this easily interpretable statistic.
But in the text they will nonetheless describe
effects in percentage terms, based on their
(secret) computation of the Exp b’s. Go figure!
Sometimes results are reported for
multiple measures of the dependent variable
Richard B. Felson
and Keri B.
Burchfield,
“Alcohol and the
Risk of Physical
and Sexual
Assault
Victimization,”
Criminology
(42:4, 2004)
Dependent
variable:
victimization
Independent
variables
Sometimes they’re reported for multiple values of a
“control” variable (here it’s neighborhood disadvantage)
Dependent variable
Dependent
variable:
satisfaction
with police
Independent
variables
Yuning Wu, Ivan
Y. Sun and Ruth
A. Triplett, “Race,
Class or
Neighborhood
Context: Which
Matters More in
Measuring
Satisfaction With
Police?,” Justice
Quarterly (26:1,
2009)
And just when you thought you had it “down”…
It’s rare, but sometimes categories of the dependent variable run in rows, and
the independent variable categories run in columns.
Jodi Lane, Susan Turner, Terry Fain and Amber Sehgal,
“Evaluating an Experimental Intensive Juvenile
Probation Program: Supervision and Official
Outcomes,” Crime & Delinquency (51:1, 2005)
Hypothesis: SOCP (intensive supervision) → fewer violations
INTERPRETIVE ISSUES
A caution on hypothesis testing…
• Probability statistics are the most common way to evaluate relationships, but they have been criticized for suggesting misleading results.
• We normally use p values to accept or reject null hypotheses. But the actual meaning is more subtle:
– Formally, a p < .05 means that, if an association between variables was tested an infinite number of times, a test statistic coefficient as large as the one actually obtained (say, an r of .3) would come up less than five times in a hundred if the null hypothesis of no relationship was actually true.
• For our purposes, as long as we keep in mind the inherent sloppiness of social science, and the difficulties of accurately quantifying social science phenomena, it’s sufficient to use p-values to accept or reject null hypotheses.
• We should always be skeptical of findings of “significance,” particularly when very large samples are involved, as even weak relationships will tend to be statistically significant. (See next slide.)
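The formal meaning of p < .05 described above can be illustrated with a small simulation: when the null hypothesis is true (no real difference), a result crossing the 5 percent cutoff still appears about 5 times in 100. A minimal sketch with made-up parameters:

```python
# Sketch: simulate many two-group comparisons where there is NO real
# difference, and count how often the mean difference crosses the
# two-tailed 5% cutoff anyway. All parameters below are illustrative.
import random
random.seed(1)

def false_alarm_rate(trials=2000, n=50, cutoff=1.96):
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]   # group 1, null true
        b = [random.gauss(0, 1) for _ in range(n)]   # group 2, null true
        diff = sum(a) / n - sum(b) / n
        se = (2 / n) ** 0.5            # SE of a difference of means (sd = 1)
        if abs(diff) / se > cutoff:    # beyond the 5% two-tailed cutoff
            hits += 1
    return hits / trials

print(false_alarm_rate())  # hovers around 0.05
```

This is the sense in which one asterisk means "less than five chances in 100 that a coefficient this large was produced by chance."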
Statistical significance v. size of the effect
• Once we are confident that an effect was NOT caused by chance, we need to inspect its magnitude
• Consider this example from an article that investigated the “marriage effect”
– Logistic regression was used to measure the association of disadvantage (coded 0/1) and the probability of arrest (Y/N) under four conditions (not important here)
              Model 1           Model 2           Model 3           Model 4
              b     Sig   SE    b     Sig   SE    b     Sig   SE    b     Sig   SE
Disadvantage  .078  *     .037  .119  NS    .071  .011  NS    .107  .320  ***   .091
Bianca E. Bersani and Elaine Eggleston Doherty, “When the Ties That Bind Unwind: Examining the Enduring and Situational Processes of Change
Behind the Marriage Effect,” Criminology (51:2, 2013)
– Without knowing more, it seems that the association between these two variables is confirmed in model 1 (p < .05) and model 4 (p < .001).
• But just how meaningful are these associations? Logistic regression was used, so we can calculate exp B’s.
– For model 1, the exp B is 1.08, meaning that “disadvantaged” persons are eight percent more likely to have been arrested than non-disadvantaged. That’s a tiny increase.
– For model 4 the exp B climbs to 1.38, meaning disadvantaged persons are 38 percent (a little more than one-third) more likely.
• Since standard error decreases as sample size increases, large samples have a well-known tendency to label even the most trivial relationships as “significant”
• Aside from exp B, r2 is another statistic that can help clue us in on just how meaningful relationships are “in the real world”
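The large-sample caution above follows directly from the arithmetic of the standard error, which shrinks with the square root of the sample size. A minimal sketch with made-up numbers showing the same tiny effect becoming "significant" as n grows:

```python
# Sketch: the same weak effect crosses the significance cutoff once
# the sample is large enough, because SE = sd / sqrt(n) shrinks.
# The effect size and sd below are made-up illustrative values.
from math import sqrt

def z_score(effect, sd, n):
    """z = effect / SE, with SE = sd / sqrt(n)."""
    return effect / (sd / sqrt(n))

effect, sd = 0.05, 1.0  # a very weak relationship (hypothetical)
for n in (100, 1000, 100000):
    z = z_score(effect, sd, n)
    print(n, round(z, 2), "significant" if z > 1.96 else "NS")
```

The effect itself never changes; only the sample size does. That is why significance alone says nothing about how meaningful a relationship is.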
Final exam practice
• The final exam will ask the student to interpret a table. The hypothesis will be provided.
• Students will have to identify the dependent and independent variables
• Students must recognize whether relationships are positive or negative
• Students must recognize whether relationships are statistically significant, and if so, to what extent
• Students must be able to explain the effects described by log-odds ratios (exp b) using percentages
• Students must be able to recognize and interpret how the effects change:
– As one moves across models (different combinations of the independent variables)
– As one moves across different levels of the dependent variable
• For more information about reading tables please refer to the week 14 slide show and its many examples
• IMPORTANT: Tables must be interpreted strictly on the techniques learned in this course. Leave personal opinions behind. For example, if a relationship supports the notion that wealth causes crime, then wealth causes crime!
Sample question and answer on next slide
Hypothesis: Unstructured socializing and other factors → youth violence
1. In which model does Age have the
greatest effect?
Model 1
2. What is its numerical significance?
.001
3. Use words to explain #2
Less than one chance in 1,000 that the
relationship between age and violence is
due to chance
4. Use Odds Ratio (same as Exp b) to
describe the percentage effect of Age on
Violence in Model 1
For each year of age increase, violence
is seventeen percent more likely
5. What happens to Age as it moves from
Model 2 to Model 3? What seems most
responsible?
Age becomes non-significant. Most
likely cause is introduction of variable
Deviant Peers.