RSQUARE, LACKFIT, SELECTION, and interactions
Just as with linear regression, logistic regression allows you to look at the effect of multiple predictors on an outcome.
Consider the following example: 15- and 16-year-old adolescents were asked whether they had ever had sexual intercourse. The outcome of interest is intercourse. The predictors are race (white or black) and gender (male or female).
Example from Agresti, A., Categorical Data Analysis, 2nd ed., 2002.
Race     Gender     Intercourse: Yes     Intercourse: No
White    Male              43                  134
White    Female            26                  149
Black    Male              29                   23
Black    Female            22                   36
The data set intercourse is created with the variables “white” (1 if white, 0 if black), “male” (1 if male, 0 if female), and “intercourse” (1 if yes, 0 if no). We want to examine the odds of having intercourse with race and gender as predictors.
Enter the following code into SAS.
DATA intercourse;
  INPUT white male intercourse count;
  DATALINES;
1 1 1 43
1 1 0 134
1 0 1 26
1 0 0 149
0 1 1 29
0 1 0 23
0 0 1 22
0 0 0 36
;
RUN;
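As a quick sanity check (not part of the original slides), you can rebuild the table above from the data set; PROC FREQ with a WEIGHT statement reproduces the cell counts:

PROC FREQ DATA=intercourse;
  /* count expands each row to its cell frequency */
  WEIGHT count;
  TABLES white*male*intercourse / NOROW NOCOL NOPERCENT;
RUN;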
First, look at the effect of race and gender with no interaction. The SAS code is similar to that for simple logistic regression; one more independent variable has been added to the MODEL statement.
PROC LOGISTIC DATA=intercourse DESCENDING;
  WEIGHT count;
  MODEL intercourse = white male / RSQUARE LACKFIT;
RUN;
• “DESCENDING” models the probability that intercourse = 1 (yes) rather than intercourse = 0 (no).
• “RSQUARE” requests a generalized R² value from SAS; it is interpreted much like the R² from linear regression.
• “LACKFIT” requests the Hosmer and Lemeshow Goodness-of-Fit Test, which tells you whether the model you have created is a good fit for the data.
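A note on “DESCENDING”: an equivalent way to model intercourse = 1 (a sketch, assuming a reasonably recent SAS release) is the EVENT= response option, which names the event level explicitly:

PROC LOGISTIC DATA=intercourse;
  WEIGHT count;
  /* EVENT='1' makes intercourse = 1 the modeled event, replacing DESCENDING */
  MODEL intercourse(EVENT='1') = white male / RSQUARE LACKFIT;
RUN;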
The R² value is 0.9907. This means that 99.07% of the variability in our outcome (intercourse) is explained by including gender and race in our model.
Notice that the race and gender terms are both statistically significant (p < 0.0001 and p = 0.0040, respectively).
The logistic regression model is:
log(odds) = β₀ + β₁(white) + β₂(male)
log(odds) = -0.4555 - 1.3135(white) + 0.6478(male)
The odds of having intercourse are 73.1% lower for whites than for blacks (1 - 0.269, where 0.269 = exp(-1.3135)). The odds of having intercourse are 1.911 times greater for males than for females (1.911 = exp(0.6478)).
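The odds ratios are just the exponentiated coefficients; a throwaway DATA _NULL_ step (a sketch, using the estimates copied from the output above) confirms the arithmetic:

DATA _null_;
  or_white = exp(-1.3135);  /* = 0.269; 1 - 0.269 = 73.1% lower odds */
  or_male  = exp(0.6478);   /* = 1.911 */
  PUT or_white= or_male=;
RUN;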
Suppose you wanted to know the odds of intercourse for black males versus white females:
log(odds) for black males = β₀ + β₁(0) + β₂(1)
log(odds) for white females = β₀ + β₁(1) + β₂(0)
log(OR) = [β₀ + β₂] - [β₀ + β₁] = β₂ - β₁
log(OR) = 0.6478 - (-1.3135) = 1.9613
OR = exp(1.9613) = 7.11
Black males have 7.11 times greater odds of having intercourse than white females.
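Rather than computing β₂ - β₁ by hand, the same odds ratio can be requested from PROC LOGISTIC with a CONTRAST statement (a sketch; the label text is arbitrary). The coefficients below encode β₂ - β₁, and ESTIMATE=EXP exponentiates the result:

PROC LOGISTIC DATA=intercourse DESCENDING;
  WEIGHT count;
  MODEL intercourse = white male;
  /* beta2 - beta1: coefficient +1 on male, -1 on white */
  CONTRAST 'black male vs white female' white -1 male 1 / ESTIMATE=EXP;
RUN;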
The Hosmer and Lemeshow Goodness-of-Fit Test tests the hypotheses:
H₀: the model is a good fit, vs.
Hₐ: the model is NOT a good fit
With this test, we want to FAIL to reject the null hypothesis, because that means there is no evidence that the model fits poorly (this is different from most of the hypothesis testing you have seen).
Look for a p-value > 0.10 in the H-L GOF test; this indicates the model is a good fit.
In this case, the p-value = 0.2419, so we do NOT reject the null hypothesis, and we conclude the model is a good fit.
Let’s consider an interaction between race and gender:
PROC LOGISTIC DATA=intercourse DESCENDING;
  WEIGHT count;
  MODEL intercourse = white male white*male / RSQUARE LACKFIT;
RUN;
We have added a third term to the model: the interaction between race and gender (“white*male”). We did not need to create this variable in the data set; SAS builds the product term directly from the MODEL statement.
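Equivalently, the product term could be built by hand in a DATA step and entered as an ordinary predictor (a sketch; the data set name intercourse2 and variable name white_male are illustrative):

DATA intercourse2;
  SET intercourse;
  white_male = white * male;  /* same values as the white*male effect */
RUN;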
The new R² value is 0.9908, which is barely higher than the R² from the model with only the main effects. Adding the interaction did not help explain more variance in the outcome.
The interaction is not significant (p = 0.8092), so we probably will not want to include it in our model. If it were significant, the model would be:
log(odds) = β₀ + β₁(white) + β₂(male) + β₃(white*male)
log(odds) = -0.4925 - 1.2534(white) + 0.7243(male) - 0.1151(white*male)
The p-value of the Hosmer and Lemeshow GOF Test is 0.2439, essentially the same as for the previous model without the interaction. Therefore, we conclude that the model with just race and gender, without the interaction, is sufficient.
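A common complement to these comparisons, not shown in the slides, is a likelihood ratio test of the interaction: the difference in the "-2 Log L" fit statistics of the two nested models is chi-square with 1 degree of freedom. A sketch, with placeholders for the values you would read off the two Model Fit Statistics tables:

DATA _null_;
  neg2logl_main = .;  /* placeholder: -2 Log L from the main-effects model */
  neg2logl_int  = .;  /* placeholder: -2 Log L from the interaction model  */
  lrt = neg2logl_main - neg2logl_int;
  p   = 1 - PROBCHI(lrt, 1);  /* 1 df: one extra parameter (the interaction) */
  PUT lrt= p=;
RUN;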
Often, if you have multiple predictors and interactions in your model, SAS can systematically select the significant predictors for you using forward selection, backward selection, or stepwise selection.
In forward selection, SAS starts with no predictors in the model. It adds the predictor with the smallest p-value, then adds the most significant of the remaining predictors, and continues in this way until none of the remaining predictors has a p-value less than 0.05.
In backward selection, SAS starts with all of the predictors in the model and eliminates the non-significant predictors one at a time, refitting the model after each elimination. It stops once all the predictors remaining in the model are statistically significant. Stepwise selection combines the two: after each predictor is added, SAS checks whether any predictor already in the model has become non-significant and, if so, removes it.
We will let SAS select a model for us out of the three predictors: white, male, white*male. Type the following code into SAS:
PROC LOGISTIC DATA=intercourse DESCENDING;
  WEIGHT count;
  MODEL intercourse = white male white*male / SELECTION=FORWARD LACKFIT;
RUN;
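For reference (a sketch, not in the original slides), the other strategies only change the SELECTION= option; SLENTRY= and SLSTAY= set the entry and stay significance levels, and 0.05 is the PROC LOGISTIC default for both:

PROC LOGISTIC DATA=intercourse DESCENDING;
  WEIGHT count;
  MODEL intercourse = white male white*male
        / SELECTION=STEPWISE SLENTRY=0.05 SLSTAY=0.05 LACKFIT;
RUN;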
Output from forward selection:
• Step 1: “white” is added to the model.
• Step 2: “male” is added to the model.
• No more predictors meet the significance level for entry.
• Hosmer and Lemeshow GOF Test: the model is a good fit.
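To see what the selected model actually predicts (a sketch; the data set name preds and variable name phat are illustrative), an OUTPUT statement saves the fitted probability of intercourse = 1 for each race/gender combination:

PROC LOGISTIC DATA=intercourse DESCENDING;
  WEIGHT count;
  MODEL intercourse = white male;
  OUTPUT OUT=preds PREDICTED=phat;  /* phat = fitted P(intercourse = 1) */
RUN;
PROC PRINT DATA=preds; RUN;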
You are now familiar with multiple logistic regression and model selection in SAS. If given multiple predictors, you have the tools to find an appropriate model that explains the outcome of interest.