Lecture 2_Multiple regression_Part 1_Oct 6

Advanced Statistical Methods: Continuous Variables
http://statisticalmethods.wordpress.com
Multiple Regression – Part I
tomescu.1@sociology.osu.edu
The Multiple Regression Model
Ŷ = a + b1X1 + b2X2 + ... + biXi
- this equation represents the best prediction of a DV from several
continuous (or dummy) IVs; i.e., it minimizes the squared differences
between Y and Ŷ → least-squares regression
Goal: arrive at a set of regression coefficients (bs) for the IVs that
bring the Ŷ values as close as possible to the observed Y values
Regression coefficients:
- minimize (the sum of squared) deviations between Ŷ and Y;
- optimize the correlation btw. Ŷ and Y for the data set.
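The least-squares fit described above can be sketched in NumPy. This is a minimal illustration with made-up data; the variable names and values are hypothetical, and Y is constructed to follow an exact linear rule so the recovered coefficients are easy to verify:

```python
import numpy as np

# Toy data (hypothetical values, for illustration only): two IVs
X1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
X2 = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
# Y is constructed as Y = 1 + 2*X1 + 0.5*X2 exactly,
# so the fitted a, b1, b2 should recover these values
Y = 1.0 + 2.0 * X1 + 0.5 * X2

# Design matrix: a column of 1s for the intercept a, then the IVs
X = np.column_stack([np.ones_like(X1), X1, X2])

# Least-squares fit: minimizes the sum of squared deviations between Y and Yhat
coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)
a, b1, b2 = coefs
Yhat = X @ coefs
```

With noisy real data the coefficients would not be recovered exactly, but the fit would still minimize the sum of squared deviations.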
Interpretation of Regression Coefficients:
a = the estimated value of Y when all independent (explanatory)
variables are zero (X1,…,Xi = 0).
bi measures the partial effect of Xi on Y;
= effect of one-unit increase in Xi, holding all other independent
variables constant.
The estimated parameters b1, b2, ..., bi are partial regression
coefficients; they differ from the regression coefficients for the bivariate relationships between Y and each explanatory variable.
Three criteria for choosing the number of independent (explanatory) variables:
• (1) Theory
• (2) Parsimony
• (3) Sample size
Common Research Questions
a) Is the multiple correlation between the DV and the IVs
statistically significant?
b) If yes, which IVs in the equation are important, and which are not?
c) Does adding a new IV to the equation improve the prediction of the DV?
d) Is prediction of the DV from one set of IVs better than prediction
from another set of IVs?
Multiple regression also allows for non-linear relationships,
by redefining the IV(s): squaring, cubing, etc. of the original IV
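Adding a squared IV can be sketched as follows; the data are simulated with a known curvilinear rule so the fit is easy to check. Note the model remains linear in the coefficients even though it is non-linear in x:

```python
import numpy as np

# Toy data with a known curvilinear relationship: y = 2 + 1.5x - 0.8x²
x = np.linspace(-3.0, 3.0, 50)
y = 2.0 + 1.5 * x - 0.8 * x**2

# Include x² as an extra IV; this captures the curvature while the
# equation stays linear in the b's, so least squares still applies
X = np.column_stack([np.ones_like(x), x, x**2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
```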
Assumptions
- Random sampling;
- DV = continuous; IV(s) = continuous (or can be treated as such), or dummies;
- Linear relationship between the DV & the IVs (but we can model
non-linear relations);
- Normally distributed characteristics of Y in the population;
- Normality, linearity, and homoskedasticity between the predicted DV scores
(Ŷs) and the errors of prediction (residuals);
- Independence of errors;
- No large outliers
Initial checks
1. Cases-to-IVs Ratio
Rules of thumb (where m = no. of IVs):
N >= 50 + 8m for testing the multiple correlation;
N >= 104 + m for testing individual predictors.
Need higher case-to-IVs ratio when:
the DV is skewed (and we do not transform it);
a small effect size is anticipated;
substantial measurement error is to be expected
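The two rules of thumb above are simple arithmetic; a minimal sketch (m = 3 is an arbitrary example):

```python
# Rule-of-thumb minimum sample sizes for m IVs (m = 3 is an example value)
m = 3
n_multiple = 50 + 8 * m     # for testing the multiple correlation
n_individual = 104 + m      # for testing individual predictors
n_needed = max(n_multiple, n_individual)   # satisfy both rules
```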
2. Screening for outliers among the DV and the IVs
3. Multicollinearity
- occurs when too highly correlated IVs are put in the same regression model
4. Assumptions of normality, linearity, and homoskedasticity between the
predicted DV scores (Ŷs) and the errors of prediction (residuals)
4.a. Multivariate Normality
- each variable & all linear combinations of the variables are normally
distributed;
- if this assumption is met → the residuals of the analysis are normally
distributed & independent
For grouped data: the assumption pertains to the sampling distribution of the
means of the variables;
→ Central Limit Theorem: with a sufficiently large sample size, sampling
distributions are normally distributed regardless of the distribution of the
variables
What to look for (in ungrouped data):
- is each variable normally distributed?
Shape of distribution: skewness & kurtosis. Frequency histograms; expected
normal probability plots; detrended expected normal probability plots
- are the relationships between pairs of variables (a) linear, and (b) homoskedastic
(i.e. the variance of one variable is the same at all values of other variables)?
Homoskedasticity
- for ungrouped data: the variability in scores for one continuous variable is ~
the same at all values of another continuous variable;
- for grouped data: the variability in the DV is expected to be ~ the same at all
levels of the grouping variable
Heteroskedasticity may be caused by:
- non-normality of one of the variables;
- one variable is related to some transformation of the other;
- greater error of measurement at some level of an IV
Residuals Scatter Plots to check if:
4.a. Errors of prediction are normally distributed around each & every
Ŷ
4.b. Residuals have a straight-line relationship with the Ŷs
- If there is a genuine curvilinear relation between an IV and the DV, include the
square of the IV in the model
4.c. The variance of the residuals about Ŷs is ~the same for all
predicted scores (assumption of homoskedasticity)
- heteroskedasticity may occur when:
- some of the variables are skewed, and others are not;
→ may consider transforming the variable(s)
- one IV interacts with another variable that is not part of the equation
5. Errors of prediction are independent of one another
Durbin-Watson statistic = a measure of autocorrelation of errors over the sequence
of cases; if significant, it indicates non-independence of errors
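The Durbin-Watson statistic has a simple closed form, which can be sketched directly (the helper function name is our own; statistical packages such as SPSS or statsmodels report the same quantity):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: the sum of squared successive differences of
    the residuals divided by their sum of squares. Values near 2 suggest
    independent errors; values near 0 or 4 suggest autocorrelation."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
```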
Major Types of Multiple Regression
Standard multiple regression
Sequential (hierarchical) regression
Statistical (stepwise) regression
[Figure: Venn diagram of IVs X1, X2, X3 overlapping the DV; the overlap areas are labeled a–e]
R² = a + b + c + d + e
R² = the squared multiple correlation; it is the proportion of variation in the
DV that is predictable from the best linear combination of the IVs
(i.e. the coefficient of determination).
R = correlation between the observed and predicted Y values (R = r_YŶ)
Adjusted R2
Adjusted R2 = modification of R2 that adjusts for the number of terms
in a model. R2 always increases when a new term is added to a
model, but adjusted R2 increases only if the new term improves the
model more than would be expected by chance.
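The usual adjustment formula can be sketched as a one-line function and checked against the figures in the Model Summary table shown later in these notes (n = 1527 cases; k = 2 IVs in Model 1, k = 5 in Model 2):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R² = 1 - (1 - R²)(n - 1)/(n - k - 1), where n = no. of cases
    and k = no. of IVs; it rises only if a new IV improves the model more
    than would be expected by chance."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
```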
Standard (Simultaneous) Multiple Regression
- all IVs enter the regression equation at once; each one is
assessed as if it had entered the regression after all other IVs had
entered;
- each IV is assigned only the area of its unique contribution;
- the overlapping areas (b & d) contribute to R² but are not assigned
to any of the individual IVs
[Figure: Venn diagram of X1, X2, X3 overlapping the DV; unique areas a, c, e; overlapping areas b, d]
Table 1: Regression of (DV) Assessment of Socialism in 2003 on (IVs) Social
Status, controlling for Gender and Age
Linear regression; DV = scores from 1 to 5

Model I: Effect of Social Status without Controlling for
Lagged Assessment of Socialism

Independent variables   B (unstandardized   Standard   BETA (standardized
                        coefficient)        Error      coefficient)
Gender (Male=1)         -0.044              0.069      -0.023
Age                      0.011**            0.003       0.135
Social Status           -0.207**            0.034      -0.217
Constant                 2.504              0.131

N = 742; Fit statistics: F = 15.5 (df=3); Adjusted R² = 0.06
**p < 0.001; *p < 0.05
Interpretation of beta (standardized) coefficients: for a one standard deviation
increase in X, we get a change of Beta standard deviations in Y.
Since the variables are transformed into z-scores (i.e. standardized), we can assess
their relative impact on the DV (assuming they are uncorrelated with each other)
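The relation between b and beta can be verified numerically: fitting the model on z-scored variables yields the standardized slope, which equals b scaled by the ratio of standard deviations. The data below are simulated (arbitrary seed and slope), purely for illustration:

```python
import numpy as np

# Simulated data (arbitrary seed; the slope 0.3 is a toy value)
rng = np.random.default_rng(1)
x = rng.normal(50.0, 10.0, size=300)
y = 2.0 + 0.3 * x + rng.normal(0.0, 5.0, size=300)

X = np.column_stack([np.ones_like(x), x])
b = np.linalg.lstsq(X, y, rcond=None)[0][1]        # unstandardized slope

# Standardize both variables to z-scores, then refit: the slope is beta
zx = (x - x.mean()) / x.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
Z = np.column_stack([np.ones_like(zx), zx])
beta = np.linalg.lstsq(Z, zy, rcond=None)[0][1]    # standardized slope (beta)

# Identity: beta = b * sd(x) / sd(y)
```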
Sequential (hierarchical) Multiple Regression
- the researcher specifies the order in which IVs are added to the equation;
- each IV (or block of IVs) is assessed in terms of what it adds to the equation
at its own point of entry;
- If X1 is entered 1st, then X2, then X3:
X1 gets credit for a and b;
X2 for c and d;
X3 for e.
- IVs can be added one at a time, or in blocks
[Figure: Venn diagram of X1, X2, X3 overlapping the DV; areas a–e are credited in order of entry]
Model Summary

                                                     Change Statistics
Model   R      R       Adjusted   Std. Error of   R Square   F Change   df1   df2    Sig. F
               Square  R Square   the Estimate    Change                              Change
1       .109a  .012    .011       .80166          .012       9.128      2     1524   .000
2       .200b  .040    .037       .79101          .028       14.772     3     1521   .000

a. Predictors: (Constant), age1998, gender 1998, female=0
b. Predictors: (Constant), age1998, gender 1998, female=0, tertiary 1998 = 1, else = 0, elementary 1998 = 1, else = 0
The regression sum of squares: SS(total) = SS(regression) + SS(residual)
SS(regression) = Σ(Ŷ – Ȳ)² = the portion of variation in Y explained by using
the IVs as predictors;
SS(total) = Σ(Y – Ȳ)²
SS(residual) = Σ(Y – Ŷ)² = the sum of squared errors of prediction
R² = SS(regression) / SS(total)
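The sum-of-squares decomposition can be verified on a small example; the data are toy values chosen for easy arithmetic, and the decomposition holds exactly because Ŷ comes from an OLS fit with an intercept:

```python
import numpy as np

# Toy data: simple regression of Y on one IV (values chosen for easy arithmetic)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.0, 4.0, 5.0, 4.0, 5.0])

X = np.column_stack([np.ones_like(x), x])
coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)
Yhat = X @ coefs

ss_total = np.sum((Y - Y.mean()) ** 2)      # Σ(Y - Ȳ)²
ss_reg   = np.sum((Yhat - Y.mean()) ** 2)   # Σ(Ŷ - Ȳ)²
ss_res   = np.sum((Y - Yhat) ** 2)          # Σ(Y - Ŷ)²
r2 = ss_reg / ss_total
```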
ANOVA

Model                  Sum of Squares   df     Mean Square   F        Sig.
1  Regression          11.732           2      5.866         9.128    .000a
   Residual (error)    979.415          1524   .643
   Total               991.147          1526
2  Regression          39.460           5      7.892         12.613   .000b
   Residual (error)    951.687          1521   .626
   Total               991.147          1526

c. Dependent Variable: eval soc 1998 categories
The regression mean square: MSS(regression) = SS(regression) / df,
where df = k and k = no. of IVs.
The mean square residual (error): MSS(residual) = SS(residual) / df,
where df = n - (k + 1), n = no. of cases and k = no. of IVs.
Hypothesis Testing with (Multiple) Regression
F – test
The null hypothesis for the regression model:
Ho: b1 = b2 = … = bk = 0
• F = MSS(model) / MSS(residual)
The sampling distribution of this statistic is the F-distribution
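As a check, the F for Model 1 can be recomputed from the sums of squares in the ANOVA table above (the small discrepancy with the printed 9.128 is only rounding):

```python
# F statistic for Model 1, recomputed from the ANOVA-table sums of squares
mss_model    = 11.732 / 2        # SS(regression) / df, df = k
mss_residual = 979.415 / 1524    # SS(residual) / df, df = n - (k + 1)
F = mss_model / mss_residual
```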
Coefficients a

                                             Unstandardized   Standardized
                                             Coefficients     Coefficients
Model                                        B       Std.     Beta     t        Sig.
                                                     Error
1  (Constant)                                1.661   .095              17.510   .000
   gender 1998, female=0                     -.010   .041     -.006    -.242    .809
   age1998                                   .008    .002     .109     4.270    .000
2  (Constant)                                1.762   .096              18.330   .000
   Gender (female=0)                         -.007   .041     -.004    -.171    .864
   Age in 1998                               .006    .002     .090     3.330    .001
   Elementary educ 1998 = 1, else = 0        .070    .054     .036     1.282    .200
   tertiary educ 1998 = 1, else = 0          -.223   .052     -.115    -4.258   .000
   Estimated income for 1998 (in z-scores)   -.058   .020     -.077    -2.960   .003

a. Dependent Variable: eval soc 1998 categories
t – test for the effect of each independent variable
The Null Hypothesis for individual IVs
The test of H0: bi = 0 evaluates whether Y and Xi are statistically
dependent, controlling for the other variables in the model.
We use the t statistic:
• t = b / σb
where σb is the standard error of b, estimated from the residual
variance SS(residual) / (n - 2).
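The ratio itself is simple to compute; checking it against the Model 2 row for tertiary education (B = -.223, SE = .052) reproduces the printed t only approximately, because the SPSS table rounds B and SE to three decimals:

```python
def t_statistic(b, se):
    """t = b / (standard error of b); compare against the t distribution
    with n - (k + 1) degrees of freedom."""
    return b / se

# Rough check against the Coefficients table (rounded inputs, so the
# printed t = -4.258 is matched only approximately)
t = t_statistic(-0.223, 0.052)
```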
Assessing the importance of IVs
- if the IVs are uncorrelated with each other: compare the standardized
coefficients (betas); higher absolute values of beta reflect greater
impact;
- if the IVs are correlated with each other: compare the total relation of each
IV with the DV, and of the IVs with each other, using bivariate
correlations; compare the unique contribution of an IV to predicting
the DV, generally assessed through partial or semi-partial
correlations.
In partial correlation (pr), the contribution of the other IVs is taken out of
both the IV and the DV;
In semi-partial correlation (sr), the contribution of the other IVs is taken
out of only the IV → the (squared) sr shows the unique contribution of
the IV to the total variance of the DV
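One convenient way to obtain sr² is as the drop in R² when the IV is removed from the equation. A sketch on simulated data (arbitrary seed and effect sizes; the helper function name is our own):

```python
import numpy as np

def r_squared(iv_cols, y):
    """R² from an OLS fit of y on the given IV columns plus an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(iv_cols))
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coefs
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Simulated data with correlated IVs (arbitrary seed and coefficients)
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.5 * x1 + rng.normal(size=200)     # x2 is correlated with x1
y = x1 + 0.5 * x2 + rng.normal(size=200)

# Squared semi-partial correlation of x2:
# the drop in R² when x2 is deleted from the equation
sr2_x2 = r_squared([x1, x2], y) - r_squared([x1], y)
```

Since the full model's R² can never be below the reduced model's, sr² is always non-negative.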
Assessing the importance of IVs – continued
In standard multiple regression, sr² = the unique contribution of
the IV to R² in that set of IVs
(for an IV, sr² = the amount by which R² is reduced, if that IV is deleted
from the equation)
If IVs are correlated: usually, sum of sri² < R²
- the difference R² - sum of sri² for all IVs = shared variance
(i.e. variance contributed to R² by 2/more variables)
Sequential regression: sri² = the amount of variance added to R² by
each IV at the point it enters the model.
In SPSS output, sri² is the "R Square Change" for each IV in the
"Model Summary" table.