Elaboration
Elaboration extends what we know about an association by testing whether it continues or changes under different conditions, that is, when you introduce an additional variable. This additional variable is sometimes referred to as a control variable, because you are seeing whether the original relationship changes or continues when you control for (hold constant the effects of) the new variable.
When you introduce one control variable, the process is sometimes called first-order partialling. You can continue to add variables, called second-order, third-order, and so on, for more elaborate models, but interpretation can get complex at that point, especially if each of those variables has numerous values or categories. The original bivariate association is the zero-order relationship.
Voting in Election * Race of Respondent Crosstabulation

                                                   Race of Respondent
Voting in Election                             white      black      other      Total
voted          Count                             893        101         38       1032
               % within Race of Respondent    71.2%      62.0%      50.7%      69.2%
did not vote   Count                             337         56         27        420
               % within Race of Respondent    26.9%      34.4%      36.0%      28.2%
not eligible   Count                              19          5         10         34
               % within Race of Respondent     1.5%       3.1%      13.3%       2.3%
refused        Count                               5          1                     6
               % within Race of Respondent      .4%        .6%                   .4%
Total          Count                            1254        163         75       1492
               % within Race of Respondent   100.0%     100.0%     100.0%     100.0%

Chi-Square Tests

                                   Value      df    Asymp. Sig. (2-sided)
Pearson Chi-Square               54.646a       6                     .000
Likelihood Ratio                  33.994       6                     .000
Linear-by-Linear Association      27.663       1                     .000
N of Valid Cases                    1492

a. 4 cells (33.3%) have expected count less than 5. The minimum expected count is .30.
Voting in Election * Race of Respondent * Married? Crosstabulation

Married? = yes
                                                   Race of Respondent
Voting in Election                             white      black      other      Total
voted          Count                             531         39         17        587
               % within Race of Respondent    76.6%      67.2%      42.5%      74.2%
did not vote   Count                             151         17         15        183
               % within Race of Respondent    21.8%      29.3%      37.5%      23.1%
not eligible   Count                               7          2          8         17
               % within Race of Respondent     1.0%       3.4%      20.0%       2.1%
refused        Count                               4                                4
               % within Race of Respondent      .6%                              .5%
Total          Count                             693         58         40        791
               % within Race of Respondent   100.0%     100.0%     100.0%     100.0%

Married? = no
                                                   Race of Respondent
Voting in Election                             white      black      other      Total
voted          Count                             362         61         21        444
               % within Race of Respondent    64.5%      58.7%      60.0%      63.4%
did not vote   Count                             186         39         12        237
               % within Race of Respondent    33.2%      37.5%      34.3%      33.9%
not eligible   Count                              12          3          2         17
               % within Race of Respondent     2.1%       2.9%       5.7%       2.4%
refused        Count                               1          1                     2
               % within Race of Respondent      .2%       1.0%                   .3%
Total          Count                             561        104         35        700
               % within Race of Respondent   100.0%     100.0%     100.0%     100.0%

Chi-Square Tests

Married? = yes                     Value      df    Asymp. Sig. (2-sided)
Pearson Chi-Square               75.921a       6                     .000
Likelihood Ratio                  39.848       6                     .000
Linear-by-Linear Association      34.205       1                     .000
N of Valid Cases                     791

Married? = no                      Value      df    Asymp. Sig. (2-sided)
Pearson Chi-Square                4.865b       6                     .561
Likelihood Ratio                   3.926       6                     .687
Linear-by-Linear Association       2.027       1                     .155
N of Valid Cases                     700

a. 5 cells (41.7%) have expected count less than 5. The minimum expected count is .20.
b. 5 cells (41.7%) have expected count less than 5. The minimum expected count is .10.
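The zero-order and first-order (partial) chi-square tests above can be reproduced from the cell counts with SciPy's chi2_contingency (a sketch, assuming the counts are entered exactly as they appear in the crosstabulations):

```python
# Sketch: zero-order and first-order chi-square tests from the cell counts above.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: voted, did not vote, not eligible, refused; columns: white, black, other
zero_order = np.array([[893, 101, 38],
                       [337,  56, 27],
                       [ 19,   5, 10],
                       [  5,   1,  0]])

married_yes = np.array([[531, 39, 17],
                        [151, 17, 15],
                        [  7,  2,  8],
                        [  4,  0,  0]])

married_no = np.array([[362, 61, 21],
                       [186, 39, 12],
                       [ 12,  3,  2],
                       [  1,  1,  0]])

for label, table in [("zero-order", zero_order),
                     ("married",    married_yes),
                     ("unmarried",  married_no)]:
    chi2, p, df, expected = chi2_contingency(table)
    print(f"{label}: Pearson chi-square = {chi2:.3f}, df = {df}, p = {p:.3f}")
```

Notice that the married layer stays significant while the unmarried layer does not, which is exactly the pattern the elaboration tables report.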
Multiple Correlation
Multiple correlation (R) is based on the Pearson r correlation coefficient and essentially looks at the combined effect of two or more independent variables on the dependent variable. These variables should be interval/ratio measures, dichotomies, or ordinal measures with equal-appearing intervals, and the technique assumes a linear relationship between the independent variables and the dependent variable.
Like r, R is a PRE measure when squared. However, unlike the bivariate r, multiple R cannot be negative, since it represents the combined impact of two or more independent variables; the coefficient therefore does not indicate direction.
Multiple R2 tells us the proportion of the variation in the dependent variable that can be explained by the combined effect of the independent variables.
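As a computational sketch of what R2 means, the following uses fabricated data and NumPy least squares; it shows that the squared correlation between observed and predicted Y equals the proportion of variation explained, 1 - SSresidual/SStotal:

```python
# Illustrative sketch with fabricated data (not real survey data):
# multiple R-squared computed two equivalent ways.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)          # first predictor
x2 = rng.normal(size=n)          # second predictor
y = 2.0 + 0.8 * x1 - 0.5 * x2 + rng.normal(scale=1.5, size=n)

# Least-squares fit of y on a constant, x1, and x2
X = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef

# Way 1: R is the correlation between observed and predicted y; square it
R = np.corrcoef(y, y_hat)[0, 1]
r_squared_1 = R ** 2

# Way 2: proportion of variation explained, 1 - SSresidual / SStotal
r_squared_2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(round(r_squared_1, 4), round(r_squared_2, 4))  # the two agree
```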
Regression
Uncovering which of the independent variables are contributing
more or less to the explanations and predictions of the dependent
variable is accomplished by a widely used technique called linear
regression. It is based on the idea of a straight line which has the
formula
Y = a + bX
Y is the value of the predicted dependent variable, sometimes called the criterion, and in some formulas written as Y' to indicate Y-predicted;
X is the value of the independent variable, or predictor;
a is the constant, that is, the value of Y when X is zero; it is the point on the Y axis where the line crosses when X is zero; and
b is the slope or angle of the line. Since not all the independent variables contribute equally to explaining the dependent variable, b represents the unstandardized weight by which you adjust the value of X: for each one-unit change in X, Y is predicted to go up or down by the amount of b.
Together, a and b define the regression line, which predicts the values of Y, the outcome variable, when you know the values of X, the independent variables. What linear regression analysis does is calculate the constant (a), the coefficient weight for each independent variable (b), and the overall multiple correlation (R). Preferably, intercorrelations among the independent variables are low, so that the unique impact of each predictor can be identified.
This is the formula for a multiple regression line, in which each predictor has its own b weight:
Y' = a + b1X1 + b2X2 + b3X3 + … + bnXn
The information provided in the regression analysis includes the b coefficient for each of the independent variables, along with the overall multiple correlation R and its corresponding R2. Because the variables are typically measured in different units (such as pounds of weight, inches of height, or scores on a test), the b weights are transformed into standardized units for comparison purposes. These are called Beta (β) coefficients or weights, and they are interpreted essentially like correlation coefficients: those furthest from zero are the strongest, and the plus or minus sign indicates direction.
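To make the b-to-Beta conversion concrete, here is a minimal sketch using fabricated data (the variables educ and tv are hypothetical stand-ins, not actual survey measures). Beta equals b multiplied by the ratio of the predictor's standard deviation to the dependent variable's standard deviation, which gives the same result as rerunning the regression on z-scored variables:

```python
# Illustrative sketch with fabricated data: standardized Beta weights
# obtained two equivalent ways.
import numpy as np

rng = np.random.default_rng(1)
n = 300
educ = rng.normal(13, 3, size=n)   # hypothetical "years of school" predictor
tv = rng.normal(3, 2, size=n)      # hypothetical "hours of TV" predictor
y = 18 + 0.4 * educ - 0.2 * tv + rng.normal(scale=4, size=n)

X = np.column_stack([np.ones(n), educ, tv])
b = np.linalg.lstsq(X, y, rcond=None)[0][1:]  # unstandardized b weights

# Way 1: rescale each b by sd(X) / sd(Y)
betas_1 = b * np.array([educ.std(), tv.std()]) / y.std()

# Way 2: regress z-scored Y on z-scored predictors (constant drops out)
z = lambda v: (v - v.mean()) / v.std()
Zx = np.column_stack([z(educ), z(tv)])
betas_2 = np.linalg.lstsq(Zx, z(y), rcond=None)[0]

print(np.round(betas_1, 4), np.round(betas_2, 4))  # identical
```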
Model Summary

Model       R      R Square    Adjusted R Square    Std. Error of the Estimate
1        .294a         .086                 .080                         4.754

a. Predictors: (Constant), Size of Place in 1000s, Hours Per Day Watching TV, Respondent's Sex, Highest Year of School Completed
ANOVAb

Model 1          Sum of Squares     df    Mean Square         F      Sig.
Regression             1299.940      4        324.985    14.381     .000a
Residual              13740.066    608         22.599
Total                 15040.007    612

a. Predictors: (Constant), Size of Place in 1000s, Hours Per Day Watching TV, Respondent's Sex, Highest Year of School Completed
b. Dependent Variable: Age When First Married
Coefficientsa

                                     Unstandardized Coefficients   Standardized
Model 1                              B           Std. Error        Beta            t      Sig.
(Constant)                           22.204           1.094                   20.290     .000
Highest Year of School Completed       .307            .062        .202        4.978     .000
Hours Per Day Watching TV         2.360E-02            .088        .011         .267     .790
Respondent's Sex                     -2.112            .392       -.210       -5.384     .000
Size of Place in 1000s            5.474E-05            .000        .011         .283     .777

a. Dependent Variable: Age When First Married
Sex: 1=Male, 2=Female
In Words: Respondents who marry at younger ages tend to have less education and to be female. Those who marry later tend to have more education and to be male.
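The unstandardized coefficients in the Coefficients table can be plugged straight into the prediction equation. A short sketch; the respondent profile used here (16 years of school, 2 hours of TV, female, place size of 100 thousand) is hypothetical:

```python
# Prediction equation built from the unstandardized B coefficients in the
# Coefficients table above (dependent variable: Age When First Married).
coef = {
    "constant": 22.204,
    "educ":     0.307,      # Highest Year of School Completed
    "tv":       2.360e-02,  # Hours Per Day Watching TV
    "sex":     -2.112,      # Respondent's Sex (1 = male, 2 = female)
    "size":     5.474e-05,  # Size of Place in 1000s
}

def predict_age_first_married(educ, tv, sex, size):
    return (coef["constant"] + coef["educ"] * educ + coef["tv"] * tv
            + coef["sex"] * sex + coef["size"] * size)

# Hypothetical female respondent: 16 years of school, 2 hours of TV per day,
# living in a place of 100 thousand people
print(round(predict_age_first_married(16, 2, 2, 100), 2))  # → 22.94
```

Note how the negative sex coefficient (female coded higher) pushes the predicted age down, matching the "In Words" summary above.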