The Analysis of Covariance
In an analysis of covariance, interval (or ordinal) level variables, referred to as covariates, are
included in an ANOVA model in addition to the factors. In our problems, we will use a single
covariate.
The rationale for adding a covariate is to increase the opportunity to find statistical
significance for the factors. Recall that each independent variable is credited only with the
variance in the dependent variable that it uniquely explains. The covariate accounts for a
portion of the variance in the dependent variable that would otherwise remain unexplained,
leaving less error variance against which the factors are tested. With less unexplained
variance, the chance that a factor will explain a significant portion of the variance increases.
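One way to see why this helps, as a brief sketch in generic ANOVA notation (a standard formula, not taken from the problem materials): the F ratio for a factor divides the factor's mean square by the error mean square,

    F_{factor} = \frac{MS_{factor}}{MS_{error}} = \frac{SS_{factor}/df_{factor}}{SS_{error}/df_{error}}

so when the covariate absorbs part of SS_{error}, the denominator shrinks and the same SS_{factor} produces a larger F, which is more likely to reach statistical significance.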
There are several other reasons why a covariate (also called a concomitant variable or a
control variable) is included in the analysis. In experimental designs, the rationale is usually to
account for the effect of a variable that affects the dependent variable but could not be
controlled in the experimental design. In observational studies, the covariate is used to control
for variables that rival the independent variable of interest.
In order for a covariate to be useful, it should have a reasonable correlation with the
dependent variable. Because correlation measures the strength of a linear relationship,
analysis of covariance carries the additional assumption of a linear relationship between the
dependent variable and the covariate.
In addition, the covariate should not have a significant interaction with the factors. This
implies that the slopes of the regression lines representing the relationship between the
dependent variable and the covariate are similar for all of the cells represented by all
combinations of factors. Thus, an analysis of covariance must also satisfy the additional
assumption of homogeneous regression slopes.
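Stated as a model, in generic two-factor notation rather than the variables used in these problems (a sketch, not a formula taken from the original materials), the assumption says that a single covariate slope b fits every cell, while the diagnostic test fits cell-specific slopes b_{ij} and asks whether they differ:

    Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + b\,(X_{ijk} - \bar{X}) + \varepsilon_{ijk}        (common slope)
    Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + b_{ij}\,(X_{ijk} - \bar{X}) + \varepsilon_{ijk}   (cell-specific slopes)

If the covariate-by-factors interaction (the b_{ij} terms) is not statistically significant, the common-slope model is retained.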
If the assumption of homogeneous regression slopes cannot be satisfied, the model including
the covariate should not be interpreted, because the relationships between the factors and the
dependent variable change with different scores of the covariate. The covariate does not have
the same meaning or play the same role for all cells in the analysis.
Once the assumption of homogeneous regression slopes has been met, the covariate interaction
term is removed from the model, and the model becomes a full factorial model plus the
covariate. The covariate relationship may or may not be interpreted, since it may not be a
predictor of interest. If the covariate is not interpreted, the findings for the analysis of
covariance parallel those of factorial analysis of variance: the analysis must meet the
assumptions of normality and homogeneity of variance, and the main effects for the factors are
not interpreted if a significant interaction is found between the factors.
Level of Measurement Requirement and Sample Size Requirement
In an analysis of covariance, the level of measurement for the independent variables can be
any level that defines groups (dichotomous, nominal, ordinal, or grouped interval) and the
dependent variable and covariate are required to be interval level. If the dependent variable
or covariate is ordinal level, we will follow the common convention of treating ordinal
variables as interval level, but we should note the use of an ordinal variable in the discussion of
our findings.
I have imposed a minimum sample size requirement of 5 cases per cell for these problems. The
cells are the possible combinations of categories for the two factors. If factor one contained
two categories and factor two contained three categories, the total number of cells would be
6 (2 × 3), as shown in the following table:
                              Factor one
                        Category 1     Category 2
Factor two
    Category A          Cell 1         Cell 4
    Category B          Cell 2         Cell 5
    Category C          Cell 3         Cell 6
If both the level of measurement requirement and the sample size requirement are
satisfied, the check box “The level of measurement requirement and the sample size
requirement are satisfied” should be marked. If either of them is not satisfied, the correct
answer to the problem is “Inappropriate application of the statistic.” All other answers should
be unmarked when the answer about level of measurement and sample size is “Inappropriate
application of the statistic.”
The Assumption of Normality
Analysis of covariance assumes that the dependent variable and the covariate are both
normally distributed, but there is general consensus that violations of this assumption do not
seriously affect the probabilities needed for statistical decision making.
The problems evaluate normality based on the criteria that the skewness and kurtosis of the
dependent variable and the covariate fall within the range from -1.0 to +1.0. If both the
dependent variable and the covariate satisfy these criteria for skewness and kurtosis, the
check box “The skewness and kurtosis of both the dependent variable and the covariate satisfy
the assumption of normality” should be marked. If the criteria for normality are not satisfied, the check box
should remain unmarked and we should consider including a statement about the violation of
this assumption in the discussion of our results.
In these problems we will not test transformations or consider removing outliers to improve the
normality of the variables.
The Assumption of Linearity
If the covariate is to improve the analysis, it should be selected because it has a linear
relationship to the dependent variable. We will use the SPSS test of linearity for this
assumption. The test of linearity tests for the presence of a linear relationship and a non-linear
relationship.
Statistical significance for the linear relationship satisfies the assumption. If the relationship is
not linear, we should consider including a statement about the violation of this assumption in
the discussion of our results.
It is very possible to find a statistically significant linear relationship between the covariate
and the dependent variable in the test of linearity, but not find a statistically significant
relationship between the covariate and the dependent variable in the test of between-subjects
effects. In the test of linearity, there are no other independent variables that might be
correlated such that they reduce the unique relationship between the covariate and the dependent
variable. In the test of between-subjects effects, it is likely that the covariate is related to the
factors (else why would it be included), and this relationship reduces the amount of variance
uniquely attributed to the covariate.
The Assumption of Homogeneity of Regression Slopes
This assumption presumes that the relationship between the covariate and the dependent
variable is the same for all combinations of the factors. It is tested as an interaction that
includes the covariate and the factors. If it is found to be statistically significant, the
assumption is violated. When this assumption is violated, we cannot interpret the relationships
between the factors and the dependent variable (the main effects) because the interpretation
changes when the values of the covariate differ. The correct answer to the problem is
“Inappropriate application of the statistic” and we stop the analysis. The check boxes for level
of measurement and sample size, and the assumption of normality should remain marked if
these conditions are satisfied prior to testing for homogeneity of regression slopes.
This is a diagnostic test and the interaction with the covariate should be removed from the
model when the assumption is satisfied and before we interpret the analysis.
The Assumption of Homogeneity of Variance
Analysis of covariance assumes that the variance of the dependent variable is homogeneous
across all of the cells formed by the factors (independent variables). We will use the
significance of Levene’s test for equality of variance as our criteria for satisfying the
assumption, which SPSS provides as part of the output.
Levene’s test is a diagnostic statistic that tests the null hypothesis that the variance is
homogeneous or equal across all cells. The desired outcome, and support for satisfying the
assumption, is to fail to reject the null hypothesis.
If the significance for the Levene test is greater than the alpha for diagnostic statistics, we fail
to reject the null hypothesis and the check box “The assumption of homogeneity of variance is
supported by Levene's test for equality of variances” should be marked. If the criterion for
homogeneity of variance is not satisfied, the check box should remain unmarked.
Analysis of covariance is robust to violations of the assumption of homogeneity of variance
provided the largest group variance is not more than 3 times the smallest group variance.
If we violate this assumption, but the ratio is less than or equal to 3.0, we should consider
including a statement about the violation of this assumption in the discussion of our results.
If we violate this assumption and the ratio of largest to smallest group variance is greater than
3.0, we should not use analysis of covariance for the data for these variables and we mark the
checkbox “Inappropriate application of the statistic.” The check boxes for level of
measurement and sample size, and the assumption of normality should remain marked if these
conditions are satisfied prior to testing for homogeneity of variance.
The Existence of an Interaction Effect
The interpretation of an interaction effect between the factors in analysis of covariance is very
similar to the interpretation in analysis of variance. The only notable difference is that the
means for each combination of factors are referred to as means adjusted for the effect of the
covariate.
Interaction effects represent the effects associated with combinations of the independent
variables that are not detected when each independent variable is analyzed by itself. An
interaction effect is generally understood to contradict the interpretation of the main effects,
such that main effects are not interpreted when there is a statistically significant interaction
effect. The pattern that we might ascribe to a single independent variable changes when we
take into account the pattern exhibited when it is examined jointly with the other independent variable.
If the interaction effect is statistically significant, the check box “The relationship between
income and sex cannot be interpreted independent of self-employment” is marked. If the
interaction effect is not statistically significant, the check box is left blank.
If the interaction effect is statistically significant, none of the statements about main effects
are marked, even though they might be statistically significant.
The problem statement does not include a statement interpreting the interaction effect
because the interpretation is complex. However, the feedback for the problem will contain a
statement about the interaction effect when it is found to be statistically significant.
Interpretation of Main Effects
The interpretation of main effects in analysis of covariance is very similar to the interpretation
of main effects in analysis of variance. The only notable difference is that the means for each
cell are referred to as means adjusted for the effect of the covariate.
Determination of the correctness of statements about main effects is a two-stage process.
First, it is required that the main effect be statistically significant. Second, it is required that
the statement be a correct comparison of the direction of the means, based on either a direct
comparison of the group means when the factor contains two categories, or a post-hoc test
when the factor includes three or more categories.
There are two statements for each main effect. If the main effect is not statistically
significant, neither of the two statements should be marked. If the effect is statistically
significant, the one that is supported by the post-hoc tests should be marked. For these
problems, we will use the Bonferroni pairwise comparison test to determine which pairs of
means are and are not statistically significant.
The problems report the comparisons for the category with the largest mean. It is possible that
it is significantly larger than the means of all of the other categories, or only some of the other
categories. The problem should be answered in terms of the post-hoc comparison in the
statements about main effects. It is possible, but unlikely, that the main effect will be
statistically significant, but the category with the highest mean does not meet the criteria for
statistically significant post-hoc differences.
It is quite likely that there are other statements about post-hoc differences that could
legitimately be made, but these differences are not germane to correctly answering the
question.
If a main effect is statistically significant and both statements about the effect are marked,
zero credit will be given for the answer, since the points will be counted for the correct
answer but deducted for the incorrect answer.
Interpretation of the Covariate
In our problems, the significance of the covariate is not interpreted. It is simply used to explain
a portion of the variance, and it plays that role whether or not it has a statistically significant,
unique relationship to the dependent variable.
Inappropriate application of the statistic
We should not use analysis of covariance if we violate the level of measurement requirement,
the minimum sample size requirement, the assumption of homogeneous regression slopes, or
the assumption of homogeneity of variance when the ratio of largest to smallest group variance
is larger than 3.0.
Solving Analysis of Covariance Problems in SPSS
We will demonstrate the
use of SPSS for an
analysis of covariance
with this problem.
The introductory statement identifies
the variables for the analysis and the
significance levels to use.
Level of Measurement - 1
The first statement in the problem asks about
level of measurement and sample size. In an
analysis of covariance, the level of
measurement for the independent variables
can be any level that defines groups
(dichotomous, nominal, ordinal, or grouped
interval) and the dependent variable and
covariate are required to be interval level.
Level of Measurement - 2
To determine the level of measurement,
we examine the information about
variables in the SPSS data editor,
specifically the values and value labels.
"Sex" [sex] is dichotomous satisfying the requirement for
an independent variable. "General happiness" [happy] is
ordinal satisfying the requirement for an independent
variable. The dependent variable "total family income"
[income98] and the covariate "highest academic degree"
[degree] are both ordinal level. However, we will follow
the common convention of treating ordinal variables as
interval level. This convention should be mentioned in the
discussion of our findings.
Using Univariate General Linear Model for Descriptive Statistics - 1
Select General Linear
Model > Univariate from
the Analyze menu.
To check for compliance with sample
size requirements, we run the univariate
general linear model procedure. This
procedure will give us the correct
number of cases in each cell, taking into
account missing data for each of the
four variables in the analysis.
Using Univariate General Linear Model for Descriptive Statistics - 2
First, move income98
to the Dependent
Variable text box.
Second, move sex
and happy to the
Fixed Factor(s) list
box.
Third, move degree to the Covariate(s) list box.
Fourth, click on the Options button.
Using Univariate General Linear Model for Descriptive Statistics - 3
First, mark the check box for Descriptive statistics.
Second, since this is the
only output we need for now,
click on the Continue button.
Using Univariate General Linear Model for Descriptive Statistics – 4
Click on the OK
button to obtain
the output.
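For readers who prefer syntax to the dialog boxes, a rough equivalent of the choices above is sketched below, using the variable names from this problem (income98, sex, happy, degree). This is an approximation; the syntax SPSS pastes from its own dialogs may include additional default subcommands.

    UNIANOVA income98 BY sex happy WITH degree
      /METHOD=SSTYPE(3)
      /PRINT=DESCRIPTIVE
      /DESIGN=degree sex happy sex*happy.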
Descriptive Statistics from the Univariate General Linear Model - 1
The table of Descriptive
Statistics contains the number
of cases in each cell for the
combination of factors.
The smallest cell in the analysis
had 9 cases. The sample size
requirement of 5 or more cases
per cell is satisfied.
Marking the Statement for the Level of Measurement and Sample Size Requirement
Since we satisfied both the level
of measurement and the sample
size requirements for analysis of
covariance, we mark the first
checkbox for the problem.
The next statement in the problem focuses on
the assumption of normality, using the skewness
and kurtosis criteria that both statistical values
should be between -1.0 and +1.0.
Computing Skewness and Kurtosis to Test for Normality – 1
Skewness and kurtosis are
calculated in several procedures.
We will use Descriptive Statistics.
Select Descriptive Statistics >
Descriptives from the Analyze
menu.
Computing Skewness and Kurtosis to Test for Normality – 2
First, move the dependent
variable, income98, and the
covariate, degree, to the
Variable(s) list box.
Second, click on the
Options button to
specify the statistics
we want computed.
Computing Skewness and Kurtosis to Test for Normality - 3
First, mark the check boxes for Kurtosis and Skewness.
Second, click on the Continue button to close the dialog box.
Computing Skewness and Kurtosis to Test for Normality – 4
Click on the OK
button to obtain
the output.
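The equivalent Descriptives request in syntax form, again as a sketch using this problem's variable names, would look roughly like this:

    DESCRIPTIVES VARIABLES=income98 degree
      /STATISTICS=MEAN STDDEV SKEWNESS KURTOSIS.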
Evaluating the Assumption of Normality
"Total family income" [income98] satisfied the
criteria for a normal distribution. The skewness of
the distribution (-.628) was between -1.0 and
+1.0 and the kurtosis of the distribution (-.248)
was between -1.0 and +1.0.
"Highest academic degree" [degree] satisfied the
criteria for a normal distribution. The skewness of
the distribution (.948) was between -1.0 and
+1.0 and the kurtosis of the distribution (-.051)
was between -1.0 and +1.0.
Marking the Statement for the Assumption of Normality
Since the skewness and kurtosis
were between -1.0 and +1.0 for
both variables, the assumption
of normality is satisfied and the
check box is marked.
The next statement in the problem
focuses on the assumption of linearity.
We will use the test of linearity in SPSS to
determine if the data supports this
assumption.
The Test of Linearity – 1
To compute the test of linearity,
select Compare Means > Means
from the Analyze menu.
This test is similar to the lack of fit
test in the univariate general linear
model procedure, but I prefer this
test because it reports a test for
linearity as well as a test for nonlinearity (lack of fit).
The Test of Linearity – 2
First, move income98
to the Dependent List
box.
Second, move degree
to the Independent List
box.
The Test of Linearity – 3
First, click on the Options button to open the dialog box for selecting the test.
Second, mark the check box for the Test of linearity.
Third, click on the Continue button to close the dialog box.
The Test of Linearity – 4
Click on the OK
button to obtain
the output.
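A syntax sketch of the same request (the Means procedure with the linearity test), assuming the same variables:

    MEANS TABLES=income98 BY degree
      /CELLS=MEAN COUNT STDDEV
      /STATISTICS=ANOVA LINEARITY.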
The Assumption of Linearity
Based on the test of linearity, the relationship
between highest academic degree and total
family income was linear, at a statistically
significant level, F(1, 223) = 84.52, p < .001.
Since the probability of the test statistic is less
than our diagnostic alpha (0.01), the null
hypothesis that there was not a linear
relationship was rejected.
Note: this test also provides information about
the presence of a significant non-linear term,
suggesting that the model may be improved
by including a polynomial term.
Marking the Statement for the Assumption of Linearity
Since the test of linearity
produced statistically significant
evidence of a linear relationship,
the check box for a linear
relationship is marked.
The next statement in the problem
focuses on the assumption of
homogeneous regression slopes. We will
test an interaction term composed of the
factors and covariate to evaluate this
assumption.
Testing the Assumption of Homogeneity of Regression Slopes - 1
We use the Univariate General Linear
Model procedure to test for the
significance of an interaction term in the
model made up of the covariate and the
factors. If this term is statistically
significant, the assumption is violated
and we cannot interpret the model.
Select General Linear Model > Univariate
from the Analyze menu.
Testing the Assumption of Homogeneity of Regression Slopes - 2
First, move the dependent variable, factors, and covariate to the list boxes.
Second, click on the Model button to specify the model with the covariate interaction.
Testing the Assumption of Homogeneity of Regression Slopes - 3
Click on the Custom
option button to specify
the model with the
covariate interaction.
Testing the Assumption of Homogeneity of Regression Slopes – 4
Select the factors and
covariate individually and
move each to the Model list
box by clicking on the right
arrow button.
Testing the Assumption of Homogeneity of Regression Slopes – 5
To create the covariate interaction term, first, select the two factors and the covariate (all three variables).
Second, make sure that Interaction is selected in the drop down menu.
Third, click on the right arrow button in the Build Term(s) panel.
Testing the Assumption of Homogeneity of Regression Slopes – 6
When you click on the
right arrow button, the
interaction term,
degree*happy*sex,
is added to the model.
Click on the Continue
button to close the
dialog box.
Testing the Assumption of Homogeneity of Regression Slopes – 7
Click on the OK
button to obtain
the output.
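In syntax form, the custom model for this diagnostic test corresponds roughly to the following (a sketch; the syntax pasted from the dialog may list the terms in a different order):

    UNIANOVA income98 BY sex happy WITH degree
      /METHOD=SSTYPE(3)
      /DESIGN=sex happy degree sex*happy*degree.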
Testing the Assumption of Homogeneity of Regression Slopes - 6
The probability associated with the interaction term (sex * happy * degree), which tests the
assumption that the regression slopes are homogeneous (F(5, 216) = 1.83, p = .109), is greater
than the alpha for diagnostic tests (0.01). The assumption of homogeneous regression slopes is
satisfied.
If the interaction is statistically significant, we cannot interpret
the model and the correct answer to the problem is
“Inappropriate application of the statistic”.
The covariate interaction term needs to be
removed from the model, and the analysis
should continue with the full factorial model plus the covariate.
Marking the Statement for the Assumption of Homogeneity of Regression Slopes
Since the covariate interaction term
was not statistically significant, we
mark the check box for the
homogeneity of regression slopes.
The next statement requires us to test
the model without the covariate
interaction term and test the final
assumption of homogeneity of group
variances.
Removing the Interaction Term for Testing Homogeneity of Regression Slopes - 1
To remove the model that includes
the interaction term, click on the
Full factorial option button. This
will deactivate the custom model.
Removing the Interaction Term for Testing Homogeneity of Regression Slopes - 2
When the Full factorial option
button is clicked, the model that
we had entered is deactivated.
SPSS will compute the full
factorial model and include any
covariates we have specified.
Click on the Continue
button to close the
dialog box.
Adding Plots that Show Factor Interaction – 1
We will specify the
additional outputs
that we need for
the complete
interpretation.
Next, click on the Plots
button to request the
plots that will assist us in
evaluating an interaction
effect.
Adding Plots that Show Factor Interaction – 2
First, add the plots that show the interaction using each of the factors as the variable on the horizontal axis.
Second, click on the Continue button to close the dialog box.
Specifying Additional Output – 1
Click on the Options
button to specify
additional output.
Specifying Additional Output – 2
First, move all of the
Factors and Factor
Interactions to the Display
Means for list box.
Second, mark the
check box Compare
main effects. This will
compute the post hoc
tests for the main
effects.
Third, select Bonferroni from the
Confidence interval adjustment
drop down menu. This will hold the
error rate for our multiple
comparisons to the specified alpha
error rate.
Specifying Additional Output – 3
Next, mark the check boxes for:
Homogeneity tests, Estimates of
effect size, and Observed power,
in addition to the check box for
descriptive statistics.
Finally, click on the
Continue button to
close the dialog box.
Specifying Additional Output – 4
Having completed all of
the specifications, click on
the OK button to generate
the output.
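Pulling the specifications together, the final analysis corresponds roughly to the syntax below. This is a hedged sketch of the dialog choices above; estimated marginal means are evaluated at the mean of the covariate by default, and the exact syntax SPSS pastes may differ in detail.

    UNIANOVA income98 BY sex happy WITH degree
      /METHOD=SSTYPE(3)
      /PLOT=PROFILE(happy*sex sex*happy)
      /EMMEANS=TABLES(sex) COMPARE ADJ(BONFERRONI)
      /EMMEANS=TABLES(happy) COMPARE ADJ(BONFERRONI)
      /EMMEANS=TABLES(sex*happy)
      /PRINT=DESCRIPTIVE ETASQ OPOWER HOMOGENEITY
      /DESIGN=degree sex happy sex*happy.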
Levene’s Test for Homogeneity of Variance - 1
The probability associated with Levene's test for
equality of variances (F(5, 220) = 2.05, p = .073)
is greater than the alpha for diagnostic tests
(0.01). The assumption of equal variances is
satisfied.
Had we not satisfied the assumption of
homogeneous variances, we would calculate the
ratio of the largest group variance to the smallest
group variance to determine whether or not we
could report the results from this analysis.
Marking the Statement for the Assumption of Homogeneity of Variance
Since the results of the Levene
test supported the assumption of
homogeneity of variance, we
mark the checkbox for the statement.
The next statement in the problem focuses
on the presence of an interaction effect
that would preclude us from interpreting
the main effects in the analysis.
Interaction Effect - 1
The interaction between general
happiness and sex was not statistically
significant, F(2, 219) = 0.177, p = .838,
partial eta squared = .002. The null
hypothesis of no interaction effect is not
rejected.
The relationship between general
happiness and total family income is
not contingent on the category of sex.
Interaction Effect – 2
The profile plots offer some
support for the presence of an
interaction, especially the one
below where general happiness
is plotted on the horizontal axis.
Without the test of
statistical significance, I
would have interpreted
this plot as indicating the
possible presence of an
interaction.
Interaction Effect – 3
Without the test of statistical
significance, I would have
interpreted this plot as
supporting the presence of an
interaction.
Marking the Statement for an Interaction Effect
Since the interaction effect was
not statistically significant, the
checkbox for the statement that
there was an interaction effect is
left unmarked.
The next pair of statements offer potential
interpretations of the main effect for sex. It is
possible that one of them is correct, and it is
possible that neither of them is correct because
the effect is not statistically significant.
Main Effect for Sex
The main effect for total family income by
sex was not statistically significant (F(1,
219) = 0.021, p = .884, partial eta squared
= .0001). The null hypothesis that "the
mean total family income was equal across
all categories of sex" was not rejected.
We could include a statement in our
findings that sex did not have a
relationship to total family income that was
distinct from the effects for highest
academic degree and general happiness.
Marking the Statement for a Main Effect for Sex
Since the main effect was not
statistically significant, the
checkboxes for both of the
statements that interpreted a
main effect are left unmarked.
The next pair of statements offer
potential interpretations of the
main effect for general happiness.
Main Effect for General Happiness
The main effect for total family income by
general happiness was statistically
significant (F(2, 219) = 4.735, p = .010,
partial eta squared = 0.04). The null
hypothesis that "the mean total family
income was equal across all categories of
general happiness" was rejected.
Since the main effect for general
happiness is statistically significant, we
use the Bonferroni test to interpret it.
Interpreting the Main Effect for General Happiness - 1
The group with the highest
mean was respondents who
were generally very happy.
We will interpret the effect
based on this category.
Interpreting the Main Effect for General Happiness – 2
The Bonferroni pairwise comparison of the
difference between very happy and not too happy
(3.30) was statistically significant
(p = .011), but the Bonferroni pairwise
comparison of the difference between very happy
and pretty happy (0.45) was not statistically
significant (p = 1.000). Note that the p of 1.0 is a
consequence of SPSS’s adjustment of the
probability for the Bonferroni comparison.
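As a point of arithmetic (standard Bonferroni adjustment, not a value taken from this output): the adjusted probability is the unadjusted pairwise p multiplied by the number of pairwise comparisons, capped at 1.0,

    p_{adjusted} = \min(1,\; k \times p_{unadjusted}), \quad k = \text{number of pairwise comparisons (here } k = 3\text{)}

which is why a weak difference can be reported with a probability of exactly 1.000.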
Interpreting the Main Effect for General Happiness – 3
For the first comparison, we can state that: based on
the mean total family income adjusted by highest
academic degree, survey respondents who said that
overall they were very happy had significantly higher
total family incomes (M=16.35, SE=0.61) compared
to survey respondents who said that overall they were
not too happy (M=13.05, SE=0.93).
Note that we report the adjusted means and indicate
that we are using adjusted means in our statement.
Interpreting the Main Effect for General Happiness – 4
For the second comparison, we can state that: based
on the mean total family income adjusted by highest
academic degree, survey respondents who said that
overall they were very happy had higher total family
incomes (M=16.35, SE=0.61) compared to survey
respondents who said that overall they were pretty
happy (M=15.90, SE=0.38), but the Bonferroni
pairwise comparison of the difference (0.45) was not
statistically significant (p = 1.000).
Interpreting the Main Effect for General Happiness – 5
Combining these two comparisons, we can say that
the statement that "survey respondents who said
that overall they were very happy had higher total
family incomes than those who said that overall they
were not too happy, but not more than those who
said that overall they were pretty happy" is correct.
Marking the Statement for a Main Effect for General Happiness
The second of the possible
interpretations for general
happiness matches our findings,
so we mark that checkbox.
The problem is
complete.
The Problem Graded in BlackBoard
When this assignment was submitted,
BlackBoard indicated that all marked
answers were correct, and we received
full credit for the question.
Logic Diagram for Two-Factor Analysis of Covariance Problems – 1

SPSS Univariate General Linear Model, requesting descriptive statistics and homogeneity tests
Level of measurement and sample size ok? (cell n ≥ 5)
    No  -> Do not mark check box. Mark: Inappropriate application of the statistic. Stop.
    Yes -> Mark check box for correct answer.
           Ordinal dv? Yes -> Mention convention in discussion of findings.

SPSS Descriptives to get skewness and kurtosis for dependent variable and covariate
Assumption of normality satisfied? (skewness, kurtosis between +/-1)
    No  -> Do not mark check box. Mention violation in discussion of findings.
    Yes -> Mark check box for correct answer.

SPSS Means to get test of linearity
Assumption of linearity satisfied? (Linearity Sig ≤ diagnostic alpha)
    No  -> Do not mark check box. Mention violation in discussion of findings.
    Yes -> continue to Diagram 2.
Logic Diagram for Two-Factor Analysis of Covariance Problems – 2

Mark check box for correct answer (linearity, continued from Diagram 1).

SPSS Univariate General Linear Model, testing model with covariate/factors interaction
Assumption of homogeneity of regression slopes satisfied? (Sig > diagnostic alpha)
    No  -> Do not mark check box. Mark: Inappropriate application of the statistic. Stop.
    Yes -> Mark check box for correct answer.

SPSS Univariate General Linear Model excluding covariate/factors interaction
Assumption of homogeneity of variance ok? (Levene Sig > diagnostic alpha)
    Yes -> Mark check box for correct answer.
    No  -> Do not mark check box.
           Ratio of largest group variance to smallest group variance ≤ 3?
               Yes -> Mention violation in discussion of findings.
               No  -> Mark: Inappropriate application of the statistic. Stop.
Logic Diagram for Two-Factor Analysis of Covariance Problems – 3

Interaction effect statistically significant? (Sig < alpha)
    Yes -> Mark check box for correct answer. Interpret interaction using means for combined cells.
           Do not interpret main effects. Stop.
    No  -> Do not mark check box.

Main effect statistically significant? (Sig < alpha)
    No  -> Do not mark check box.
    Yes -> Relationship for group with largest mean statistically significant and correctly stated?
               No  -> Do not mark check box.
               Yes -> Mark check box for correct answer.
Repeat for other main effects.