Tests of Significance for Regression & Correlation
b* will denote the population parameter of the slope, rather than $\beta$, because beta has another meaning with respect to regression coefficients.
b is normally distributed about b* with a standard error of

$$ s_b = \frac{s_{y \cdot x}}{s_x \sqrt{N - 1}} $$
To test the null hypothesis that b* = 0,

$$ t = \frac{b - b^*}{s_b} = \frac{b}{s_{y \cdot x} / \left( s_x \sqrt{N - 1} \right)} = \frac{b\, s_x \sqrt{N - 1}}{s_{y \cdot x}} $$

distributed with $N - 2$ df.
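As a worked illustration, here is a minimal Python sketch of this slope test. The data are hypothetical, and numpy/scipy are assumed:

```python
import numpy as np
from scipy import stats

# Hypothetical data: a single predictor x and outcome y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.7, 8.3, 8.8])
N = len(x)

# Least-squares slope b and intercept a.
b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

# Standard error of estimate s_{y.x} (residual SD with N-2 df).
s_yx = np.sqrt(np.sum((y - y_hat) ** 2) / (N - 2))

# Standard error of b: s_b = s_{y.x} / (s_x * sqrt(N-1)).
s_x = np.std(x, ddof=1)
s_b = s_yx / (s_x * np.sqrt(N - 1))

# t test of H0: b* = 0, with N-2 df.
t = b / s_b
p = 2 * stats.t.sf(abs(t), df=N - 2)
print(f"b = {b:.3f}, s_b = {s_b:.3f}, t({N - 2}) = {t:.2f}, p = {p:.4f}")
```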
When there is a single predictor variable, testing b is the same as testing whether r differs from zero.
$$ t = \frac{r \sqrt{N - 2}}{\sqrt{1 - r^2}} $$

Again, distributed with $N - 2$ df.
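A minimal sketch of this test of r against zero (hypothetical r and N; scipy is assumed for the p-value):

```python
import numpy as np
from scipy import stats

# Hypothetical values: r from a sample of N pairs.
r, N = 0.45, 30

# t = r * sqrt(N-2) / sqrt(1 - r^2), with N-2 df.
t = r * np.sqrt(N - 2) / np.sqrt(1 - r ** 2)
p = 2 * stats.t.sf(abs(t), df=N - 2)
print(f"t({N - 2}) = {t:.2f}, p = {p:.4f}")
```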
The difference between two independent slopes
(like a t-test for two means)
If the null hypothesis is true (b1* = b2*), then the sampling distribution of b1-b2
is normal with a mean of 0 and a standard error of…
$$ s_{b_1 - b_2} = \sqrt{s_{b_1}^2 + s_{b_2}^2} $$
Thus,

$$ t = \frac{b_1 - b_2}{\sqrt{s_{b_1}^2 + s_{b_2}^2}} $$

and is distributed with $N_1 + N_2 - 4$ df. Because we know

$$ s_b = \frac{s_{y \cdot x}}{s_x \sqrt{N - 1}} $$

therefore

$$ s_{b_1 - b_2} = \sqrt{\frac{s_{y \cdot x_1}^2}{s_{x_1}^2 (N_1 - 1)} + \frac{s_{y \cdot x_2}^2}{s_{x_2}^2 (N_2 - 1)}} $$
Transformed…. (If we assume homogeneity of error variance, then we can pool the two estimates.)
$$ s_{y \cdot x}^2 = \frac{(N_1 - 2)\, s_{y \cdot x_1}^2 + (N_2 - 2)\, s_{y \cdot x_2}^2}{N_1 + N_2 - 4} $$

This pooled estimate can be substituted for the individual error variances in the formula above. Thus,

$$ t = \frac{b_1 - b_2}{s_{b_1 - b_2}} $$

distributed with $N_1 + N_2 - 4$ df.
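Putting the pieces together, here is a sketch of the two-slope comparison with the pooled error variance. The data are hypothetical, and the homogeneity assumption is taken at face value:

```python
import numpy as np
from scipy import stats

def slope_stats(x, y):
    """Return slope b, error variance s^2_{y.x}, variance of x, and N."""
    N = len(x)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s2_yx = np.sum(resid ** 2) / (N - 2)
    return b, s2_yx, np.var(x, ddof=1), N

# Hypothetical samples from two independent groups.
x1 = np.array([1, 2, 3, 4, 5, 6, 7, 8], float)
y1 = np.array([2.0, 3.1, 3.9, 5.2, 6.1, 6.8, 8.2, 9.1])
x2 = np.array([1, 2, 3, 4, 5, 6, 7, 8], float)
y2 = np.array([1.8, 2.2, 3.1, 3.3, 4.2, 4.6, 5.1, 5.9])

b1, s2_1, v1, N1 = slope_stats(x1, y1)
b2, s2_2, v2, N2 = slope_stats(x2, y2)

# Pooled error variance (assumes homogeneity of error variance).
s2_pool = ((N1 - 2) * s2_1 + (N2 - 2) * s2_2) / (N1 + N2 - 4)

# SE of b1 - b2, with the pooled estimate in both terms.
# Note s_x^2 * (N - 1) is the sum of squares of x.
se_diff = np.sqrt(s2_pool / (v1 * (N1 - 1)) + s2_pool / (v2 * (N2 - 1)))

t = (b1 - b2) / se_diff
df = N1 + N2 - 4
p = 2 * stats.t.sf(abs(t), df=df)
print(f"b1 - b2 = {b1 - b2:.3f}, t({df}) = {t:.2f}, p = {p:.4f}")
```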
The difference between independent correlations
When $\rho$ is not equal to zero, the sampling distribution of r is NOT normal; it becomes more and more skewed as $\rho$ approaches 1.0, and the random error is not easily estimated. The same is the case for $r_1 - r_2$.
Fisher’s solution is that we transform r into r':

$$ r' = (0.5) \log_e \frac{1 + r}{1 - r} $$
Then r' is approximately normally distributed (this is sometimes called the z transformation) and the standard error is

$$ s_{r'} = \sqrt{\frac{1}{N - 3}} $$

To test the difference between two independent correlations,

$$ z = \frac{r_1' - r_2'}{\sqrt{\dfrac{1}{N_1 - 3} + \dfrac{1}{N_2 - 3}}} $$

As a z score, the critical value is 1.96 ($\alpha = .05$, two-tailed).
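A sketch of this two-correlation z test (hypothetical values; np.arctanh is an equivalent shortcut for the r-to-r' transformation):

```python
import numpy as np
from scipy import stats

# Hypothetical values: correlations from two independent samples.
r1, N1 = 0.60, 50
r2, N2 = 0.35, 60

# Fisher's transformation: r' = 0.5 * ln((1+r)/(1-r)).
rp1 = 0.5 * np.log((1 + r1) / (1 - r1))   # same as np.arctanh(r1)
rp2 = 0.5 * np.log((1 + r2) / (1 - r2))

# z test with standard error sqrt(1/(N1-3) + 1/(N2-3)).
z = (rp1 - rp2) / np.sqrt(1 / (N1 - 3) + 1 / (N2 - 3))
p = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p:.4f}")   # |z| > 1.96 -> significant at .05
```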
Test for difference between two related correlation coefficients
$$ t = \frac{(r_{12} - r_{13}) \sqrt{(N - 3)(1 + r_{23})}}{\sqrt{2 \left( 1 - r_{12}^2 - r_{13}^2 - r_{23}^2 + 2\, r_{12} r_{13} r_{23} \right)}} $$
Distributed with N-3 df.
Note that to apply this test the correlation $r_{23}$ is required.
Because the two correlations are not independent, we must take this into account (remember the issue with ANOVA). In this case, specifically, we must incorporate a term that reflects the degree to which the two tests themselves are related.
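A sketch of this test for two related correlations (hypothetical values; here variable 1 is shared, so r12 and r13 are the correlations being compared and r23 captures their dependence):

```python
import numpy as np
from scipy import stats

# Hypothetical values from a single sample of N cases.
r12, r13, r23, N = 0.50, 0.30, 0.40, 100

# Determinant of the 3x3 correlation matrix, which reflects
# how strongly the two tests are related through r23.
det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23

t = (r12 - r13) * np.sqrt((N - 3) * (1 + r23)) / np.sqrt(2 * det)
p = 2 * stats.t.sf(abs(t), df=N - 3)
print(f"t({N - 3}) = {t:.2f}, p = {p:.4f}")
```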