Chapter 8 Notes



7-Step Version of Hypothesis Testing (pg. 179-186)

Sometimes, researchers conduct an extra step in hypothesis testing

Objective is to go beyond the decision made about H₀

and say something about the degree to which the sample data is incompatible with the null hypothesis

The reason for doing this is that large sample sizes can produce a statistically significant result, and hence a decision to reject the null hypothesis, even when the true effect is trivially small

In other words, sometimes (especially in the case of a large sample size) a statistically significant result may not have practical significance

Hence, researchers sometimes perform an extra 7th step to see how strong (or how likely to be mistaken) the decision to reject or fail to reject the null hypothesis is

3 possible techniques:

1.

Computing a Strength-of-Association Index

- Similar concept to the coefficient of determination (r²)

- Idea is to compute a measure that indicates the degree to which the sample data were found to be incompatible with the null hypothesis

- The higher the number, the more incompatible the sample data is

- A decimal number between 0 and 1.00

- Three most frequently seen:

1. r² (the coefficient of determination)

2. eta squared (η²)

3. omega squared (ω²)
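As a sketch of one of these indices: for a one-way layout with several groups, eta squared is the ratio of the between-group sum of squares to the total sum of squares. The data below are made up purely for illustration.

```python
# Eta squared (η²) for a one-way layout: SS_between / SS_total.
# The group scores here are hypothetical example data.
groups = [[4, 5, 6], [7, 8, 9], [2, 3, 4]]

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# SS_total: spread of every score around the grand mean
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

# SS_between: spread of group means around the grand mean, weighted by group size
ss_between = sum(
    len(g) * ((sum(g) / len(g)) - grand_mean) ** 2 for g in groups
)

eta_squared = ss_between / ss_total
print(round(eta_squared, 3))  # → 0.864 for this made-up data
```

A value this close to 1.00 would indicate sample data highly incompatible with the null hypothesis; values near 0 indicate little association.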

2.

Estimating the Effect Size

- An effect size is a measurement that is used to estimate the actual degree to which the null hypothesis is false

- Hence, the higher the number, the greater the estimated degree to which the null hypothesis is false

- Usually a decimal value between 0 and 1.0, but it can exceed 1.0 when the discrepancy is very large

- Usually researchers use criteria established by Cohen to interpret strength of the effect size:

.20 – small

.50 – medium

.80 – large

- Sometimes the effect size is symbolized as d
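A minimal sketch of estimating d for two independent groups, using the pooled standard deviation (the scores below are hypothetical):

```python
import math

# Cohen's d for two independent samples: mean difference / pooled SD.
# Both groups are hypothetical example data.
group1 = [10, 12, 11, 14, 13]
group2 = [8, 9, 10, 9, 9]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    # Sample variance with n - 1 in the denominator
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(group1), len(group2)
pooled_sd = math.sqrt(
    ((n1 - 1) * sample_var(group1) + (n2 - 1) * sample_var(group2))
    / (n1 + n2 - 2)
)
d = (mean(group1) - mean(group2)) / pooled_sd
print(round(d, 2))  # → 2.45, "large" by Cohen's criteria above
```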

3.

Computing the Statistical Test’s Power

- This is computing the power of the test used

- The higher the power, the lower is the probability of a Type II Error

- This means a high power gives you a smaller chance of failing to reject a null hypothesis that turns out to be false

- Key idea is:

- A very large sample size may result in statistical significance which causes a Type I Error (reject a null hypothesis which turns out to be true)

- A sample size that is too small may result in too little power (a non-significant result), which causes a Type II Error (fail to reject a null hypothesis which turns out to be false)

- Power is dependent on the sample size

- When analyzing a research study with low power due to small sample size, we should ask “why didn’t the researcher design the study so that it would possess adequate power?”
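To illustrate how power depends on sample size, here is a rough sketch for the simplest case, a one-tailed one-sample z-test with a known standard deviation; the effect size and sample sizes are hypothetical.

```python
from statistics import NormalDist

def z_test_power(effect_size, n, alpha=0.05):
    """Power of a one-tailed one-sample z-test (sketch; assumes known sigma)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha)        # critical value under H0
    shift = effect_size * (n ** 0.5)     # where the test statistic centers under H1
    # Power = probability the statistic exceeds the critical value under H1
    return 1 - z.cdf(z_crit - shift)

# Same medium effect (d = .50), different sample sizes:
print(round(z_test_power(0.5, 10), 2))  # ≈ 0.47 — underpowered
print(round(z_test_power(0.5, 50), 2))  # ≈ 0.97 — adequate power
```

With only 10 cases the study would miss a genuine medium-sized effect more than half the time, which is exactly the "why didn't the researcher design for adequate power?" question above.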

Adjusting for Inflated Type I Error Rate

Recall that the level of significance determines the probability of a Type I Error

If hypothesis testing is repeated many times (e.g. for many dependent variables in a study), then the overall Type I Error rate becomes inflated

An adjustment is needed

One technique – The Bonferroni Adjustment Procedure

Choose an overall Type I Error rate desired (e.g. .05)

Divide this number by the number of tests conducted (e.g. .05/5 = .01)

This divided number will be the individual level of significance for each test (i.e. .01 in this case)

Other adjustment technique: Dunn-Sidak modification
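Both adjustments are simple arithmetic; the numbers below mirror the .05-over-5-tests example above (the Dunn-Šidák per-test level is 1 − (1 − α)^(1/k)):

```python
# Per-test significance levels for k tests at an overall alpha of .05
alpha_overall = 0.05
k = 5  # number of hypothesis tests conducted

# Bonferroni: divide the overall rate by the number of tests
bonferroni_alpha = alpha_overall / k

# Dunn-Šidák: slightly less conservative than Bonferroni
dunn_sidak_alpha = 1 - (1 - alpha_overall) ** (1 / k)

print(bonferroni_alpha)              # → 0.01
print(round(dunn_sidak_alpha, 5))    # → 0.01021
```

Note the Dunn-Šidák per-test level is slightly larger than Bonferroni's, so it rejects a bit more readily while still holding the overall Type I Error rate at .05.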
