Glaucoma Notes


Hypothesis Testing

α = false positive rate; Type I Error → saying there is a difference when no difference exists

= saying there is an effect when really there isn't an effect

α → usually = 0.05 or 5% or 1 in 20

β = false negative rate; Type II Error = false negative error → saying there is no difference or no effect when one actually exists; desired β → 0.2 (20%) or less to obtain desired power = 0.8 (80%) to minimize Type II Error

Hypothesis = statement of cause or purpose

Power = sensitivity of a statistical test: that it correctly rejects a FALSE null hypothesis (H0)

= probability of correctly accepting the alternative hypothesis (H1) – the true hypothesis

= ability of a test to detect an effect, if one exists.

= 1 - β

as power ↑→ β↓

↑ power by using larger significance criterion→ use 0.1 instead of 0.05

↑ power →↓ Type II error – β - (false negative)

→ is a less conservative test

↑power →↑ Type I error – α - (false positive)

↓ Type II Error → power ≥ 80% desired.

To minimize Type II Error → desired power = 0.8 (80%) or greater → desired β → 0.2 (20%) or less

Power does NOT indicate clinical importance if a difference is found.
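
To see how α, β, and power fit together, here is a minimal Python sketch (not from the original notes): simulated trials with no true difference show a false positive rate of about α, and trials with a real effect show the power (1 − β). The sample size, effect size, and simulation count are arbitrary assumptions.

```python
# Illustrative sketch: simulating Type I error (alpha) and power (1 - beta)
# for a two-sample t-test. All names and numbers are assumptions for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sims, alpha = 50, 5000, 0.05

# Under H0 (no true difference): fraction of p < alpha approximates the Type I error rate.
false_pos = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)   # same mean -> no real effect
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_pos += 1
print("Type I error rate ~", false_pos / n_sims)   # close to alpha = 0.05

# Under H1 (true difference of 0.5 SD): fraction of p < alpha approximates the power.
true_pos = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.5, 1.0, n)   # real effect present
    if stats.ttest_ind(a, b).pvalue < alpha:
        true_pos += 1
print("Power ~", true_pos / n_sims)                # this is 1 - beta
```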

Factors affecting Power

1. Statistical significance criterion (α) = statement of how unlikely a positive result has to be for the null hypothesis (no difference; no effect) to be rejected (considered wrong).

o Probabilities = the most frequently used criteria → examples below

- 0.05 = 5% → 1 in 20 → usual cut-off for statistical significance

- 0.01 = 1% → 1 in 100

- 0.001 = 0.1% → 1 in 1,000

o P < 0.05 → Conclude:

- Finding statistically significant

- Null hypothesis → false (reject it)

- Treatment effect found; difference seen not likely due to chance

o P ≥ 0.05 → Conclude:

- Finding not statistically significant

- Null hypothesis → true (accept it)

- Treatment effect not found; difference, if seen, more likely due to chance

o Larger significance criterion → ↑ power → ↓ chance of Type II error but ↑ Type I error

2. Magnitude of the effect of interest in the population

- Threshold in the difference between outcomes that indicates statistical significance, i.e. differences in BP → easier to detect a 15 mmHg difference than a 5 mmHg difference

- Risk of missing a larger difference as statistically significant (Type II error; β) is lower than with a small difference; power is greater for detecting the larger difference

- The smaller the effect used in the power calculation → ↑ # of patients needed for adequate statistical power compared to a larger effect size

- Power calculation effect size → make this the smallest difference that would be clinically important

- Too large an effect size in the power calculation → insufficient power to detect a smaller but clinically important difference as statistically significant → ↑ Type II error risk


3. Sample Size used to detect the effect = # of subjects in a study.

↑ sample size →↑ power

↓ sample size →↓ power

4. Variability of the outcome measure = normal spread (SD = standard deviation) of the outcome data

- i.e. K+ normal values = 3.5-5.1 mEq/L = variability of the outcome

- The value for the SD in the power calculation is obtained from previous studies using this outcome measure

- The larger the SD or spread, the larger the number of patients needed to be enrolled in the study (all four factors are tied together in the sample-size sketch after this list)
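
A rough sample-size sketch (an illustration under assumed numbers, not taken from the notes) showing how the factors above combine, using the standard normal-approximation formula for comparing two means:

```python
# Sketch: sample size per group needed to compare two means, tying together
# alpha (factor 1), effect size (factor 2), and SD (factor 4), with sample
# size (factor 3) as the output for a chosen power.
# Formula: n = 2 * (SD * (z_{1-alpha/2} + z_{power}) / delta)^2.
# All numbers below are illustrative assumptions, not values from the notes.
import math
from scipy.stats import norm

alpha = 0.05    # significance criterion (two-sided)
power = 0.80    # desired power = 1 - beta
delta = 5.0     # smallest clinically important difference (e.g. 5 mmHg BP)
sd    = 12.0    # SD of the outcome, ideally taken from prior studies

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_power = norm.ppf(power)           # ~0.84

n_per_group = 2 * (sd * (z_alpha + z_power) / delta) ** 2
print(math.ceil(n_per_group), "patients per group")   # ~91 with these numbers
```

Because n scales with (SD/difference)², halving the difference to detect or doubling the SD roughly quadruples the required sample size, which is the notes' point about small effect sizes and high variability.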

Statistical Significance

Study findings not likely due to chance.

Finding is statistically significant when P < 0.05
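
As a concrete illustration of the P < 0.05 rule, a short Python sketch comparing two invented groups of blood-pressure readings (the data are made up, not from the notes):

```python
# Sketch of the decision rule: call the finding statistically significant
# when p < 0.05 (reject the null hypothesis). The data below are invented.
from scipy import stats

control   = [142, 151, 138, 160, 155, 149, 146, 158, 152, 147]   # SBP, mmHg
treatment = [131, 140, 128, 145, 139, 135, 130, 143, 137, 133]

p = stats.ttest_ind(treatment, control).pvalue
verdict = "statistically significant" if p < 0.05 else "not statistically significant"
print(f"p = {p:.4f} -> {verdict}")
```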

Clinical Significance

MUST have STATISTICAL SIGNIFICANCE to have CLINICAL SIGNIFICANCE.

If a difference is large enough to be clinically significant but not statistically significant → consider a Type II error → false negative error → saying there was no difference or effect → error due to insufficient power and too small a sample size.

Risk, Risk Reduction

= extent to which a treatment might reduce the likelihood of an adverse event (ADR)

Odds = # times event occurs / # times event does not occur

1. Odds Ratio = OR = odds of event occurring in treatment group / odds of event occurring in control group

OR < 1 → odds of event occurring in Tx group < odds in control group

OR = 1 → odds of event occurring in Tx group = odds in control group

OR > 1 → odds of event occurring in Tx group > odds in control group

o i.e. 30 pts. of 150 receiving Tx developed the event; 50 pts. of 200 in control developed the event

- OR = [30/(150-30)] / [50/(200-50)] = (30/120) / (50/150) = 0.25/0.333 = 0.75

- Since OR < 1 → odds of experiencing the event are lower with treatment than with control

- The odds of the event occurring with treatment are 75% of the odds with control (carried through all five measures in the sketch after item 5)

2. Relative Risk = RR = [# events in Tx grp. / total # of persons in Tx grp.] / [# events in control grp. / total # of persons in control grp.]

o i.e. 30 pts. of 150 receiving Tx developed the event; 50 pts. of 200 in control developed the event

- RR = (30/150) / (50/200) = 0.2/0.25 = 0.8

- RR < 1 → risk of developing the event in the Tx group is less than in the control group → risk of the event occurring in the Tx group is 80% of the risk in the control group

3. Relative Risk Reduction = RRR = extent of reduction in relative risk = 1 - RR

o RRR = (Control rate - Tx rate) / Control rate

- RRR = 1 - 0.8 = 0.2 (= (0.25 - 0.20) / 0.25)

4. Absolute Risk Reduction = ARR = % event rate with one Tx - % event rate in the other Tx (or control) group

o ARR = 0.25 - 0.20 = 0.05 (5%)

5. Number Needed to Treat = NNT = 1/ARR → # of patients needed to be treated with the new drug to prevent one additional event compared with the control

o NNT = 1/0.05 = 20
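
The worked example above (30 of 150 events on treatment vs. 50 of 200 on control) can be carried through all five measures in one short Python sketch; only the event counts and group sizes come from the notes.

```python
# OR, RR, RRR, ARR, and NNT from the notes' example:
# 30/150 events with treatment vs. 50/200 events with control.
events_tx,  n_tx  = 30, 150
events_ctl, n_ctl = 50, 200

odds_tx  = events_tx  / (n_tx  - events_tx)    # 30/120 = 0.25
odds_ctl = events_ctl / (n_ctl - events_ctl)   # 50/150 ≈ 0.333
OR = odds_tx / odds_ctl                        # 0.75

risk_tx  = events_tx  / n_tx                   # 0.20
risk_ctl = events_ctl / n_ctl                  # 0.25
RR  = risk_tx / risk_ctl                       # 0.80
RRR = 1 - RR                                   # 0.20 = (0.25 - 0.20) / 0.25
ARR = risk_ctl - risk_tx                       # 0.05
NNT = 1 / ARR                                  # 20 patients treated to prevent one event

print(f"OR={OR:.2f}  RR={RR:.2f}  RRR={RRR:.2f}  ARR={ARR:.2f}  NNT={NNT:.0f}")
```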
