STAT 310 Final Exam Review Topics

One-way ANOVA
- One factor that researchers manipulate to see its effect on the response
- H0: the group/treatment means are all equal
- Ha: at least two of the group means differ
- Assumptions:
  o Independent observations – random assignment
  o Normal distribution – bell-shaped, symmetric histogram; points fall on the reference line in a normal quantile plot
  o Constant variance – no fan/wedge shape
- Statistical model: y_ij = μ + τ_i + ε_ij (code sketch below, after the regression topics)
- Predicting

Randomized Complete Block Design
- Blocks are created to reduce unwanted variability – a source of variation whose effects you are not interested in measuring but that you can control
- One factor that researchers manipulate to see its effect on the response
- Assumptions:
  o Normality
  o Constant variance
  o No interaction between treatment and block – assumed to hold without actually checking it
- Statistical model: y_ij = μ + τ_i + β_j + ε_ij
- Predicting

Two-way ANOVA (Factorial Treatment Design)
- Two factors that researchers manipulate to see their effects on the response
- Statistical model: y_ijk = μ + α_i + β_j + (αβ)_ij + ε_ijk (code sketch below)
- Analysis:
  o First check whether the interaction is significant
    - If yes, look at simple effects
    - If no, look at main effects
  o Look at main effects (if needed) to determine whether each is significant or not
- Assumptions:
  o Normality
  o Constant variance
  o Independent observations
- Predicting

Two-way ANOVA in a Randomized Complete Block Design
- Two factors that researchers manipulate to see their effects on the response; blocks are also created to reduce unwanted variability
- Statistical model: y_ijk = μ + α_i + β_j + (αβ)_ij + ρ_k + ε_ijk
- Assumptions:
  o Normality
  o Constant variance
  o Independent observations
  o No treatment-by-block interaction
- Analysis:
  o First check whether the interaction is significant
    - If yes, look at simple effects
    - If no, look at main effects
  o Look at main effects (if needed) to determine whether each is significant or not
- Predicting

Two Numeric Variables
- Correlation
  o -1 ≤ r ≤ 1
  o Strongest linear relationship: r near ±1
  o Weakest linear relationship: r near 0
- Simple Linear Regression (code sketch below)
  o Fitted regression equation from JMP output
  o Testing whether the regression is useful
  o Interpreting the intercept and slope
  o Interpreting/identifying R²
  o Assumptions:
    - Linearity
    - Constant variance
    - Independent observations
    - Normality
    - Outliers
  o Predicting

Multiple Numeric Variables
- Correlation matrix
- Scatterplot matrix
- Multiple Linear Regression
  o Fitted regression equation from JMP output
  o Testing whether the regression is useful
  o Interpreting the intercept and slope coefficients
  o Assumptions:
    - Linearity
    - Constant variance
    - Independence
    - Normality
    - Outliers
  o Backward elimination
- ANOCOVA
  o Used when you want to adjust for particular quantitative variables
  o Testing the unequal-slopes model (interaction term present)
  o Testing the equal-slopes model (no interaction term)
  o Estimated regression equations from JMP output
  o Predicting
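The review items above describe reading these analyses off JMP output. As an illustrative cross-check only, here is a minimal one-way ANOVA sketch in Python with SciPy (not the course software); the three groups and every response value are invented for the example.

```python
from scipy import stats

# Hypothetical response values for three treatment groups
group_a = [23.1, 25.4, 24.8, 26.0, 25.1]
group_b = [28.2, 27.5, 29.1, 30.0, 28.8]
group_c = [24.0, 23.5, 25.2, 24.6, 23.9]

# H0: all group means are equal; Ha: at least two group means differ
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Rough assumption checks: constant variance (Levene) and normality of the
# pooled residuals (Shapiro-Wilk)
print(stats.levene(group_a, group_b, group_c))
resid = [v - sum(g) / len(g) for g in (group_a, group_b, group_c) for v in g]
print(stats.shapiro(resid))
```

A small p-value means concluding, in terms of Ha, that at least two of the group means differ.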
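In the same spirit, a sketch of the two-way (factorial) ANOVA workflow outlined above – check the interaction first, then fall back to main effects – using the statsmodels formula interface. The data frame and the factor names a and b are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2x2 factorial data with replication
df = pd.DataFrame({
    "y": [12.1, 13.4, 11.8, 14.2, 15.1, 14.8, 16.0, 15.5,
          12.9, 13.1, 14.7, 15.2, 16.4, 15.9, 17.1, 16.6],
    "a": (["low"] * 4 + ["high"] * 4) * 2,
    "b": ["x"] * 8 + ["z"] * 8,
})

# Fit the factorial model y = mu + alpha_i + beta_j + (alpha*beta)_ij + error
model = ols("y ~ C(a) * C(b)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# If the C(a):C(b) interaction row is significant, compare simple effects
# (levels of a within each level of b); otherwise interpret the main-effect
# rows for C(a) and C(b) directly.
```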
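And a minimal simple linear regression sketch, again in Python rather than JMP, covering the items listed under Simple Linear Regression above: the fitted intercept and slope, R², the overall usefulness test, and a prediction. The x and y values are made up.

```python
import numpy as np
import statsmodels.api as sm

# Invented example data: one predictor x and one response y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.3, 2.9, 3.8, 4.1, 5.2, 5.8, 6.4, 7.1])

X = sm.add_constant(x)            # adds the intercept column
fit = sm.OLS(y, X).fit()

print(fit.params)                 # [intercept, slope] estimates
print(fit.rsquared)               # R^2: share of variation in y explained by x
print(fit.f_pvalue)               # overall F test: is the regression useful?

# Predict the response at x = 4.5 using the fitted equation
print(fit.predict([[1.0, 4.5]]))  # first entry is the intercept (constant) term
```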
Goodness of Fit Tests
- One categorical variable with 2 outcomes
  o H0: p = 0.30; Ha: p ≠ 0.30
  o Expected counts ≥ 5
  o Chi-square test
  o Test statistic
  o Make the conclusion in terms of Ha
- One categorical variable with more than 2 outcomes (code sketch at the end of this sheet)
  o H0: pA = 0.40, pB = 0.20, pC = 0.10, pD = 0.30; Ha: two or more differ
  o Expected counts ≥ 5
  o Chi-square test
  o Test statistic
  o Make the conclusion in terms of Ha

Comparing Two Proportions
- Chi-square test (code sketch at the end of this sheet)
  o H0: pA = pB; Ha: pA ≠ pB
  o Expected counts ≥ 5
  o Test statistic
  o Make the conclusion in terms of Ha
- Fisher's Exact Test
  o H0: pA ≤ pB, Ha: pA > pB  or  H0: pA ≥ pB, Ha: pA < pB
  o No expected-count requirement
  o No test statistic
  o Make the conclusion in terms of Ha

Chi-square Test of Independence
- Used when at least one of the two categorical variables has more than two outcomes
- H0: the two variables are independent (i.e., no relationship)
- Ha: the two variables are not independent (i.e., a relationship exists)
- Expected counts ≥ 5
- Chi-square test (code sketch below)
- Test statistic
- Make the conclusion in terms of Ha
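A sketch of the chi-square goodness-of-fit test for one categorical variable with more than two outcomes, using the hypothesized proportions from the example above (pA = 0.40, pB = 0.20, pC = 0.10, pD = 0.30). The observed counts and the use of SciPy in place of JMP are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

# Invented observed counts for outcomes A, B, C, D (n = 100)
observed = np.array([52, 21, 8, 19])
hypothesized = np.array([0.40, 0.20, 0.10, 0.30])   # H0 proportions
expected = hypothesized * observed.sum()

print(expected)            # condition: every expected count should be >= 5

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# Small p-value: conclude, in terms of Ha, that two or more of the
# proportions differ from the hypothesized values.
```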
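A sketch comparing two proportions from a 2x2 table, as in the Comparing Two Proportions section: the chi-square test when every expected count is at least 5, and Fisher's exact test, which needs no expected-count condition. The table counts are hypothetical and SciPy again stands in for JMP.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table of counts
#                  success  failure
table = np.array([[18, 32],   # group A
                  [30, 20]])  # group B

# Chi-square test of H0: pA = pB vs Ha: pA != pB
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(expected)                        # every expected count should be >= 5
print(f"chi-square p = {p_chi:.4f}")

# Fisher's exact test: no expected-count requirement and no chi-square-style
# test statistic; one-sided alternatives match Ha: pA > pB or Ha: pA < pB
odds_ratio, p_fisher = stats.fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact p = {p_fisher:.4f}")
```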
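Finally, a sketch of the chi-square test of independence for a table in which one of the variables has more than two outcomes (a hypothetical 3x2 table), again with SciPy rather than JMP.

```python
import numpy as np
from scipy import stats

# Hypothetical 3x2 table: one variable with three levels, one with two
table = np.array([[25, 15],
                  [18, 22],
                  [10, 30]])

chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(expected)   # condition: every expected count >= 5
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")

# Small p-value: conclude the two variables are not independent (a
# relationship exists); otherwise there is no evidence of a relationship.
```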