Hypothesis Testing for Proportions; Statistical Significance vs. Practical Significance

Outline
1. Hypothesis Testing for Proportions
2. Statistical Significance vs. Practical Significance

Hypothesis Testing for Proportions

Hypotheses about Proportions

For qualitative data, just as for quantitative data, we phrase our questions in terms of null and alternative hypotheses.

Example. If a coin doesn't come up heads half the time, you'll call it unfair.
The population is all flips of the coin, ever. The population proportion p is the probability that the coin comes up heads.
Our hypotheses are
  H0: p = 0.50
  HA: p ≠ 0.50.
We'll take a sample by flipping the coin some number of times and counting how often it did come up heads. The percentage of times it came up heads in the sample is p̂, the sample proportion.
Sometimes it's easier to think about HA first.

Example. If you are confident candidate Smith has more than 50% of the vote, you'll predict him to win the election.
The population is all voters. The population proportion p is the percentage of the population that will vote for Smith.
Our hypotheses are
  HA: p > 0.50
  H0: p ≤ 0.50.
We'll take a random sample of some voters and ask them for whom they'll vote. The percentage of the sample that says they'll vote for Smith is p̂, the sample proportion.
Recall: Hypothesis Testing for Means

When you're doing a hypothesis test,
1. Formulate your hypotheses H0 and HA.
2. DRAW A PICTURE, and decide whether you act if x̄ is too small or too large, you act only if x̄ is too small, or you act only if x̄ is too large.
3. Use α to find the boundary(ies) between rejection and nonrejection.
4. Decide which t-curve you need (or z-curve if n ≥ 30).
5. Convert x̄ into a t-score and see into which region it falls.

Some correspondences

This…                        corresponds to…
µ      (parameter)           p
µ0     (target parameter)    p0
x̄      (statistic)           p̂
σ/√n   (standard error)      √(pq/n)
(Recall that q = 1 − p.)

Strategy. Our strategy will be to mimic what we did before, using these correspondences.
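To make the correspondence concrete, here is a minimal Python sketch (not from the slides) that computes the two standard-error formulas side by side; the particular numbers are made-up illustrative values.

```python
import math

# Minimal sketch: the two standard-error formulas side by side,
# with hypothetical illustrative numbers.

# Means: the standard error of x-bar is sigma / sqrt(n).
sigma, n_mean = 2.5, 40                  # hypothetical sd and sample size
se_mean = sigma / math.sqrt(n_mean)

# Proportions: the standard error of p-hat is sqrt(p * q / n), with q = 1 - p.
p, n_prop = 0.30, 200                    # hypothetical proportion and sample size
se_prop = math.sqrt(p * (1 - p) / n_prop)

print(f"SE of the sample mean:       {se_mean:.4f}")   # 0.3953
print(f"SE of the sample proportion: {se_prop:.4f}")   # 0.0324
```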
Rejection and Non-Rejection Ranges

[Figure: a number line centered at p0, with a nonrejection region around p0, rejection regions farther out, and the boundaries marked "?".]

If p̂ falls far away from p0, we reject H0 and accept HA. If p̂ falls close to p0, however, we don't reject H0.
The question, then, is where exactly to draw the boundaries between the rejection region and the nonrejection region.

Probabilities

                        Reality
                    H0          HA
We think   H0     1 − α         β
           HA       α         1 − β

Recall that α is the probability that we think HA, given that H0 is really true. In conditional probability terms, α = Pr(we think HA | H0). So in order to work with α, we presume H0 is true.

Relating α to the rejection region

Suppose H0 is true, so that p really equals p0. Then p̂ is approximately normally distributed (a z-curve) with mean p0 and standard error √(p0·q0/n).

[Figure: a z-curve centered at p0, with the nonrejection region (area 1 − α) in the middle and the rejection region (total area α) in the tails.]

Suppose we set our rejection region and nonrejection region as shown. Then the shaded areas are the probabilities of rejection and nonrejection. We want to set the regions so that α is a specific (small) number.
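As a quick sketch of how α determines the cutoffs, the following Python snippet (assuming SciPy is available) looks up the boundary z-values for a two-sided and a one-sided test; it reproduces the ±1.96 and 1.28 cutoffs used in the examples below.

```python
from scipy.stats import norm  # assumes SciPy is installed

# Two-sided test at alpha = 0.05: split alpha between the two tails.
alpha = 0.05
cutoff_two_sided = norm.ppf(1 - alpha / 2)      # about 1.96
print(f"Reject H0 if z < {-cutoff_two_sided:.2f} or z > {cutoff_two_sided:.2f}")

# One-sided (right-tailed) test at alpha = 0.10: all of alpha in one tail.
alpha = 0.10
cutoff_one_sided = norm.ppf(1 - alpha)          # about 1.28
print(f"Reject H0 if z > {cutoff_one_sided:.2f}")
```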
Example: Fairness of a Die

Example. You want to test whether a die is fair, so you roll it 100 times. 25 of those times it turns up a particular face. Can you conclude that the die is unfair? Use α = 0.05.

Solution. Our hypotheses are
  H0: p = 1/6
  HA: p ≠ 1/6.
So our "target" proportion is p0 = 1/6. First, let's figure out our rejection regions.

1. We want α = 0.05.
2. Since n = 100, we draw the z-curve.
3. We draw the rejection and nonrejection regions, and label them with their probabilities: 0.95 in the middle, 0.05 total in the two tails.
4. Then the area of the left tail is 0.025 (and, by symmetry, so is the right tail's).
5. The z-table tells us the left tail ends at −1.96, so the nonrejection region runs from −1.96 to 1.96.
6. We got 25 rolls showing that face out of 100, so p̂ = 25/100 = 0.25.
7. Now p0 = 1/6 ≈ 0.167, so q0 = 5/6 ≈ 0.833; also, n = 100. So the standard error is √(p0·q0/n) = √((0.167)(0.833)/100) ≈ 0.037.
8. We calculate the z-score of p̂, namely (p̂ − p0)/√(p0·q0/n) = (0.25 − 0.167)/0.037 ≈ 2.24.
9. Since this z-score lies in the rejection region, we reject H0 and accept HA. We believe the die is unfair.
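For readers who want to check the arithmetic, here is a minimal Python sketch of the same test (assuming SciPy is available for the critical value); it reproduces z ≈ 2.24 and the rejection decision.

```python
import math
from scipy.stats import norm  # assumes SciPy is installed

# Die-fairness example: 25 of 100 rolls showed the face in question.
n, successes = 100, 25
p0 = 1 / 6
q0 = 1 - p0
alpha = 0.05

p_hat = successes / n                    # 0.25
se = math.sqrt(p0 * q0 / n)              # about 0.037
z = (p_hat - p0) / se                    # about 2.24
cutoff = norm.ppf(1 - alpha / 2)         # about 1.96 (two-sided test)

print(f"z = {z:.2f}, cutoffs = ±{cutoff:.2f}")
if abs(z) > cutoff:
    print("Reject H0: the die appears to be unfair.")
else:
    print("Do not reject H0.")
```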
Example: Predicting an Election

Example. Your newspaper wants to predict the results of the election between candidates Smith and Jones. You poll 1000 voters at random; 513 say they will vote for Smith, and 487 plan to vote for Jones. Can you conclude that Smith will win the election? Use α = 0.10.

Solution. Our hypotheses are
  HA: p > 0.5
  H0: p ≤ 0.5.
So our "target" proportion is p0 = 0.5. First, let's figure out our rejection regions.

1. We want α = 0.10.
2. We draw the z-curve.
3. We draw the rejection and nonrejection regions, and label them with their probabilities: 0.90 to the left, 0.10 in the right tail.
4. Looking up 0.90 in the z-table, we see the boundary is at 1.28.
6. Our sample had 513 successes out of 1000, so p̂ = 0.513.
7. Now p0 = 0.5, so q0 = 0.5; also, n = 1000. So the standard error is √(p0·q0/n) = √((0.5)(0.5)/1000) ≈ 0.0158.
8. We calculate the z-score of p̂, namely (p̂ − p0)/√(p0·q0/n) = (0.513 − 0.500)/0.0158 ≈ 0.82.
9. Since this z-score lies in the nonrejection region, we do not have enough evidence to reject H0. The race is too close to call.
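The one-sided election test can be sketched the same way in Python (again assuming SciPy); it reproduces z ≈ 0.82, which falls short of the 1.28 cutoff.

```python
import math
from scipy.stats import norm  # assumes SciPy is installed

# Election example: 513 of 1000 sampled voters favor Smith.
n, successes = 1000, 513
p0 = 0.5
q0 = 1 - p0
alpha = 0.10

p_hat = successes / n                    # 0.513
se = math.sqrt(p0 * q0 / n)              # about 0.0158
z = (p_hat - p0) / se                    # about 0.82
cutoff = norm.ppf(1 - alpha)             # about 1.28 (one-sided, right tail)

print(f"z = {z:.2f}, cutoff = {cutoff:.2f}")
if z > cutoff:
    print("Reject H0: predict a Smith win.")
else:
    print("Do not reject H0: the race is too close to call.")
```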
Summary

Do hypothesis tests for proportions as you did for means.
Only do these for large sample sizes, using the z-table. (Different methods are needed for small sample sizes.)
Use √(p0·q0/n) for the standard error.

Important. For this test to work, both np and nq should be at least 10.

Statistical Significance vs. Practical Significance

A distinction

An Interesting Example. Your company manufactures children's 12-inch school rulers. You want to make sure the rulers do average exactly 12 inches long.
  H0: µ = 12″
  HA: µ ≠ 12″.
A sample of 35 rulers gives x̄ = 12.01″ and s = 0.005. You choose α = 0.01, which leads to cutoffs of ±2.58.
Your z-score is (x̄ − µ0)/(s/√n) = (12.01 − 12.0)/(0.005/√35) ≈ 11.83.
This lies far, far into the rejection region, so you are more than 99% confident that the true mean µ is not 12″.
Questions
Is this statistically significant? Yes, with more than 99% confidence.
Is this of practical importance? No, not at all.
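A quick Python sketch of the ruler calculation (assuming SciPy) shows how a tiny difference can still be hugely significant when the standard error is small.

```python
import math
from scipy.stats import norm  # assumes SciPy is installed

# Ruler example: n = 35, x-bar = 12.01 in, s = 0.005, mu0 = 12 in.
n = 35
x_bar, s = 12.01, 0.005
mu0 = 12.0
alpha = 0.01

se = s / math.sqrt(n)                    # about 0.00085 -- tiny
z = (x_bar - mu0) / se                   # about 11.83
cutoff = norm.ppf(1 - alpha / 2)         # about 2.58 (two-sided test)

print(f"z = {z:.2f}, cutoffs = ±{cutoff:.2f}")
# |z| is far beyond 2.58, so the result is statistically significant,
# yet the rulers are off by only 0.01 inch -- not practically important.
```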
A distinction

When we reject the null hypothesis using statistics, we are confident the null hypothesis is false. That is not remotely the same question as whether the null hypothesis is close to being true. We can be very, very sure from statistics that µ ≠ µ0, even if µ is very close to µ0.

In short, statistical significance does not necessarily imply practical significance.