A sample of 20 pages was taken without replacement from the 1,591-page phone directory Ameritech Pages Plus Yellow Pages. On each page, the mean area devoted to display ads was measured (a display ad is a large block of multicolored illustrations, maps, and text). The data (in square millimeters) are shown below:
0 260 356 403 536 0 268 369 428 536
268 396 469 536 162 338 403 536 536 130
(a) Construct a 95 percent confidence interval for the true mean. (b) Why might normality be an issue here? (c) What sample size would be needed to obtain an error of ±10 square millimeters with 99 percent confidence? (d) If this is not a reasonable requirement, suggest one that is. (Data are from a project by MBA student Daniel R. Dalach.)
a) Construct a 95 percent confidence interval for the true mean.
Because the population variance is unknown and the sample size is small (n = 20), I'll use the t-distribution. Excel MegaStat computations are shown below:
95% confidence level
346.5 mean
170.38 std. dev.
20 n
2.093 t (df = 19)
79.740 half-width
426.240 upper confidence limit
266.760 lower confidence limit
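As a cross-check on the MegaStat output, here is a minimal Python sketch (assuming Python with scipy is available; this is my own verification, not part of the original MegaStat work) that computes the same t-based interval from the 20 data values:

from math import sqrt
from statistics import mean, stdev
from scipy import stats

# Display-ad areas (square millimeters) from the 20 sampled pages
data = [0, 260, 356, 403, 536, 0, 268, 369, 428, 536,
        268, 396, 469, 536, 162, 338, 403, 536, 536, 130]

n = len(data)
xbar = mean(data)                        # 346.5
s = stdev(data)                          # about 170.38 (sample standard deviation)
t_crit = stats.t.ppf(0.975, df=n - 1)    # about 2.093 for df = 19
half_width = t_crit * s / sqrt(n)        # about 79.74

print(f"{xbar - half_width:.3f} to {xbar + half_width:.3f}")   # about 266.760 to 426.240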
So, the 95 percent confidence interval is between 266.760 and 426.240.
b) Why might normality be an issue here?
Normality could be a problem because there are extreme values (the 0's) in the data set, and with only 20 observations such outliers can affect the accuracy of the confidence interval.
c) What sample size would be needed to obtain an error of ±10 square millimeters with 99 percent confidence?
Because the sample size is the unknown we are solving for, I will use the z-distribution, with the sample standard deviation 170.38 as the planning estimate. Excel MegaStat computations are listed below:
99% confidence level
346.50 mean
170.38 std. dev.
20 n
2.576 z
98.134 half-width
444.634 upper confidence limit
248.366 lower confidence limit
With a 99 percent confidence level, the z-score is 2.576. The required sample size follows from:
n = (z × standard deviation / E)²
n = (2.576 × 170.38 / 10)² ≈ 1926.07
Excel MegaStat computations are shown below:
10 E, error tolerance
170.38 standard deviation
99% confidence level
2.576 z
1926.067 sample size
1927 rounded up
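The same arithmetic in a short Python sketch (again assuming scipy; a cross-check rather than part of the MegaStat output):

from math import ceil
from scipy import stats

sigma = 170.38                     # sample standard deviation used as the estimate of sigma
E = 10                             # desired margin of error (square millimeters)
z = stats.norm.ppf(0.995)          # about 2.576 for 99 percent confidence

n_required = (z * sigma / E) ** 2  # about 1926
print(ceil(n_required))            # 1927 after rounding up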
For this reason, a sample size of 1,927 pages would be required to achieve an error of ±10 square millimeters with 99 percent confidence.
d) If this is not a reasonable requirement, suggest one that is.
This is not a reasonable requirement: the required sample size of 1,927 pages is larger than the entire 1,591-page phone directory. Either reduce the confidence level or increase the allowable error.
Biting an unpopped kernel of popcorn hurts! As an experiment, a self-confessed connoisseur of cheap popcorn carefully counted 773 kernels and put them in a popper.
After popping, the unpopped kernels were counted. There were 86. (a) Construct a 90 percent confidence interval for the proportion of all kernels that would not pop. (b)
Check the normality assumption. (c) Try the Very Quick Rule. Does it work well here?
Why, or why not? (d) Why might the sample not be typical?
a) Construct a 90 percent confidence interval for the proportion of all kernels that would not pop.
n = 773
p = 86/773 = 0.1113
z = 1.645
The standard error is the square root of (0.1113 × 0.8887 / 773) = 0.01131.
The margin of error is 1.645 × 0.01131 = 0.0186.
The confidence interval is found by adding and subtracting the margin of error from p.
Excel MegaStat computations are shown below:
90% confidence level
0.111254851 proportion
773 n
1.645 z
0.019 half-width
0.130 upper confidence limit
0.093 lower confidence limit
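The same normal-approximation interval in a short Python sketch (a cross-check, assuming scipy is available):

from math import sqrt
from scipy import stats

n = 773
x = 86                             # unpopped kernels
p = x / n                          # about 0.1113
z = stats.norm.ppf(0.95)           # about 1.645 for 90 percent confidence
se = sqrt(p * (1 - p) / n)         # about 0.01131
half_width = z * se                # about 0.0186

print(f"{p - half_width:.3f} to {p + half_width:.3f}")   # about 0.093 to 0.130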
Therefore, the 90 percent confidence interval is between 0.093 and 0.130.
b) Check the normality assumption.
The sample size is large enough to use the normality assumption: np ≈ 86 and n(1 − p) ≈ 687, both well above the usual cutoff of 10.
c) Try the Very Quick Rule. Does it work well here? Why or why not?
Because we are using a 90 percent confidence level and the proportion (about 0.11) is not close to 0.5, the Very Quick Rule does not work well here; as the sketch below shows, it produces a much wider interval than the exact one.
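Assuming the textbook's Very Quick Rule is p ± 1/√n (a rough interval intended for p near 0.5), a quick Python comparison illustrates the difference:

from math import sqrt

n = 773
p = 86 / n
vqr_half = 1 / sqrt(n)             # about 0.036, roughly twice the exact half-width of 0.019
print(f"{p - vqr_half:.3f} to {p + vqr_half:.3f}")   # about 0.075 to 0.147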
d) Why might this sample not be typical?
Although it is not specifically stated, I will assume the self-confessed connoisseur took the sample from one particular bag or package of popcorn. The sample might not be typical because it reflects data from only one bag of one brand.
It would be sounder to take samples from several different bags.