252y0631 11/28/06
ECO252 QBA2
THIRD HOUR EXAM
Nov 30 and Dec 1, 2006
Name KEY
Hour of Class Registered (Circle)
MWF 1 MWF 2 TR 12:30 TR 2
I. (8 points) Do all the following (2 points each unless noted otherwise). Make Diagrams! Show your work!
x ~ N 15, 9.3
20  15 

 Pz  0.54   Pz  0  P0  z  0.54   .5  .2054  .2946
1. Px  20   P  z 
9.3 

For z make a diagram. Draw a Normal curve with a mean at 0. Indicate the mean by a vertical line!
Shade the area above 0.54. Because this is on one side of zero we must subtract the area between 0 and 0.54
from the area above zero. If you wish, make a completely separate diagram for x . Draw a Normal curve
with a mean at 15. Indicate the mean by a vertical line! Shade the area above 20. This area is totally above the mean, so we must subtract the area between the mean and 20 from the 50% of the area that lies above the mean.
This is how we usually find a p-value when the distribution of the test ratio is Normal.
14  15 
 0  15
z
 P 1.61  z  0.11  P1.61  z  0  P0.11  z  0
2. P0  x  14   P 
9.3 
 9.3
 .4463  .0438  .4025
For z make a diagram. Draw a Normal curve with a mean at 0. Indicate the mean by a vertical line!
Shade the area between -1.61 and -0.11. Because this is on one side of zero we must subtract the area
between -0.11 and zero from the area between -1.61 and zero. If you wish, make a completely separate
diagram for x . Draw a Normal curve with a mean at 15. Indicate the mean by a vertical line! Shade the
area between zero and 14. This area is totally below the mean so we must subtract the area between 14 and
the mean (15) from the area between zero and the mean.
16  15 
  16  15
z
 P 3.33  z  0.11  P3.33  z  0  P0  z  0.11
3. P16  x  16   P 
9
.
3
9.3 

 .4996  .0438  .5434
For z make a diagram. Draw a Normal curve with a mean at 0. Indicate the mean by a vertical line!
Shade the area between -3.33 and 0.11. Because this is on both sides of zero we must add together the area
between -3.33 and zero and the area between zero and 0.11. If you wish, make a completely separate
diagram for x . Draw a Normal curve with a mean at 15. Indicate the mean by a vertical line! Shade the
area between -16 and 16. This area includes the mean (15), and areas to either side of it so we add together
these two areas.
4. x.42 (Find z.42 first) Solution: (Do not try to use the t table to get this.) For z make a diagram. Draw a Normal curve with a mean at 0. z.42 is the value of z with 42% of the distribution above it. Since 100 − 42 = 58, it is also the 58th percentile. Since 50% of the standardized Normal distribution is below zero, your diagram should show that the probability between zero and z.42 is 58% − 50% = 8%, or P(0 ≤ z ≤ z.42) = .0800. The closest we can come to this is P(0 ≤ z ≤ 0.20) = .0793. (0.21 is also acceptable here.) So z.42 = 0.20. To get from z.42 to x.42, use the formula x = μ + zσ, which is the opposite of z = (x − μ)/σ. x.42 = 15 + 0.20(9.3) = 16.86. If you wish, make a completely separate diagram for x. Draw a Normal curve with a mean at 15. Show that 50% of the distribution is below the mean. If 42% of the distribution is above x.42, it must be above the mean and have 8% of the distribution between it and the mean. Note that there will be some questions on the Final where you will need odd values of z like z.125.
Check: P(x ≥ 16.86) = P[z ≥ (16.86 − 15)/9.3] = P(z ≥ 0.20) = P(z ≥ 0) − P(0 ≤ z ≤ 0.20) = .5 − .0793 = .4207 ≈ 42%
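For readers who want to check the table look-ups numerically, here is a minimal Python sketch (not part of the original exam; it assumes scipy is available). Small differences from the answers above come from rounding z to two decimal places in the hand work.

from scipy.stats import norm

mu, sigma = 15.0, 9.3

# 1. P(x >= 20); the hand answer .2946 rounds z to 0.54
print(1 - norm.cdf(20, mu, sigma))

# 2. P(0 <= x <= 14); hand answer .4025
print(norm.cdf(14, mu, sigma) - norm.cdf(0, mu, sigma))

# 3. P(-16 <= x <= 16); hand answer .5434
print(norm.cdf(16, mu, sigma) - norm.cdf(-16, mu, sigma))

# 4. x.42 is the 58th percentile; hand answer 16.86
print(norm.ppf(0.58, mu, sigma))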
II. (22+ points) Do all the following (2 points each unless noted otherwise). Do not answer a question 'yes' or 'no' without giving reasons. Show your work in questions that are not multiple choice. Look them over first. The exam is normed on 50 points.
Note the following:
1. This test is normed on 50 points, but there are more points possible including the take-home.
You are unlikely to finish the exam and might want to skip some questions.
2. If you answer ‘None of the above’ in any question, you should provide an alternative
answer and explain why. You may receive credit for this even if you are wrong.
3. Use a 5% significance level unless the question says otherwise.
4. Read problems carefully. A problem that looks like a problem on another exam may be
quite different.
5. Use statistical tests! Just because two means or two proportions look different to you does not
mean that they are significantly different unless you prove that the probability of getting the
observed difference if the null hypothesis is true is very small.
1. Turn in your computer problems 2 and 3 marked to show the following: (5 points, 2 point penalty for
not doing.)
a) In computer problem 2 – what is tested and what are the results?
b) In computer problem 3 – what coefficients are significant? What is your evidence?
c) In the last graph in computer problem 3, where is the regression line?
[5]
2. (Abronovic) The distance that a baseball travels after being hit is a function of the velocity (in mph) of the pitched ball. A ball is pitched to a batter with a 35 inch, 32 oz bat that is swung at 70 mph from the waist and at an angle of 35°. The experiment is repeated 9 times. A partial Minitab printout appears below. Use α = .01 throughout this problem.
DIST = ……… + ……… VELOC

Predictor      Coef     StDev   t-ratio       p
Constant     31.311     0.999   ……………     …………
VELOC       0.74667   0.01529   ……………     …………

s = 1.185   R-sq = ………   R-sq(adj) = ………

Analysis of Variance
SOURCE         DF        SS        MS       F        p
Regression      1    3345.1    3345.1    ………     …………
Error           7       9.8       1.4
Total           8    3354.9
a) The fastest pitchers can throw at about 100 mph. How far will such a pitch be hit? (2)
b) What is the value of R-squared? (2)
c) Fill in the F space in the ANOVA and explain specifically what is tested and what are
the conclusions. (3)
d) Is the constant (31.311) significant? Why? (2)
[14]
Solution: a) The equation given above is Ŷ = b0 + b1X, or DIST = 31.311 + 0.74667 VELOC. For a 100 mph pitch, DIST = 31.311 + 0.74667(100) = 105.98.
b) The ANOVA tells us that SSR = 3345.1 and SST = SSY = 3354.9, so R² = SSR/SST = 3345.1/3354.9 = .9971.
c)
Analysis of Variance
SOURCE         DF        SS        MS          F
Regression      1    3345.1    3345.1    2389.36
Error           7       9.8       1.4
Total           8    3354.9

The relevant table value of F is F.01(1,7) = 12.25. Since our computed F is larger than the table value of F, we reject the null hypothesis that there is no relationship between the dependent variable and the independent variable(s). For a simple regression (Ŷ = b0 + b1X) this null hypothesis is equivalent to saying that if Y = β0 + β1X + ε, we have H0: β1 = 0.
d) The outline says that if we test H0: β0 = β00 against H1: β0 ≠ β00, we use t = (b0 − β00)/s_b0. So if we test for significance, the null hypothesis is that β0 is insignificant, or H0: β0 = 0 against H1: β0 ≠ 0, and we use t = (b0 − 0)/s_b0 = 31.311/0.999 = 31.34. The degrees of freedom are n − 2 = 7. We cannot reject the null hypothesis if the computed value of the t statistic is between −t.005(7) = −3.499 and t.005(7) = 3.499. Since the computed t is not between these values, we must say that β0 is significant.
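A quick numerical check of parts a) through d) is sketched below in Python with scipy (my own sketch, not part of the original key; it uses only the figures shown in the printout above).

from scipy import stats

b0, s_b0 = 31.311, 0.999      # Constant and its standard error
b1 = 0.74667                  # VELOC coefficient
SSR, SSE, df_err = 3345.1, 9.8, 7

print(b0 + b1 * 100)                           # a) DIST for a 100 mph pitch, about 105.98
print(SSR / (SSR + SSE))                       # b) R-squared, about .9971
F = (SSR / 1) / (SSE / df_err)                 # c) ANOVA F with (1, 7) degrees of freedom
print(F, stats.f.ppf(0.99, 1, df_err))         #    computed F vs the F.01(1,7) table value
print(b0 / s_b0, stats.t.ppf(0.995, df_err))   # d) t for the constant vs t.005 with 7 df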
3. (Render) A firm renovates homes in upstate New York. Sales and payroll for the region for a random sample of years are given below. Sales are in $100,000 and Payroll is in $0.1 billions. Current sales are $210,000.00 and, because of the opening of several new plants, payroll is anticipated to be about 0.6 billion dollars for the foreseeable future. Will average sales be significantly different from current sales? To answer this question:
a) Complete the XY column (1)
b) Find the regression equation (4)
c) Find an appropriate interval for sales (3)
d) State and justify your conclusion (2)
[24]
Y – Sales   X – Payroll    X²      Y²     XY
    2.0          1           1     4
    3.0          3           9     9
    2.5          4          16     6.25
    2.0          2           4     4
    2.0          1           1     4
    3.5          7          49    12.25
   15.0         18          80    39.50
Solution: a)
Y – Sales   X – Payroll    X²      Y²     XY
    2.0          1           1     4       2
    3.0          3           9     9       9
    2.5          4          16     6.25   10
    2.0          2           4     4       4
    2.0          1           1     4       2
    3.5          7          49    12.25   24.5
   15.0         18          80    39.50   51.5
b) n = 6, ΣX = 18, ΣY = 15, ΣXY = 51.5, ΣX² = 80 and ΣY² = 39.5.
Means: X̄ = ΣX/n = 18/6 = 3.00, Ȳ = ΣY/n = 15/6 = 2.5
Spare Parts: SSx = ΣX² − nX̄² = 80 − 6(3²) = 26
SSy = ΣY² − nȲ² = 39.5 − 6(2.5²) = 2 = SST (Total Sum of Squares)
Sxy = ΣXY − nX̄Ȳ = 51.5 − 6(3)(2.5) = 6.50
Coefficients: b1 = Sxy/SSx = (ΣXY − nX̄Ȳ)/(ΣX² − nX̄²) = 6.5/26 = 0.25
b0 = Ȳ − b1X̄ = 2.5 − 0.25(3) = 1.75.
So Ŷ = 1.75 + 0.25X
The following computation may be useful, but is not needed:
R² = SSR/SST = b1·Sxy/SSy = 0.25(6.5)/2 = .8125, or R² = Sxy²/(SSx·SSy) = 6.5²/[26(2)] = .8125
c) Since we are finding an interval for average sales, the confidence interval is appropriate and the prediction interval is not.
se² = SSE/(n − 2) = (SST − SSR)/(n − 2) = (SSy − b1·Sxy)/(n − 2) = [2 − 0.25(6.5)]/4 = .09375
se = √.09375 = .3062
The Confidence Interval is μY0 = Ŷ0 ± t·sŶ, where
sŶ² = se²[1/n + (X0 − X̄)²/SSx] = .09375[1/6 + (6 − 3)²/26] = .09375(0.166667 + 0.346154) = 0.0480769, so
sŶ = √0.0480769 = 0.21926.  t.005(4) = 4.604.  Ŷ0 = 1.75 + 0.25(6) = 3.25
μY = 3.25 ± 4.604(0.21926) = 3.25 ± 1.01, or 2.24 to 4.26.
d) In hundreds of thousands, current sales are 2.1. Since this is not in the confidence interval, sales will be significantly different from 2.1.
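The hand computation can be verified with a short numpy/scipy sketch (my own, not part of the original key; X0 = 6 corresponds to the anticipated payroll of 0.6 billion).

import numpy as np
from scipy import stats

x = np.array([1, 3, 4, 2, 1, 7], dtype=float)   # payroll in $0.1 billions
y = np.array([2.0, 3.0, 2.5, 2.0, 2.0, 3.5])    # sales in $100,000
n = len(x)

SSx = (x ** 2).sum() - n * x.mean() ** 2              # 26
Sxy = (x * y).sum() - n * x.mean() * y.mean()         # 6.5
b1 = Sxy / SSx                                        # 0.25
b0 = y.mean() - b1 * x.mean()                         # 1.75

se2 = ((y ** 2).sum() - n * y.mean() ** 2 - b1 * Sxy) / (n - 2)   # about .09375
x0 = 6.0
s_mean = np.sqrt(se2 * (1 / n + (x0 - x.mean()) ** 2 / SSx))      # about 0.219
yhat0 = b0 + b1 * x0                                              # 3.25
t = stats.t.ppf(0.995, n - 2)                                     # 4.604 (99% interval)
print(yhat0 - t * s_mean, yhat0 + t * s_mean)                     # about 2.24 to 4.26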
4. The following data may look familiar. It is the service method data from the last exam. 11 service
methods were tried to see which is the fastest. I ran a multiple comparison of the 11 methods. Data were
stacked as ‘time’ in column 12 with the column labels ‘tel’ in column 13. This was run on Minitab in three
different ways, as a one-way ANOVA, using the Kruskal-Wallis test and using the Mood median test. I have
no idea how to do a Mood median test but I know this much. The null hypothesis is equal medians. The
assumptions are comparable to the Kruskal-Wallis test. The Mood median test is less powerful than the
Kruskal-Wallis test but is less affected by outliers. The Mood median test produces a chi-squared statistic
with degrees of freedom equal to the number of columns less 1.
Row     1     2     3     4      5      6      7      8     9     10    11
  1   2.9   2.8   2.6   7.7    2.4    6.6    3.5    3.4   3.5    2.3   3.4
  2   3.9   2.6   2.7   2.9   13.4    3.7    8.4    8.3   3.4    6.9   4.4
  3   2.6   2.6   3.2   4.3    5.8    9.7    4.3    4.2   3.8    3.3   3.1
  4   3.1   2.9   2.8   2.7    1.5    1.9    3.3    3.2   3.4    5.3   3.6
  5   3.9   2.9   3.6   3.4    9.8   10.1   11.9   11.0   3.6    3.0   4.4
  6   2.6   2.8   2.1   4.4    2.7    4.5    3.7    3.6   3.5    3.3   3.1
  7   3.3   2.3   2.3   5.5    2.7    2.9    3.0    2.9   4.8    6.1   3.8
  8   3.0   2.4   2.6   3.4    4.5    9.9    2.9    2.8   3.5    3.1   3.5
  9   3.5   2.0   2.6   3.4    2.3    3.0    3.6    3.5   5.3    2.6   4.0
 10   3.1   2.5   2.9   3.5    5.8   31.5    5.4    5.3   3.7    4.4   3.6
 11   3.2   2.4   2.4   4.0    4.8    3.5    4.4    4.3   3.4   15.0   3.7
 12   2.4   2.0   2.3   3.4    4.2    5.3    3.0    2.9   3.6    6.9   2.9
 13   4.0   4.1   0.5   4.1    5.8    9.8    4.3    4.2   3.8    2.1   4.5
 14   4.3   4.3   4.7   3.8    6.1    5.3    5.4    5.3   3.8   10.4   4.8
The edited Minitab output follows. Note that the p-values are missing and for you to think about.
Results for: 252x06031-01.MTW
MTB > Name c72 "RESI1"
#Residuals are stored for Normality test.
MTB > Oneway c12 c13;
SUBC>   Residuals 'RESI1';
SUBC>   GNormalplot;
SUBC>   NoDGraphs.
One-way ANOVA: time versus tel
Source    DF        SS      MS      F       P
tel       10    284.43   28.44   ………    …………
Error    143   1228.86    8.59
Total    153   1513.28

S = 2.931   R-Sq = 18.80%   R-Sq(adj) = 13.12%
Level     N   Mean   StDev
tel 1    14  3.271   0.580
tel 10   14  5.336   3.630
tel 11   14  3.771   0.580
tel 2    14  2.757   0.677
tel 3    14  2.664   0.909
tel 4    14  4.036   1.265
tel 5    14  5.129   3.214
tel 6    14  7.693   7.446
tel 7    14  4.793   2.503
tel 8    14  4.636   2.331
tel 9    14  3.793   0.561

[Individual 95% CIs for the means, based on pooled StDev: character plot omitted; scale roughly 2.5 to 10.0]
Pooled StDev = 2.931
Normplot of Residuals for time (This plot was identical to the one shown below)
MTB > Kruskal-Wallis c12 c13.
Kruskal-Wallis Test: time versus tel
Kruskal-Wallis Test on time
tel        N   Median   Ave Rank       Z
tel 1     14    3.150       60.1   -1.53
tel 10    14    3.850       86.8    0.82
tel 11    14    3.650       85.6    0.72
tel 2     14    2.600       33.3   -3.89
tel 3     14    2.600       33.5   -3.87
tel 4     14    3.650       85.4    0.70
tel 5     14    4.650       90.3    1.12
tel 6     14    5.300      107.2    2.61
tel 7     14    4.000       94.7    1.51
tel 8     14    3.900       89.5    1.06
tel 9     14    3.600       86.1    0.75
Overall  154                77.5

H = 41.97     #H is the Kruskal-Wallis statistic that I taught you about.
MTB > Mood c12 c13.
Mood Median Test: time versus tel
Mood median test for time
Chi-Square = 23.34   DF = 10   P = ………

tel       N<=   N>   Median   Q3-Q1
tel 1      10    4     3.15    1.08
tel 10      7    7     3.85    4.00
tel 11      5    9     3.65    1.08
tel 2      12    2     2.60    0.53
tel 3      12    2     2.60    0.68
tel 4       7    7     3.65    0.93
tel 5       5    9     4.65    3.25
tel 6       4   10     5.30    6.45
tel 7       5    9     4.00    2.18
tel 8       6    8     3.90    2.18
tel 9       6    8     3.60    0.33

[Individual 95.0% CIs: character plot omitted; scale roughly 2.5 to 10.0]
Overall median = 3.50
MTB > NormTest c72;
SUBC>   KSTest.
Probability Plot of RESI1
#This is a plot of the differences between the mean of each column and the actual data.
MTB > Vartest c12 c13;
SUBC>   Confidence 95.0.
Test for Equal Variances: time versus tel (This plot was not needed.)
Bartlett's Test (normal distribution)
Test statistic = 190.25, p-value = 0.000
Levene's Test (any continuous distribution)
Test statistic = 3.29, p-value = 0.001
a) Are the means significantly different? Identify the test that answers this question,
complete it and answer the question. Explain! (3)
Solution: The null hypothesis of the ANOVA is that the means are all equal. The computed F statistic is F = 28.44/8.59 = 3.3108. This F statistic has 10 and 143 degrees of freedom. According to the F table, F.05(10,125) = 1.91 and F.05(10,200) = 1.88, so F.05(10,143) must be between these two values. Since the computed F statistic exceeds the table value, reject the null hypothesis and say that the means are significantly different.
b) Are the medians significantly different? Complete one or both of the remaining tests
and answer the question. Explain! (4)
Solution: The number of columns makes the Kruskal-Wallis table inapplicable. Under these circumstances the H statistic has the chi-squared distribution with degrees of freedom one less than the number of columns. According to our table, χ².05(10) = 18.3070. The printout says that H = 41.97. Since the computed value exceeds the table value, we can reject the null hypothesis of equal medians.
The chi-squared statistic χ² = 23.34 produced by the Mood median test also has 10 degrees of freedom. Since the computed value exceeds the table value, we can reject the null hypothesis of equal medians.
c) At the end of the printout there are a K-S (actually Lilliefors) test, a Bartlett and a
Levene test. What do they tell us about the applicability of the ANOVA, Kruskal-Wallis
or Mood test to the problem? What conclusion is the most reliable? (3)
Solution: The p-value of less than 1% for the Lilliefors test means that we can reject the
null hypothesis of Normality at the 5% or 1% level. This means that the ANOVA should
not be used. The probability plot and a check of the numbers both imply the presence of
outliers so that the Mood test is most applicable.
d) On the basis of all this, which of the service methods are best? (1)
[35]
Solution: The Mood test printout says that the medians differ and tells us that the service
methods with the lowest median time are methods 2 and 3.
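If the stacked data were loaded in Python instead of Minitab, the three procedures could be reproduced roughly as below (a sketch only; `groups` is assumed to be a list of the eleven columns of times shown above, and scipy's tie handling makes its statistics differ slightly from Minitab's).

from scipy import stats

def compare_service_methods(groups):
    """groups: a list of eleven lists of times, one per service method."""
    F, p_anova = stats.f_oneway(*groups)          # one-way ANOVA: H0 is equal means
    H, p_kw = stats.kruskal(*groups)              # Kruskal-Wallis: H0 is equal medians
    chi2, p_mood, grand_med, table = stats.median_test(*groups)  # Mood median test
    return (F, p_anova), (H, p_kw), (chi2, p_mood)

With the data above, the statistics should land near the printout's F = 3.31, H = 41.97 and chi-squared = 23.34.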
5. Four experts rated 4 brands of Colombian coffee. By adding together ratings on a seven-point scale for taste, aroma, richness and acidity, each coffee is given a rating on a 28-point scale. The following table gives these summed ratings. Assume that the underlying distribution is not Normal and test for a difference in the ratings of the four brands. (5) Note: α = .10
Row   Brand A   Brand B   Brand C   Brand D
        x1        x2        x3        x4
 1      24        26        25        22
 2      27        27        26        24
 3      19        22        20        16
 4      24        27        25        23
Solution: (This is an abbreviated version of Problem 12.95.) Since the underlying distribution is not Normal and the data are cross-classified, this is a Friedman test. Note that much time was wasted on this because people wanted to believe it was an ANOVA. ANOVA is not appropriate on data from non-Normal distributions unless they are very well-behaved.
H0: η1 = η2 = η3 = η4, where 1 is A, 2 is B, 3 is C and 4 is D.
H1: At least one of the medians differs.
First we rank the data within rows. The data appears below in columns marked x1 to x 4 and the
ranks are in columns marked r1 to r4 .
Row    Brand A        Brand B        Brand C        Brand D
        x1    r1       x2    r2       x3    r3       x4    r4
  1     24    2        26    4        25    3        22    1
  2     27    3.5      27    3.5      26    2        24    1
  3     19    2        22    4        20    3        16    1
  4     24    2        27    4        25    3        23    1
SRi           9.5            15.5           11              4
To check the ranking, note that the sum of the four rank sums is 9.5 + 15.5 + 11 + 4 = 40, and that the sum of the rank sums should be ΣSRi = rc(c + 1)/2 = 4(4)(5)/2 = 40.
Now compute the Friedman statistic:
χF² = [12/(rc(c + 1))]ΣSRi² − 3r(c + 1) = [12/(4·4·5)](9.5² + 15.5² + 11² + 4²) − 3(4)(5) = (12/80)(467.5) − 60 = 70.125 − 60 = 10.125.
Since the size of the problem is one shown in Table 8, we can use that table, which appears, in part, below (c = 4, r = 4).
χF²       p-value
9.900     .006
10.200    .003
Since 10.125 lies between 9.9 and 10.2, the p-value is between .006 and .003, both of which are below .10, so reject the null hypothesis.
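As a cross-check, scipy's Friedman routine gives essentially the same answer (my own sketch, not part of the original key; scipy applies a tie correction, so its statistic comes out slightly above the uncorrected hand value of 10.125 because of the tie in row 2).

from scipy.stats import friedmanchisquare

# one list per brand; the four entries in each list are the four experts (the blocks)
brand_a = [24, 27, 19, 24]
brand_b = [26, 27, 22, 27]
brand_c = [25, 26, 20, 25]
brand_d = [22, 24, 16, 23]

stat, p = friedmanchisquare(brand_a, brand_b, brand_c, brand_d)
print(stat, p)   # p is well below .10, so the ratings differ by brand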
ECO 252 QBA2
THIRD EXAM
Nov 30 and Dec 1, 2006
TAKE HOME SECTION
Name: _________________________
Student Number: _________________________
Class days and time: _________________________
Please Note: Computer problems 2 and 3 should be turned in with the exam (2). In problem 2, the 2 way
ANOVA table should be checked. The three F tests should be done with a 5% significance level and you
should note whether there was (i) a significant difference between drivers, (ii) a significant difference
between cars and (iii) significant interaction. In problem 3, you should show on your third graph where the
regression line is. Check what your text says about normal probability plots and analyze the plot you did.
Explain the results of the t and F tests using a 5% significance level. (2)
III Do the following. Note: Look at 252thngs (252thngs) on the syllabus supplement part of the website
before you start (and before you take exams). Show your work! State H0 and H1 where appropriate.
You have not done a hypothesis test unless you have stated your hypotheses, run the numbers and
stated your conclusion. (Use a 95% confidence level unless another level is specified.) Answers without
reasons or accompanying calculations usually are not acceptable. Neatness and clarity of explanation
are expected. This must be turned in when you take the in-class exam. Note that from now on
neatness means paper neatly trimmed on the left side if it has been torn, multiple pages stapled and
paper written on only one side. Because so much of this exam is based on student numbers, there is a
penalty for failing to state your correct student number.
1) Bassett et al. give the following numbers for the year and the number of pensioners in the United
Kingdom. Pensioners are in millions. The 2000 number is a bit shaky, so subtract the last digit of your
student number divided by 10 from the 12.00 that you see there. Label your answer to this problem with a
version number. (Example: Good ol' Seymour's student number is 123456, so the 12.000 becomes 12.000 − 0.6 = 11.400 and he labels it Version 6.) 'Pensioners' is the dependent variable and 'Year' is the independent
variable, so what you are going to get is a trend line. If you don’t know what dependent and independent
variables are, stop work until you find out.
      Year   Pensioners
 1    1966        6.679
 2    1971        7.677
 3    1975        8.321
 4    1976        8.510
 5    1977        8.637
 6    1978        8.785
 7    1979        8.937
 8    2000       12.000
Bassett et al. strongly suggest that you change the base year to something other than the year zero. They
recommend that you subtract 1970 from every number in the ‘Year’ column, so that 1966 becomes -4 and
2000 becomes 30. This will make your computations easier.
a. Compute the regression equation Ŷ = b0 + b1x to predict the number of pensioners in each year. (3) You may check your results on the computer, but let me see real or simulated hand calculations.
b. Compute R². (2)
c. Compute se. (2)
d. Compute sb1 and do a significance test on b1. (2)
e. Use your equation to predict the number of pensioners in 2005 and 2006. Using the 2006 number, create
a prediction interval for the number of pensioners for that year. Explain why a confidence interval for the
number of pensioners is inappropriate. (3)
f. Make a graph of the data. Show the trend line clearly. If you are not willing to do this neatly and
accurately, don’t bother. (2)
g. What percent rise in pensioners did the equation predict for 2006? What percent rise does it predict for
2050? The population of the United Kingdom grew at roughly 0.31% a year over the last quarter of the 20th century. Can you intelligently guess what is wrong? (1)
[15]
Solution: Version 0
 i     X        Y       X²        XY         Y²
 1    -4    6.679       16    -26.716     44.609
 2     1    7.677        1      7.677     58.936
 3     5    8.321       25     41.605     69.239
 4     6    8.510       36     51.060     72.420
 5     7    8.637       49     60.459     74.598
 6     8    8.785       64     70.280     77.176
 7     9    8.937       81     80.433     79.870
 8    30   12.000      900    360.000    144.000
      62   69.546     1172    644.798    620.848
To summarize: n = 8, ΣX = 62, ΣY = 69.546, ΣXY = 644.798, ΣX² = 1172 and ΣY² = 620.848. df = n − 2 = 8 − 2 = 6.
Means: X̄ = ΣX/n = 62/8 = 7.75, Ȳ = ΣY/n = 69.546/8 = 8.69325
Spare Parts: SSx = ΣX² − nX̄² = 1172 − 8(7.75²) = 691.50
SSy = ΣY² − nȲ² = 620.848 − 8(8.69325²) = 16.2673 = SST (Total Sum of Squares)
Sxy = ΣXY − nX̄Ȳ = 644.798 − 8(7.75)(8.69325) = 105.8165
Coefficients: b1 = Sxy/SSx = (ΣXY − nX̄Ȳ)/(ΣX² − nX̄²) = 105.8165/691.500 = 0.1530
b0 = Ȳ − b1X̄ = 8.69325 − 0.1530(7.75) = 7.5075
So our equation is Ŷ = 7.5075 + 0.1530X
b. Compute R². (2) Solution: SSR = b1·Sxy = 0.1530(105.8165) = 16.1899
R² = SSR/SST = b1·Sxy/SSy = 16.1899/16.2673 = .9952, or R² = Sxy²/(SSx·SSy) = 105.8165²/[691.5(16.2673)] = .9954
c. Compute se. (2) Solution: SSE = SST − SSR = 16.2673 − 16.1899 = .0774
se² = SSE/(n − 2) = .0774/6 = .01290, se = √.01290 = 0.11358
Note also that if R² is available, se² = SSy(1 − R²)/(n − 2) = 16.2673(1 − .9954)/6 = .01247 and se = √.01247 = 0.11168. I will compromise and use se² = .0127.
d. Compute sb1 and do a significance test on b1. (2) Solution: sb1² = se²(1/SSx) = .0127/691.5 = 0.00001837
sb1 = √0.00001837 = 0.004286. t = (b1 − 0)/sb1 = 0.1530/0.004286 = 35.70. If we assume that α = .05, compare this with t.025(6) = 2.447. Since the computed t is larger than the table t in absolute value, we reject the null hypothesis of no significance and say that the slope is significant.
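Parts a) through d) of Version 0 can be checked with a few lines of numpy/scipy (a sketch, not part of the original key; last-digit differences come from how much rounding is carried along).

import numpy as np
from scipy import stats

x = np.array([-4, 1, 5, 6, 7, 8, 9, 30], dtype=float)   # year minus 1970
y = np.array([6.679, 7.677, 8.321, 8.510, 8.637, 8.785, 8.937, 12.000])
n = len(x)

SSx = (x ** 2).sum() - n * x.mean() ** 2           # 691.5
SSy = (y ** 2).sum() - n * y.mean() ** 2           # about 16.267
Sxy = (x * y).sum() - n * x.mean() * y.mean()      # about 105.82

b1 = Sxy / SSx                                     # about 0.1530
b0 = y.mean() - b1 * x.mean()                      # about 7.51
R2 = Sxy ** 2 / (SSx * SSy)                        # about .9954
se2 = SSy * (1 - R2) / (n - 2)                     # about .0125
t_slope = b1 / np.sqrt(se2 / SSx)                  # large, so the slope is significant
print(b0, b1, R2, t_slope, stats.t.ppf(0.975, n - 2))   # compare t_slope with 2.447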
e. Use your equation to predict the number of pensioners in 2005 and 2006. Using the 2006 number, create a prediction interval for the number of pensioners for that year. Explain why a confidence interval for the number of pensioners is inappropriate. (3) Solution: Our base year was 1970, so the value of X for 2005 is 2005 − 1970 = 35. Our equation is Ŷ = 7.5075 + 0.1530X, and for 2005 it gives Ŷ = 7.5075 + 0.1530(35) = 12.863. Since the slope is 0.1530, add this to the 2005 value to get 13.016 for 2006. The formula for the Prediction Interval is Y0 = Ŷ0 ± t·sY, where sY² = se²[1/n + (X0 − X̄)²/SSx + 1]. For 2006, X0 = 36. It's time to remember that se² = .01247 (se = 0.11168), n = 8, X̄ = 7.75 and SSx = ΣX² − nX̄² = 691.50.
So sY² = .01247[1/8 + (36 − 7.75)²/691.5 + 1] = .01247(0.1250 + 1.1541 + 1) = .01247(2.2791) = 0.02842, so sY = √0.02842 = 0.1686. We already know that t.025(6) = 2.447 and that for 2006 Ŷ = 13.016, so our prediction interval is Y0 = Ŷ0 ± t·sY = 13.016 ± 2.447(0.1686) = 13.016 ± 0.413, or between 12.6 and 13.4 million people. The confidence interval is inappropriate for this type of problem. To use the problem demonstrated in class, a confidence interval done when X0 = 5 gives us a likely range in which the average number of children will fall for the average couple that wants 5 children. The prediction interval gives us a likely range in which the number of children will fall for one couple that wants 5 children. There is no average year 2006, since it will be 2006 only once, so an interval for an average makes no sense.
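The 2006 prediction interval can be checked the same way (a sketch; the inputs are the summary numbers from the hand work above, with the slope carried as 0.1530).

import numpy as np
from scipy import stats

n, xbar, SSx, se2 = 8, 7.75, 691.5, 0.01247    # summary values from the hand work
b0, b1 = 7.5075, 0.1530                        # fitted intercept and slope
x0 = 36                                        # 2006, with 1970 as the base year

y0 = b0 + b1 * x0
s_pred = np.sqrt(se2 * (1 / n + (x0 - xbar) ** 2 / SSx + 1))   # prediction, not confidence
t = stats.t.ppf(0.975, n - 2)                                  # 2.447
print(y0 - t * s_pred, y0 + t * s_pred)                        # roughly 12.6 to 13.4 million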
f. Make a graph of the data. Show the trend line clearly. If you are not willing to do this neatly and accurately, don't bother. (2) Suggestions: I don't have the graphical power to do this but here's what I would do. Our years vary from 1966 to 2006 and the number of pensioners varies from 6.68 to a projected 13.0 million. Your x-axis should be marked from 1966 to 2006, but I would probably only mark the years 1970 to 2010 by 5-year intervals. I might consider stretching the whole thing out to 2050 because of the next question. The y-axis might go from 6 to 14 million with marks for every two million. I can plot the regression line Ŷ = 7.5075 + 0.1530X by noting that for 1970 it gives us 7.5 million, which grows to 13 million in 2006. These two points can be connected to give us the regression line, and the 8 points that we used to get the regression equation can be plotted around it.
g. What percent rise in pensioners did the equation predict for 2006? What percent rise does it predict for 2050? The population of the United Kingdom grew at roughly 0.31% a year over the last quarter of the 20th century. Can you intelligently guess what is wrong? (1) Solution: Our base year was 1970, so the value of X for 2049 is 2049 − 1970 = 79. Our equation is Ŷ = 7.5075 + 0.1530X, and for 2049 it gives Ŷ = 7.5075 + 0.1530(79) = 19.594. Since the slope is 0.1530, add this to the 2049 value to get 19.747 for 2050. We thus have the following.
Year    Number   Per cent growth
2005    12.863
2006    13.016        1.19
2049    19.594
2050    19.747        0.78
It is reasonable to expect the growth rate to fall as the absolute number of additional pensioners stays
constant but the number of pensioners grows. However, we are predicting that for an 80 year period the
number of pensioners will grow faster than population. Aside from the fact that this results in a substantial
rise in the number of pensioners per worker that may be politically impossible, it’s hard to believe that the
country will produce old people at a rate substantially higher than population grows for over 80 years. This
is a basic problem with using a trend line to make predictions. A given slope may be appropriate for a long
while, but it is no more appropriate to say it will last forever than it is to say that Wal-Mart's sales will
continue to grow at a rate way above total retail sales for decades. Sooner or later Wal-Mart’s sales will be
such a large part of retail sales that the only way to grow Wal-Mart’s sales at such a high rate is to grow all
retail sales at a higher rate, which just won’t happen if we are all drawing our wages from Wal-Mart.
2) The Lees in their text ask whether experience makes a difference in student earnings and present the following data for student earnings versus years of work experience. To personalize these data, take the second-to-last digit of your student number and call it a. Clearly label the problem with a version number based on your student number. Then take your a, multiply it by 0.5 and add it to the 13 in the lower left corner. (Example: Good ol' Seymour's student number is 123456, so the 13 becomes 13 + 0.5(5) = 13 + 2.5 = 15.5 and he labels it Version 5.) Each column is to be regarded as an independent random sample.
Years of Work Experience
  1    2    3
 16   19   24
 21   20   21
 18   21   22
 13   20   25
a) State your null hypothesis and test it by doing a 1-way ANOVA on these data and explain whether the
test tells us that experience matters or not. (4)
b) Using your results from a) present two different confidence intervals for the difference between earnings
for those with 1 and 3 years experience. Explain (i) under what circumstances you would use each interval
and (ii) whether the intervals show a significant difference. (2)
c) What other method could we use on these data to see if years of experience make a difference? Under what circumstances would we use it? Try it and tell what it tests and what it shows. (3)
[24]
d) (Extra Credit) Do a Levene test on these data and explain what it tests and shows. (4)
Solution – Version 0: You should be able to do the calculations below. Only the three columns of
numbers were given to us.
              Years
          1       2       3       Sum
         16      19      24
         21      20      21
         18      21      22
         13      20      25
Sum      68 +    80 +    92     = 240 = Σxij
nj        4 +     4 +     4     = 12 = n
x̄.j      17      20      23      x̄ = Σxij/n = 240/12 = 20
SS     1190 +  1602 +  2126     = 4918 = Σxij²
x̄.j²    289     400     529     = 1218

SST = Σxij² − nx̄² = 4918 − 12(20²) = 4918 − 4800 = 118
SSB = Σnj·x̄.j² − nx̄² = 4(17²) + 4(20²) + 4(23²) − 12(20²) = 4(1218) − 4800 = 4872 − 4800 = 72

Source      SS    DF      MS       F        F.05               H0
Between     72     2      36     7.04 s    F.05(2,9) = 4.26   Column means equal
Within      46     9   5.1111
Total      118    11
Because our computed F statistic exceeds the 5% table value, we reject the null hypothesis of equal means
and conclude that experience matters.
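A one-line check of the ANOVA with scipy (my own sketch for Version 0):

from scipy import stats

yr1 = [16, 21, 18, 13]
yr2 = [19, 20, 21, 20]
yr3 = [24, 21, 22, 25]

F, p = stats.f_oneway(yr1, yr2, yr3)
print(F, p)                        # F about 7.04; p below .05, so experience matters
print(stats.f.ppf(0.95, 2, 9))     # the F.05(2,9) table value, about 4.26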
b) The following material is modified from the solution to the last graded assignment.
Types of contrast between means. Assume α = 0.05. m = 3 is the number of columns.
Individual Confidence Interval
If we desire a single interval, we use the formula for the difference between two means when the variance is known. For example, if we want the difference between the means of column 1 and column 2:
μ1 − μ2 = (x̄1 − x̄2) ± t(n−m)α/2 · s√(1/n1 + 1/n2), where s = √MSW. The within degrees of freedom are n − m = 9, so we use t.025(9) = 2.262. n1 = n2 = n3 = 4, so s²(1/n1 + 1/n2) = s²(1/4 + 1/4) = s²(.5) and our interval will be μ1 − μ3 = (x̄1 − x̄3) ± 2.262√[s²(.5)]
Scheffé Confidence Interval
If we desire intervals that will simultaneously be valid for a given confidence level for all possible intervals between column means, use
μ1 − μ2 = (x̄1 − x̄2) ± √[(m − 1)F(m−1, n−m)] · s√(1/n1 + 1/n2), where s = √MSW.
The degrees of freedom for columns are m − 1 = 2. The within degrees of freedom are n − m = 9, so we use F.05(2,9) = 4.26. n1 = n2 = n3 = 4, so s²(1/n1 + 1/n2) = s²(1/4 + 1/4) = s²(.5) and our interval will be
μ1 − μ3 = (x̄1 − x̄3) ± √[2(4.26)]·√[s²(.5)] = (x̄1 − x̄3) ± 2.9289√[s²(.5)]
Tukey Confidence Interval
This also applies to all possible differences.
μ1 − μ2 = (x̄1 − x̄2) ± q(m, n−m)·(s/√2)·√(1/n1 + 1/n2), where s = √MSW. This gives rise to Tukey's HSD (Honestly Significant Difference) procedure. Two sample means x̄.1 and x̄.2 are significantly different if |x̄.1 − x̄.2| is greater than q(m, n−m)·(s/√2)·√(1/n1 + 1/n2). We will need q.05(3,9). The table says q.05(3,9) = 3.95.
(s²/2)(1/4 + 1/4) = .25s², and the interval will be μ1 − μ3 = (x̄1 − x̄3) ± 3.95√(.25s²) = (x̄1 − x̄3) ± 2.7931√[s²(.5)]
Contrasts for μ3 and μ1.
Note that in all contrasts √[s²(.5)] = √[5.1111(.5)] = √2.555556 = 1.5986. Intervals for differences between means that include zero show no significant difference. x̄1 − x̄3 = 17 − 23 = −6.
Individual – Used when you want only one interval.
μ1 − μ3 = (x̄1 − x̄3) ± 2.262√[s²(.5)] = −6 ± 2.262(1.5986) = −6 ± 3.61
Scheffé – Used when a collective confidence level is sought.
μ1 − μ3 = (x̄1 − x̄3) ± 2.9289√[s²(.5)] = −6 ± 2.9289(1.5986) = −6 ± 4.68
Tukey – More powerful, but similar to the Scheffé.
μ1 − μ3 = (x̄1 − x̄3) ± 2.7931√[s²(.5)] = −6 ± 2.7931(1.5986) = −6 ± 4.47.
All contrasts seem significant.
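The three critical values (and hence the half-widths above) can also be obtained without tables (a sketch; `studentized_range` needs a reasonably recent scipy, and the small differences from the hand values are rounding).

import numpy as np
from scipy import stats

MSW, df_w, m, nj = 5.1111, 9, 3, 4            # within mean square, its df, columns, n per column
spread = np.sqrt(MSW * (1 / nj + 1 / nj))     # sqrt(s^2(.5)), about 1.5986

t_ind = stats.t.ppf(0.975, df_w)                                  # about 2.262
scheffe = np.sqrt((m - 1) * stats.f.ppf(0.95, m - 1, df_w))       # about 2.92
tukey = stats.studentized_range.ppf(0.95, m, df_w) / np.sqrt(2)   # about 2.79

for name, c in [("individual", t_ind), ("Scheffe", scheffe), ("Tukey", tukey)]:
    print(name, c * spread)                   # half-widths near 3.61, 4.67 and 4.47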
c) The alternative to one-way ANOVA for situations in which the parent distribution is not Normal is the Kruskal-Wallis test. The original data
16 19 24
21 20 21
18 21 22
13 20 25
is replaced by ranks
 2.0   4.0  11.0
 8.0   5.5   8.0
 3.0   8.0  10.0
 1.0   5.5  12.0
with column rank sums 14.0, 23.0 and 41.0. Note that the sum of the first twelve numbers is 12(13)/2 = 78, so that we can check our ranking by noting that 14 + 23 + 41 = 78.
H = [12/(n(n + 1))]Σ(SRi²/ni) − 3(n + 1) = [12/(12·13)][(14²/4) + (23²/4) + (41²/4)] − 3(13) = (1/13)[(196 + 529 + 1681)/4] − 39 = 46.2692 − 39 = 7.2692.
Since the Kruskal-Wallis table for (4, 4, 4) says that 5.6923 has a p-value of .049 and 7.5385 has a p-value of .011, the p-value of 7.2692 must be below 5%, so that we can reject the null hypothesis of equal medians.
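The same comparison without the Normality assumption can be checked with scipy (a sketch; scipy corrects for ties, so its H comes out slightly above the uncorrected 7.2692):

from scipy import stats

H, p = stats.kruskal([16, 21, 18, 13], [19, 20, 21, 20], [24, 21, 22, 25])
print(H, p)   # p is below .05, so reject the null hypothesis of equal medians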
d) The Levene test is a test for equal variances. The original data
16 19 24
21 20 21
18 21 22
13 20 25
which has column medians of 17, 20 and 23, is replaced by the numbers with the column medians subtracted:
-1  -1   1
 4   0  -2
 1   1  -1
-4   0   2
Absolute values are taken, so the columns become
1  1  1
4  0  2
1  1  1
4  0  2
An ANOVA is done on these 3 columns.
Source      SS    DF      MS       F         F.05               H0
Between      8     2       4     3.27 ns    F.05(2,9) = 4.26   Column means equal
Within      11     9   1.2222
Total       19    11
Because we cannot reject the null hypothesis, we cannot say that the variances are not equal.
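scipy's `levene` with `center='median'` carries out exactly this median-centered (Brown-Forsythe) version of the test (my own sketch):

from scipy import stats

W, p = stats.levene([16, 21, 18, 13], [19, 20, 21, 20], [24, 21, 22, 25],
                    center='median')
print(W, p)   # W about 3.27 with p above .05, so equal variances cannot be rejected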
3) (Abronovic) A group of 4 workers produces defective pieces at the rates shown below during different
times of the day. Personalize the data by subtracting the last digit of your student number from the 14 in the
lower right corner. Use the number subtracted to label this as a version number. (Example: Good ol’
Seymour’s student number is 123456, so the 14 becomes 14 - 6 =8 and he labels it Version 6.)
Time
Worker’s Name
Apple
Plum
Pear
Melon
Early
10
11
8
12
Morning
Late
9
8
7
10
Morning
Early
12
13
11
11
Afternoon
Late
13
14
10
14
Afternoon
Sum of Row 1 = 41, SSQ of Row 1 = 429, Sum of Column 1 = 44, SSQ of Column 1 = 494,
Sum of Row 2 = 34, SSQ of Row 2 = 294, Sum of Column 2 = 46, SSQ of Column 2 = 550,
Sum of Row 3 = 47, SSQ of Row 3 = 555, Sum of Column 3 = 36, SSQ of Column 3 = 334,
a) Do a 2-way ANOVA on these data and explain what hypotheses you test and what the conclusions are.
(6)
b) Using your results from a) present two different confidence intervals for the difference between numbers
of defects for the best and worst worker and for the defects from the best and second best times. Explain
which of the intervals show a significant difference and why. (3)
c) What other method could we use on these data to see if time of day makes a difference while allowing for
cross-classification? Under what circumstances would we use it? Try it and tell what it tests and what it
shows. (3)
[36]
a)
Time        Apple   Plum   Pear    Melon      Sum    ni     x̄i.    SS(row)     x̄i.²
  1            10     11      8       12       41     4   10.25       429    105.0625
  2             9      8      7       10       34     4    8.50       294     72.25
  3            12     13     11       11       47     4   11.75       555    138.0625
  4            13     14     10       14       51     4   12.75       661    162.5625
Sum            44     46     36       47      173    16              1939    477.9375
nj              4      4      4        4       16
x̄.j            11   11.5      9    11.75     x̄ = 10.8125
SS(col)       494    550    334      561     1939
x̄.j²          121 132.25     81 138.0625   472.3125

SST = Σx²ijk − nx̄² = 1939 − 16(10.8125²) = 1939 − 1870.5625 = 68.4375
SSR = C·Σx̄i.² − nx̄² = 4(477.9375) − 16(10.8125²) = 1911.75 − 1870.5625 = 41.1875
SSC = R·Σx̄.j² − nx̄² = 4(472.3125) − 16(10.8125²) = 1889.25 − 1870.5625 = 18.6875
Source        SS      DF       MS        F        F.05               H0
Rows       41.1875     3   13.7292    14.43 s    F.05(3,9) = 3.86   Row means equal
Columns    18.6875     3    6.2292     6.55 s    F.05(3,9) = 3.86   Column means equal
Within      8.5625     9    0.95139
Total      68.4375    15
As is shown by the computed F statistics and the table values, both computed Fs exceed the table values,
meaning that we will reject the null hypotheses. A version of this problem has appeared on almost every
Final. Using the format above is necessary if you want to use sums computed for you.
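The two-way table can be reproduced from the raw data with a short numpy sketch (not part of the original key; it follows the same sums-of-squares formulas used above).

import numpy as np
from scipy import stats

# rows are the four times of day, columns are the workers Apple, Plum, Pear, Melon
x = np.array([[10, 11,  8, 12],
              [ 9,  8,  7, 10],
              [12, 13, 11, 11],
              [13, 14, 10, 14]], dtype=float)
R, C = x.shape
grand = x.mean()

SST = ((x - grand) ** 2).sum()                        # 68.4375
SSR = C * ((x.mean(axis=1) - grand) ** 2).sum()       # 41.1875  (rows = times)
SSC = R * ((x.mean(axis=0) - grand) ** 2).sum()       # 18.6875  (columns = workers)
SSW = SST - SSR - SSC                                 # 8.5625
df_w = (R - 1) * (C - 1)

F_rows = (SSR / (R - 1)) / (SSW / df_w)               # about 14.43
F_cols = (SSC / (C - 1)) / (SSW / df_w)               # about 6.55
print(F_rows, F_cols, stats.f.ppf(0.95, 3, df_w))     # both exceed F.05(3,9) = 3.86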
b) The material on confidence intervals comes from the outline for 2-way ANOVA. Note that for columns Pear is the best worker (9) and Melon the worst (11.75), so that the difference is 2.75. For rows, Time 2 is the best time (8.50) and Time 1 (10.25) is the second best, so that the difference is 1.75. (R − 1)(C − 1) = 9, MSW = 0.95139, √(2MSW/C) = √[2(0.95139)/4] = 0.6897 and √(2MSW/R) = √[2(0.95139)/4] = 0.6897. 's' stands for significant; 'ns' stands for not significant.
i. A Single Confidence Interval
If we desire a single interval we use the formula for a Bonferroni Confidence Interval with m = 1. Note that since P = 1, we must replace RC(P − 1) with (R − 1)(C − 1). t.025(9) = 2.262.
For row means, μ1. − μ2. = (x̄1. − x̄2.) ± t(R−1)(C−1)α/2 · √(2MSW/C). So we have 1.75 ± 2.262(0.6897) = 1.75 ± 1.56. s
For column means, μ.1 − μ.2 = (x̄.1 − x̄.2) ± t(R−1)(C−1)α/2 · √(2MSW/R). So we have 2.75 ± 2.262(0.6897) = 2.75 ± 1.56. s
ii. Scheffé Confidence Interval
If we desire intervals that will simultaneously be valid for a given confidence level for all possible intervals between means, use the following formulas. Note that since P = 1, we must replace RC(P − 1) with (R − 1)(C − 1). F.05(3,9) = 3.86.
For row means, use μ1. − μ2. = (x̄1. − x̄2.) ± √[(R − 1)F(R−1, (R−1)(C−1))] · √(2MSW/C). So we have 1.75 ± √[3(3.86)](0.6897) = 1.75 ± 2.34 ns.
For column means, use μ.1 − μ.2 = (x̄.1 − x̄.2) ± √[(C − 1)F(C−1, (R−1)(C−1))] · √(2MSW/R) = 2.75 ± 2.34 s
iii. Bonferroni Confidence Interval – not worth the effort.
iv. Tukey Confidence Interval
Note that since P = 1, we must replace RC(P − 1) with (R − 1)(C − 1). q.05(4,9) = 4.41.
For row means, use μ1. − μ2. = (x̄1. − x̄2.) ± q(R, (R−1)(C−1))·√(MSW/C) = (x̄1. − x̄2.) ± √0.5·q(R, (R−1)(C−1))·√(2MSW/C). So we have 1.75 ± 0.70711(4.41)(0.6897) = 1.75 ± 2.15. ns
For column means, use μ.1 − μ.2 = (x̄.1 − x̄.2) ± q(C, (R−1)(C−1))·√(MSW/R) = (x̄.1 − x̄.2) ± √0.5·q(C, (R−1)(C−1))·√(2MSW/R) = 2.75 ± 2.15 s
c) The alternative to 2-way ANOVA with one measurement per cell is a Friedman test. Remember that it is rows that we want to compare. In the original data
10 11  8 12
 9  8  7 10
12 13 11 11
13 14 10 14
time of day was represented by rows. If we transpose the array, columns represent time of day:
10  9 12 13
11  8 13 14
 8  7 11 10
12 10 11 14
Now replace the original data with ranks within rows and sum the columns:
2 1 3 4
2 1 3 4
2 1 4 3
3 1 2 4
9 4 12 15
As a check, note that the column sums should add to rc(c + 1)/2 = 4(4)(5)/2 = 40, and that 9 + 4 + 12 + 15 = 40. The Friedman formula reads
χF² = [12/(rc(c + 1))]ΣSRi² − 3r(c + 1) = [12/(4·4·5)](9² + 4² + 12² + 15²) − 3(4)(5) = (12/80)(466) − 60 = 69.90 − 60 = 9.90
The Friedman table says that 9.90 has a p-value of .006. Since this is less than 5% or 1%, we reject the null hypothesis that the median number of defects is the same regardless of time of day.
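A scipy cross-check (my sketch; there are no ties within any worker's row here, so the statistic matches the hand value, though scipy's p-value uses the chi-squared approximation rather than the exact Friedman table):

from scipy.stats import friedmanchisquare

# one list per time of day; the entries within each list are the four workers (blocks)
early_am = [10, 11,  8, 12]
late_am  = [ 9,  8,  7, 10]
early_pm = [12, 13, 11, 11]
late_pm  = [13, 14, 10, 14]

stat, p = friedmanchisquare(early_am, late_am, early_pm, late_pm)
print(stat, p)   # statistic 9.90; p below .05, so time of day matters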