Solutions to Exercises in Chapter 3

3.1
(a) y = β1 + β2x² + e
    Linear in the parameters β1 and β2; nonlinear in the variables.
(b) y = β1x^β2 + e
    Nonlinear in the parameters β1 and β2.
(c) y = exp(β1 + β2x + e)
    Nonlinear form; this model can be written in a linear-in-the-parameters form, namely the log-linear model ln y = β1 + β2x + e.
(d) y = exp(β1 + β2x) + e
    Nonlinear.
(e) ln(y) = β1 + β2x + e
    Log-linear model which is linear in the parameters β1 and β2.
(f) y = β1x^β2 exp(e)
    Nonlinear in the parameters β1 and β2; however, this model can be rewritten as the log-log model

    ln y = ln β1 + β2 ln x + e = α + β2 ln x + e

    where α = ln β1. The log transformation has converted the model into one which is linear in the parameters α and β2.
3.2
(a) From the data (all sums running over t = 1, …, 5):

    Σxt² = 15,  Σxtyt = 20,  Σxt = 5,  Σyt = 10,  x̄ = 1,  ȳ = 2

(b) b2 = (5Σxtyt − ΣxtΣyt) / (5Σxt² − (Σxt)²)
       = (5(20) − (5)(10)) / (5(15) − 5²)
       = 50/50 = 1

    b1 = ȳ − b2x̄ = 2 − (1)(1) = 1
(c) See Figure 3.1.
(d) The quantities b1 and b2 are least squares estimates of the parameters β1 and β2 in the linear model yt = β1 + β2xt + et, where et is a random error term. The value b1 is an estimate of the intercept; it suggests that when xt = 0, yt = 1. The value b2 is an estimate of the slope; it suggests that when xt increases by 1 unit, the corresponding change in yt will be 1 unit.
(e) See Figure 3.1.
[Figure 3.1 Graph for Question 3(c): the observations and the fitted line y = x + 1]
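The calculations in parts (a) and (b) can be verified numerically. The following Python sketch (not part of the original solution) evaluates the least squares formulas from the printed sums:

```python
# Verify the Exercise 3.2 least squares estimates from the printed sums.
# Only the summary statistics, not the raw observations, are given.
T = 5
sum_x, sum_y = 5, 10
sum_x2, sum_xy = 15, 20

b2 = (T * sum_xy - sum_x * sum_y) / (T * sum_x2 - sum_x ** 2)  # slope
b1 = sum_y / T - b2 * sum_x / T                                # intercept
print(b1, b2)  # 1.0 1.0
```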
3.3
Note that b1 = ȳ − b2x̄, or ȳ = b1 + b2x̄. Thus, the point (x̄, ȳ) lies on the fitted line.
3.4
Consider the fitted line ŷt = b1 + b2xt. Averaging over T, we obtain

(1/T)Σŷt = (1/T)Σ(b1 + b2xt) = (1/T)(b1T + b2Σxt) = b1 + b2x̄

From Exercise 3.3, we also have ȳ = b1 + b2x̄. Thus, the mean of the fitted values equals ȳ.
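This result can also be checked numerically. A small Python sketch with hypothetical data (any data set will do, since the result holds for every least squares fit):

```python
# Check Exercise 3.4 numerically: the mean of the fitted values equals
# the mean of y. The data below are hypothetical.
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 6]
T = len(x)
xbar, ybar = sum(x) / T, sum(y) / T
b2 = (T * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)) / \
    (T * sum(a * a for a in x) - sum(x) ** 2)
b1 = ybar - b2 * xbar
yhat = [b1 + b2 * xi for xi in x]          # fitted values
print(abs(sum(yhat) / T - ybar) < 1e-12)   # True
```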
3.5
(a) Your computer should yield the following results:

    Σxt = 120,  Σyt = 42.08,  Σxtyt = 418.53,  Σxt² = 1240,  T = 15

    b2 = (TΣxtyt − ΣxtΣyt) / (TΣxt² − (Σxt)²)
       = (15(418.53) − (120)(42.08)) / (15(1240) − (120)²)
       = 1228.35/4200 = 0.29246

    b1 = ȳ − b2x̄ = (42.08/15) − 0.29246(120/15) = 0.4656
(b) Strictly speaking, the intercept coefficient, b1 = 0.4656, gives the level of output when
the feed input is zero. However, because a zero feed input is unrealistic, this
interpretation must be treated with caution; it would be expected that, when the feed
input is zero, output would be zero. The slope coefficient, b2 = 0.29246, indicates that,
for a 1 unit change in the feed input, there will be a corresponding change of 0.29 in the
poultry food output.
(c) See Figure 3.2.
(d) In terms of feed used, total cost (TC) is TC = 6x. However, a total cost function in economics conventionally relates TC to output. Since y = β1 + β2x, so that x = (y − β1)/β2, the total cost function in terms of output becomes

    TC = 6x = 6(y − β1)/β2 = −6β1/β2 + (6/β2)y

[Figure 3.2 Estimated Production Function for Poultry Feed Example: fitted line ŷ = 0.466 + 0.292x]
The corresponding marginal cost (MC) function is given by

    MC = d(TC)/dy = 6/β2

Using the least squares estimates for β1 and β2 obtained in part (a), we have

    TC = −6(0.4656)/0.29246 + (6/0.29246)y = −9.552 + 20.52y

and MC = 20.52.
(e) From part (d) the relationship between total feed costs and poultry output is given by TC = γ1 + γ2y, where

    γ1 = −6β1/β2  and  γ2 = 6/β2

Solving for the production function parameters β1 and β2 in terms of the cost function parameters γ1 and γ2, we have

    β2 = 6/γ2  and  β1 = −γ1/γ2
Thus, estimates of the production function parameters can be found by applying least squares to the cost function to obtain estimates of γ1 and γ2, and then using the above expressions for β1 and β2 to obtain the corresponding estimates. Following this procedure with the available data, we have estimates γ1 = −8.1273 and γ2 = 20.007, and thus

    β2 = 6/20.007 = 0.300  and  β1 = −(−8.1273)/20.007 = 0.406
Note that these estimates differ slightly from the least squares estimates b1 = 0.466 and b2
= 0.292 obtained directly from the production function. The difference occurs because
application of the least squares rule to the cost function is a different problem than
application of the least squares rule to the production function.
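The mapping between the production function and cost function parameters can be checked with a short Python sketch (the estimates are taken from the solution above):

```python
# Parameter mapping from Exercise 3.5(d)-(e):
#   TC = gamma1 + gamma2*y, with gamma1 = -6*beta1/beta2 and gamma2 = 6/beta2.
b1, b2 = 0.4656, 0.29246            # production-function estimates, part (a)
gamma1 = -6 * b1 / b2               # implied cost-function intercept
gamma2 = 6 / b2                     # implied marginal cost
print(round(gamma1, 3), round(gamma2, 2))   # -9.552 20.52

# Going the other way with the cost-function estimates from part (e):
g1, g2 = -8.1273, 20.007
print(round(6 / g2, 3), round(-g1 / g2, 3))  # 0.3 0.406
```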
3.6
(a) The data in logarithms are:

    yt = ln qt    xt = ln pt
    6.793466      0.207014
    6.919684      0.139762
    6.966024      0.095310
    6.984670      0.182322
    6.522093      0.300105
    6.605298      0.223144
    6.695799      0.246860
    7.150701     -0.010050
    6.852243      0.198851
    6.773080      0.223144
    6.579251      0.262364
    6.999422      0.048790

[Figure 3.3 Observations on Y = ln(q) and X = ln(p)]
(b) Calculating values for b1 and b2 is typically done on the computer. Some intermediate results and the estimates are:

    Σytxt = 14.24835,  Σyt = 81.75173,  Σxt² = 0.4661609,  Σxt = 2.117616,  Σyt² = 557.3354,  T = 12

    b2 = (TΣxtyt − ΣxtΣyt) / (TΣxt² − (Σxt)²) = −1.9273

    b1 = ȳ − b2x̄ = 7.1528
(c) The value b2 = −1.93 is an estimate of the price elasticity of demand for hamburgers. It suggests that a 1% increase in price will decrease quantity demanded by 1.93%.
(d) If demand is elastic (β2 < −1), then decreasing price will increase total receipts; with an inelastic demand (−1 < β2 < 0), decreasing price will decrease total receipts. Our elasticity estimate of b2 = −1.93 suggests Makkers should decrease price.
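The estimates in part (b) can be reproduced from the printed intermediate sums with a few lines of Python (a sketch, not part of the original solution):

```python
# Reproduce the Exercise 3.6 estimates from the printed sums.
T = 12
sum_x, sum_y = 2.117616, 81.75173
sum_x2, sum_xy = 0.4661609, 14.24835

b2 = (T * sum_xy - sum_x * sum_y) / (T * sum_x2 - sum_x ** 2)
b1 = sum_y / T - b2 * sum_x / T
print(round(b2, 4), round(b1, 4))  # -1.9273 7.1528
```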
3.7
If the intercept in the simple linear regression model is zero, the model becomes

    yt = βxt + et

To save on subscript notation we have set β2 = β.

(a) The sum-of-squares function becomes

    S(β) = Σ(yt − βxt)² = Σ(yt² − 2βxtyt + β²xt²) = Σyt² − 2βΣxtyt + β²Σxt²

(b) To find the value of β that minimizes S(β) we obtain

    dS/dβ = −2Σxtyt + 2βΣxt²

Setting this derivative equal to zero, we have

    bΣxt² = Σxtyt  or  b = Σxtyt / Σxt²

(c) From Exercise 3.5, we have

    Σyt² = 142.6064,  Σxtyt = 418.53,  Σxt² = 1240

Thus, the sum of squares function is

    S(β) = 142.6064 − 837.06β + 1240β²

From Figure 3.4 we can see that the minimizing value is about b = 0.34. This is the least squares estimate because, from part (b),

    b = Σxtyt / Σxt² = 418.53/1240 = 0.33752

(d) When β2 = 0, but β1 ≠ 0, the model becomes

    yt = β1 + et
[Figure 3.4 Sum of Squares Function S(β), minimized near β = 0.34]
The sum-of-squares function becomes

    S(β1) = Σ(yt − β1)² = Σyt² − 2β1Σyt + Tβ1²

To get the minimizing value, we have

    ∂S(β1)/∂β1 = −2Σyt + 2Tβ1

Setting this derivative to zero yields

    Σyt − Tb1 = 0  or  b1 = Σyt/T = ȳ

When we have only an intercept, and no explanatory variable, the least squares estimator for the intercept is the sample mean ȳ. For the data in Table 3.2 the sum of squares function is

    S(β1) = 142.6064 − 84.16β1 + 15β1²

From Figure 3.5, we see that the minimizing value is ȳ = 2.8053.
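Both minimizations in this exercise are quadratics in one parameter, so the closed-form estimate must coincide with the vertex of the parabola. A short Python check (a sketch, not part of the original solution):

```python
# Exercise 3.7: for y_t = beta*x_t + e_t the estimate is b = sum(xy)/sum(x^2);
# the same value is the vertex -B/(2A) of S(beta) = C + B*beta + A*beta^2.
sum_xy, sum_x2 = 418.53, 1240
b = sum_xy / sum_x2
vertex = 837.06 / (2 * 1240)
print(round(b, 5), round(vertex, 5))  # 0.33752 0.33752

# Intercept-only model: b1 is the vertex of 142.6064 - 84.16*b1 + 15*b1^2.
b1 = 84.16 / (2 * 15)
print(round(b1, 4))  # 2.8053
```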
3.8
(a) The plots of u against q and ln(u) against ln(q) appear in Figures 3.6 and 3.7. The two
plots are quite similar in nature.
(b) b1 = 6.0191 and b2 = −0.3857. Since ln(u1) = β1, an estimate of u1 is

    û1 = exp(b1) = exp(6.0191) = 411.208

This result suggests that 411.2 was the cost of producing the first unit. The estimate b2 = −0.3857 suggests that a 1% increase in cumulative production will decrease costs by 0.386%. The numbers seem sensible.
[Figure 3.5 Sum of Squares Function S(β1), minimized near β1 = 2.8]
[Figure 3.6 Learning Curve Data: plot of u against q]

[Figure 3.7 Learning Curve Data in Logs: plot of ln(u) against ln(q)]
3.9
(a) The model is a simple regression model because it can be written as y = β1 + β2x + e, where y = rj − rf, x = rm − rf, β1 = αj and β2 = βj.
(b) The estimate of Mobil Oil's beta is b2 = 0.7147. This value suggests Mobil Oil's stock is defensive, since its beta is less than 1.
(c) b1 = α̂j = 0.00424. This small value seems consistent with finance theory, which suggests αj = 0.
(d) Given αj = 0, the estimate of Mobil's beta is b2 = 0.7211. The restriction αj = 0 has led to only a small change in the estimated beta.
3.10
When xt = c for all t, the denominator in equation 3.3.8(a) is

    TΣxt² − (Σxt)² = T(Tc²) − (Tc)² = T²c² − T²c² = 0

The numerator is

    TΣxtyt − ΣxtΣyt = T(cΣyt) − (Tc)Σyt = TcΣyt − TcΣyt = 0

Thus, the least squares estimator cannot be calculated when xt = c for all t.
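A quick numerical illustration of this degenerate case (a sketch with arbitrary y values):

```python
# Exercise 3.10: when x_t = c for all t, both the numerator and the
# denominator of the slope formula are zero, so b2 is undefined (0/0).
T, c = 5, 3.0
x = [c] * T
y = [1.0, 4.0, 2.0, 5.0, 3.0]   # arbitrary y values
num = T * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)
den = T * sum(a * a for a in x) - sum(x) ** 2
print(num, den)  # 0.0 0.0
```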
3.11
(a) The observations on y and x and the least squares line estimated in part (b) are graphed in Figure 3.8. The line drawn for part (a) will depend on each student's subjective choice about the position of the line. For this reason, it has been omitted.
(b) Preliminary calculations yield:

    Σxt = 21,  Σyt = 44,  Σxtyt = 176,  Σxt² = 91,  ȳ = 7.3333,  x̄ = 3.5

The least squares estimates are

    b2 = (TΣxtyt − ΣxtΣyt) / (TΣxt² − (Σxt)²) = (6(176) − (21)(44)) / (6(91) − 21²) = 1.2571

    b1 = ȳ − b2x̄ = 7.3333 − 1.2571(3.5) = 2.9333
[Figure 3.8 Observations and Fitted Line for Exercise 3.11]
[Figure 3.9 Plot of residuals against x for Exercise 3.11]

(c) ȳ = Σyt/T = 44/6 = 7.3333 and x̄ = Σxt/T = 21/6 = 3.5. The predicted value for y at x = x̄ is

    ŷ = b1 + b2x̄ = 2.9333 + 1.2571(3.5) = 7.3333

We observe that the predicted value at the sample mean x̄ is the sample mean of the dependent variable, ȳ. The least squares estimated line passes through the point (x̄, ȳ).
(d) The values of the least squares residuals, computed from êt = yt − b1 − b2xt, are:

    ê1 = −0.19048,  ê2 = 0.55238,  ê3 = 0.29524,  ê4 = −0.96190,  ê5 = −0.21905,  ê6 = 0.52381

Their sum is Σêt = 0.
(e) The least-squares residuals are plotted against x in Figure 3.9. They seem to exhibit a
cyclical pattern. However, with only six points it is difficult to draw any general
conclusions.
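The calculations in parts (b)-(d) can be reproduced in Python. The raw observations are not printed in this solution, so the data below are hypothetical values chosen to be consistent with the printed sums (Σxt = 21, Σyt = 44, Σxtyt = 176, Σxt² = 91):

```python
# Sketch reproducing the Exercise 3.11 calculations with hypothetical data
# consistent with the printed sums.
x = [1, 2, 3, 4, 5, 6]
y = [4, 6, 7, 7, 9, 11]
T = len(x)
b2 = (T * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)) / \
    (T * sum(a * a for a in x) - sum(x) ** 2)
b1 = sum(y) / T - b2 * sum(x) / T
resid = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]
print(round(b2, 4), round(b1, 4))      # 1.2571 2.9333
print(abs(sum(resid)) < 1e-9)          # True: residuals sum to zero
```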
3.12
(a) If β1 = 0, the simple linear regression model becomes

    yt = β2xt + et

(b) Graphically, setting β1 = 0 implies the mean of the simple linear regression model, E(yt) = β2xt, passes through the origin (0, 0).
(c) The least squares estimators in (3.3.8) are no longer appropriate if we know β1 = 0 because, in this case, a different least squares sum-of-squares function needs to be minimized.
(d) To save on subscript notation we set β2 = β. The sum of squares function becomes

    S(β) = Σ(yt − βxt)² = Σ(yt² − 2βxtyt + β²xt²) = Σyt² − 2βΣxtyt + β²Σxt²
         = 352 − 2(176)β + 91β² = 352 − 352β + 91β²

A plot of this quadratic function appears in Figure 3.10. The minimum of this function is approximately 12 and occurs at approximately β2 = 1.95. The significance of this value is that it is the least squares estimate.
(e) To find the value of β that minimizes S(β) we obtain

    dS/dβ = −2Σxtyt + 2βΣxt²

Setting this derivative equal to zero, we have

    bΣxt² = Σxtyt  or  b = Σxtyt/Σxt²

Thus, the least squares estimate is

    b2 = 176/91 = 1.9341

which agrees with the approximate value of 1.95 that we obtained geometrically.
(f) The computer generated value is also b2 = 1.9341.
(g) The fitted regression line is plotted in Figure 3.11. Note that the point (x̄, ȳ) does not lie on the fitted line in this instance.
(h) The least squares residuals, obtained from êt = yt − b2xt, are:

    ê1 = 2.0659,  ê2 = 2.1319,  ê3 = 1.1978,  ê4 = −0.7363,  ê5 = −0.6703,  ê6 = −0.6044

Their sum is Σêt = 3.3846. Note this value is not equal to zero, as it was for the model that included the intercept β1.
[Figure 3.10 Sum of squares function for β2, minimized near β = 1.93]

[Figure 3.11 Fitted regression through the origin: the point (x̄, ȳ) = (3.5, 7.333) does not lie on the line]
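The through-the-origin estimate and the minimum of the sum-of-squares function can be confirmed with a short Python sketch (using the sums printed in the solution):

```python
# Exercise 3.12: b2 = sum(xy)/sum(x^2), and the minimized sum of squares
# S(b2) = 352 - 352*b2 + 91*b2^2 is approximately 12, as read off Figure 3.10.
b2 = 176 / 91
S = 352 - 352 * b2 + 91 * b2 ** 2
print(round(b2, 4), round(S, 2))  # 1.9341 11.6
```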
3.13
(a) If β2 = 0, the simple linear regression model becomes

    yt = β1 + et

(b) Graphically, β2 = 0 implies that the mean function for yt is a horizontal straight line that is constant for all values of x. That is,

    E(yt) = β1

(c) Having β2 = 0 implies there is no relationship between y and x.
(d) The formulas in (3.3.8) are not appropriate if we know β2 = 0 because they are obtained by minimizing a sum-of-squares function that includes β2.
(e) The sum-of-squares function becomes

    S(β1) = Σ(yt − β1)² = Σyt² − 2β1Σyt + Tβ1² = 352 − 88β1 + 6β1²

This quadratic function is plotted in Figure 3.12. The value of β1 that minimizes S(β1) is the least squares estimate. From the graph we see that this value is approximately β1 = 7.35.
(f) To find the formula for the least squares estimate of β1 we differentiate S(β1), yielding

    ∂S(β1)/∂β1 = −2Σyt + 2Tβ1

Setting this derivative to zero yields

    Σyt − Tb1 = 0  or  b1 = Σyt/T = ȳ

When we have only an intercept, and no explanatory variable, the least squares estimator for the intercept is the sample mean ȳ. Its value is b1 = ȳ = 7.333, which is close to the approximate value that was obtained geometrically.
[Figure 3.12 Sum of squares function for β1 for Exercise 3.13]
[Figure 3.13 Graph of sales-advertising regression line for Exercise 3.14 (weekly averages)]
3.14
(a) The consultant’s report implies that the least squares estimates satisfy the following two
equations
    b1 + 450b2 = 7500
    b1 + 600b2 = 8500

Solving these two equations yields

    b2 = 1000/150 = 6.6667

    b1 = 7500 − 450(6.6667) = 4500
(b) A graph of the estimated regression line appears in Figure 3.13.
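The algebra in part (a) amounts to solving two equations in two unknowns; a Python sketch (not part of the original solution):

```python
# Exercise 3.14: recover b1 and b2 from the two reported fitted points
# (advertising 450 -> sales 7500, advertising 600 -> sales 8500).
b2 = (8500 - 7500) / (600 - 450)
b1 = 7500 - 450 * b2
print(round(b2, 4), round(b1, 1))  # 6.6667 4500.0
```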
3.15
(a) The intercept estimate b1 = −240 is an estimate of the number of sodas sold when the temperature is 0 degrees Fahrenheit. A common problem when interpreting the estimated intercept is that we often do not have any data points near x = 0. If we have no observations in the region where temperature is 0, then the estimated relationship may not be a good approximation to reality in that region. Clearly, it is impossible to sell −240 sodas and so this estimate should not be accepted as a sensible one.

The slope estimate b2 = 6 is an estimate of the increase in sodas sold when temperature increases by 1 Fahrenheit degree. This estimate does make sense. One would expect the number of sodas sold to increase as temperature increases.

(b) If temperature is 80 degrees, the predicted number of sodas sold is

    ŷ = −240 + 6(80) = 240

(c) If no sodas are sold, y = 0, and

    0 = −240 + 6x  or  x = 40

Thus, she predicts no sodas will be sold below 40°F.
(d) A graph of the estimated regression line appears in Figure 3.14.
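The predictions in parts (b) and (c) follow directly from the fitted line; a short Python sketch:

```python
# Exercise 3.15: predictions from the fitted line y = -240 + 6x.
b1, b2 = -240, 6
sodas_at_80 = b1 + b2 * 80   # predicted sales at 80 degrees
breakeven = -b1 / b2         # temperature at which predicted sales reach zero
print(sodas_at_80, breakeven)  # 240 40.0
```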
[Figure 3.14 Graph of regression line for soda sales and temperature for Exercise 3.15]
3.16
(a)
    Variable    Mean      St. Devn.   Minimum    Maximum
    price       82133     12288       52000      111002
    sqft        1794.6    200.02      1402.0     2100.0
(b) The plot in Figure 3.15 suggests a positive relationship between house price and size.
(c) The estimated equation is

    price = −426.71 + 46.005 sqft

The coefficient 46.005 suggests house price increases by approximately $46 for each additional square foot of house size. The intercept, if taken literally, suggests a house with zero square feet would cost −$426.71. The model should not be accepted as a serious one in the region of zero square feet, a meaningless value.
(d) A plot of the residuals against square feet appears in Figure 3.16. There appears to be a
slight tendency for the magnitude of the residuals to increase for larger-sized houses.
(e) The predicted price of a house with 2000 square feet of living space is

    price = −426.71 + 46.005(2000) = 91,583
[Figure 3.15 Price against square feet]

[Figure 3.16 Residuals against square feet]