Given name:____________________ Family name:___________________ Student #:______________________


BUEC 333 FINAL

Multiple Choice (2 points each)

1) Which of the following is not an assumption of the CLNRM? a) The errors are uniformly distributed b) The model is correctly specified c) The independent variables are exogenous d) The errors have mean zero e) The errors have constant variance

2) In the regression model ln Y_i = β_0 + β_1 ln X_i + ε_i: a) β_1 measures the elasticity of Y with respect to X b) β_1 measures the elasticity of X with respect to Y c) β_1 measures the percentage change in Y for a one unit change in X d) the marginal effect of X on Y is constant e) none of the above

3) If two random variables X and Y are independent: a) their joint distribution equals the product of their conditional distributions b) the posterior distribution of X given Y equals the marginal distribution of X c) E [ XY ] = E [ X ] E [ Y ] d) b and c e) none of the above

4) The power of a test is the probability that you: a) reject the null when it is true b) reject the null when it is false c) fail to reject the null when it is false d) fail to reject the null when it is true e) none of the above

5) R-squared is: a) the residual sum of squares as a fraction of the total variation in the independent variable b) the explained sum of squares as a fraction of the total sum of squares c) one minus the answer in a) d) one minus the answer in b) e) none of the above

6) In the linear regression model, the least squares estimator: a) maximizes the value of R² b) minimizes the sum of squared residuals c) features the smallest possible sample variance d) all of the above e) only a) and b)


7) If q is an unbiased estimator of Q , then: a) Q is the mean of the sampling variance of q b) q is the mean of the sampling distribution of Q c) Var[ q ] = Var[ Q ] / n where n = the sample size d) q = Q e) none of the above

8) In the Capital Asset Pricing Model (CAPM): a) β measures the sensitivity of the expected return of a portfolio to systematic risk b) β measures the sensitivity of the expected return of a portfolio to specific risk c) β is greater than one d) α is less than zero e) R² is meaningless

9) In the regression specification Y_i = β_0 + β_1 X_i + ε_i, which of the following is a justification for including epsilon? a) it accounts for potential non-linearity in the functional form b) it captures the influence of all omitted explanatory variables c) it incorporates measurement error in Y d) it reflects randomness in outcomes e) all of the above

10) Suppose [L(X), U(X)] is a 95% confidence interval for a population mean μ. Which of the following is/are true? a) Pr(L ≤ μ ≤ U) = 0.90 b) Pr(μ ≤ L) + Pr(U ≤ μ) = 0.95 c) Pr(μ ≤ L) + Pr(U ≤ μ) = 0.05 d) a and c e) none of the above

11) Omitting a constant term from our regression will likely lead to: a) higher R², higher F stat, and biased estimates of the independent variables when β_0 ≠ 0 b) higher R², lower F stat, and biased estimates of the independent variables when β_0 ≠ 0 c) higher R², lower F stat, and unbiased estimates of the independent variables when β_0 ≠ 0 d) higher R², higher F stat, and unbiased estimates of the independent variables when β_0 ≠ 0 e) none of the above

12) In order for an independent variable to be labelled “exogenous”, which of the following must be true: a) E(ε_i) = 0 b) Cov(X_i, ε_i) = 0 c) Cov(ε_i, ε_j) = 0 d) Var(ε_i) = σ² e) none of the above


13) Pure serial correlation: a) relates to the persistence of errors in the regression model b) can be detected with the RESET test statistic c) is caused by mis-specification of the regression model d) b and c e) all of the above

14) The sampling variance of the slope coefficient in the regression model with one independent variable: a) will be smaller when there is more variation in X b) will be smaller when there is more variation in ε c) will be larger when there is less variation in ε d) will be larger when there is more co-variation in ε and X e) none of the above

15) Suppose you compute a sample statistic q to estimate a population quantity Q . Which of the following is/are true?

[1] the variance of Q is zero

[2] if q is an unbiased estimator of Q , then q = E(Q)

[3] if q is an unbiased estimator of Q , then Q is the mean of the sampling distribution of q

[4] a 95% confidence interval for q contains Q with 95% probability

a) 1 only b) 3 only c) 1 and 3 d) 1, 2, and 3 e) 1, 2, 3, and 4

16) Suppose the monthly demand for tomatoes (a perishable good) in a small town is random. With probability 1/2, demand is 50; with probability 1/2, demand is 100. You are the only producer of tomatoes in this town. Tomatoes sell for a fixed price of $1, cost $0.50 to produce, and can only be sold in the local market. If you produce 60 tomatoes, your expected profit is: a) $15 b) $35 c) $55 d) $75 e) none of the above

17) The OLS estimator is said to be unbiased when: a) Assumptions 1 through 3 are satisfied b) Assumptions 1 through 6 are satisfied c) Assumptions 1 through 3 are satisfied and errors are normally distributed d) Assumptions 1 through 6 are satisfied and errors are normally distributed e) all of the above

18) The RESET test is designed to detect problems associated with: a) specification error of an unknown form b) heteroskedasticity c) multicollinearity d) serial correlation e) none of the above


19) The Durbin-Watson test is only valid: a) with models that include an intercept b) with models that include a lagged dependent variable c) with models displaying multiple orders of autocorrelation d) all of the above e) none of the above

20) The consequences of multicollinearity are that the OLS estimates: a) will be biased while the standard errors will remain unaffected b) will be biased while the standard errors will be smaller c) will be unbiased while the standard errors will remain unaffected d) will be unbiased while the standard errors will be smaller e) none of the above


Short Answer #1 (10 points)

Suppose you have observations on a dependent variable, Y, and an independent variable, X. a) Provide a plot of your X and Y values with a regression line through the points that would indicate the presence of heteroskedasticity in the errors of the regression model. b) Is your graph above indicative of a model with pure heteroskedasticity or impure heteroskedasticity? Discuss. c) Explain the consequences of using OLS estimation if the error terms in the regression model are heteroskedastic.


Page intentionally left blank. Use this space for rough work or the continuation of an answer.


Short Answer #2 (10 points)

Consider the following regression model:

Y_i = β_0 + β_1 X_i + ε_i.

Suppose you have 101 observations and know the following summary statistics:

Σ(Y_i − Ȳ)² = 15,000    Σ e_i² = 45    Σ(X_i − X̄)² = 50    Cov(X_i, Y_i) = 50

a) In the most general terms possible, what is the expression for a (1-α)% confidence interval for the unknown population slope parameter? (2 points) b) Using a critical value of 2.5, numerically construct a (1-α)% confidence interval for the unknown population slope parameter. (4 points) c) What is the precise interpretation of the confidence interval given in b)? (4 points)


Page intentionally left blank. Use this space for rough work or the continuation of an answer.


Short Answer #3 (10 points)

Consider the following regression model:

Y_i = β_0 + β_1 X_{1i} + β_2 X_{2i} + ε_i.

Suppose you forget to include the variable X_2 in the regression you estimate. a) Derive an expression for the omitted variable bias resulting from your estimation.

b) If you could not obtain data on X_2, what can you do to eliminate or diminish the omitted variable bias?


Page intentionally left blank. Use this space for rough work or the continuation of an answer.


Short Answer #4 (10 points)

Consider the simple univariate regression model,

Y_i = β_0 + β_1 X_i + ε_i.

Demonstrate that the sample regression line passes through the sample mean of both X and Y.


Page intentionally left blank. Use this space for rough work or the continuation of an answer.


Short Answer #5 (10 points)

Consider the following regression model:

Y_i = β_0 + β_1 X_{1i} + β_2 X_{2i} + ε_i.

a) State the underlying assumptions for the classical linear regression model given above.

b) Which of these assumptions are necessary for our estimator to be unbiased and which are necessary for it to be efficient? c) Graphically illustrate the following assumptions: E(ε_i) = 0, Cov(ε_i, ε_j) = 0, and Var(ε_i) = σ². d) Sometimes, the seventh assumption related to the normality of ε is used, which implies that the β̂'s are also normally distributed. But when we estimate via OLS, we always arrive at a single number and not a distribution of values. Explain why this is the case.

Page intentionally left blank. Use this space for rough work or the continuation of an answer.


Short Answer #6 (10 points)


Consider the following set of results for a log-log specification of NHL salaries on two independent variables, age and points.

For the following statistical tests, specify what the null hypothesis of the relevant test is and provide the appropriate interpretation given the results above: a) the t test associated with the independent variable “points” (use a critical value of 2.58) b) the F test associated with “age” and “points” in combination (use a critical value of 4.61) c) the RESET test using the F statistic (use a critical value of 6.64) d) the Durbin-Watson test (use a lower critical value of 1.55 and upper critical value of 1.80)

Page intentionally left blank. Use this space for rough work or the continuation of an answer.


Page intentionally left blank. Use this space for rough work or the continuation of an answer.


Useful Formulas:


E[Y | X = x] = Σ_{i=1..k} y_i Pr(Y = y_i | X = x)

Var(Y | X = x) = Σ_{i=1..k} [y_i − E(Y | X = x)]² Pr(Y = y_i | X = x)

Pr(X = x_i) = Σ_{j=1..m} Pr(X = x_i, Y = y_j)

E(X) = Σ_{i=1..k} x_i p_i

E(Y) = Σ_{i=1..m} E(Y | X = x_i) Pr(X = x_i)

Var(X) = E(X²) − [E(X)]²

E(a + bX + cY) = a + bE(X) + cE(Y)

Cov(X, Y) = σ_XY = Σ_{i=1..k} Σ_{j=1..m} (x_i − μ_X)(y_j − μ_Y) Pr(X = x_i, Y = y_j)

Cov(X, Y) = E(XY) − E(X)E(Y)

Corr(X, Y) = σ_XY / (σ_X σ_Y)

Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y)

Var(a + bY) = b² Var(Y)

Cov(a + bX + cV, Y) = b Cov(X, Y) + c Cov(V, Y)

X̄ = (1/n) Σ_{i=1..n} x_i

s_X² = [1/(n − 1)] Σ_{i=1..n} (x_i − x̄)²

s_XY = [1/(n − 1)] Σ_{i=1..n} (x_i − x̄)(y_i − ȳ)

r_XY = s_XY / (s_X s_Y)

X̄ ~ N(μ, σ²/n),   Z = (X̄ − μ) / (σ/√n),   t = (X̄ − μ) / (s_X/√n)
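As a quick illustration of the sample formulas above, the following sketch computes the sample mean, variance, covariance, correlation, and a t-statistic for a hypothesized mean in plain Python. The data values and the hypothesized mean mu_0 are made up for illustration only; they are not taken from the exam.

```python
# Illustrative sketch of the descriptive-statistics formulas above.
# The data in x and y and the hypothesized mean mu_0 are made up.
import math

x = [2.0, 4.0, 6.0, 8.0, 10.0]
y = [1.5, 3.0, 5.5, 7.0, 9.5]
n = len(x)

x_bar = sum(x) / n                                    # sample mean of X
y_bar = sum(y) / n                                    # sample mean of Y
s2_x = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)   # sample variance of X
s2_y = sum((yi - y_bar) ** 2 for yi in y) / (n - 1)   # sample variance of Y
s_xy = sum((xi - x_bar) * (yi - y_bar)
           for xi, yi in zip(x, y)) / (n - 1)         # sample covariance
r_xy = s_xy / math.sqrt(s2_x * s2_y)                  # sample correlation

mu_0 = 5.0                                            # hypothesized population mean
t_stat = (x_bar - mu_0) / (math.sqrt(s2_x) / math.sqrt(n))

print(x_bar, s2_x, s_xy, r_xy, t_stat)
```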

For the linear regression model Y_i = β_0 + β_1 X_i + ε_i:

β̂_1 = Σ_{i=1..n} (X_i − X̄)(Y_i − Ȳ) / Σ_{i=1..n} (X_i − X̄)²   and   β̂_0 = Ȳ − β̂_1 X̄

Ŷ_i = β̂_0 + β̂_1 X_{1i} + β̂_2 X_{2i} + … + β̂_k X_{ki}

R² = ESS/TSS = 1 − RSS/TSS = 1 − Σ_i e_i² / Σ_i (Y_i − Ȳ)²

s² = Σ_i e_i² / (n − k − 1),   where E(s²) = σ²

Adjusted R²:   R̄² = 1 − [Σ_i e_i² / (n − k − 1)] / [Σ_i (Y_i − Ȳ)² / (n − 1)]

Z = (β̂_j − β_{H0}) / √Var[β̂_j] ~ N(0, 1)

t = (β̂_1 − β_{H0}) / s.e.(β̂_1) ~ t_{n−k−1}

Pr[β̂_j − t*_{α/2} · s.e.(β̂_j) ≤ β_j ≤ β̂_j + t*_{α/2} · s.e.(β̂_j)] = 1 − α

F = [(RSS_M − RSS) / M] / [RSS / (n − k − 1)]

d = Σ_{t=2..T} (e_t − e_{t−1})² / Σ_{t=1..T} e_t² ≈ 2(1 − ρ)
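For reference, the sketch below works through the single-regressor formulas above on made-up data: the OLS slope and intercept, R² and adjusted R², the standard error and t-statistic of the slope, a confidence interval, and the Durbin-Watson d. Everything in it (the data, the null hypothesis β_1 = 0, and the critical value 2.0) is an assumption chosen for illustration, not a value from this exam.

```python
# Illustrative sketch of the regression formulas above (one regressor, k = 1).
# The data, the null value 0.0, and the critical value 2.0 are all made up.
import math

X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]
n, k = len(X), 1

x_bar = sum(X) / n
y_bar = sum(Y) / n

sxx = sum((xi - x_bar) ** 2 for xi in X)                        # sum of squared X-deviations
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(X, Y))  # sum of cross-deviations

b1 = sxy / sxx           # OLS slope
b0 = y_bar - b1 * x_bar  # OLS intercept: line passes through (X-bar, Y-bar)

e = [yi - (b0 + b1 * xi) for xi, yi in zip(X, Y)]  # residuals
rss = sum(ei ** 2 for ei in e)                     # residual sum of squares
tss = sum((yi - y_bar) ** 2 for yi in Y)           # total sum of squares
r2 = 1.0 - rss / tss                               # R-squared
r2_adj = 1.0 - (rss / (n - k - 1)) / (tss / (n - 1))  # adjusted R-squared

s2 = rss / (n - k - 1)        # estimate of the error variance
se_b1 = math.sqrt(s2 / sxx)   # standard error of the slope
t_b1 = (b1 - 0.0) / se_b1     # t-statistic against H0: beta_1 = 0

t_crit = 2.0                                     # illustrative critical value
ci = (b1 - t_crit * se_b1, b1 + t_crit * se_b1)  # confidence interval for beta_1

dw = sum((e[t] - e[t - 1]) ** 2 for t in range(1, n)) / rss  # Durbin-Watson d

print(b1, b0, r2, r2_adj, se_b1, t_b1, ci, dw)
```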
