Econ 2280 Introductory Econometrics
Tutorial 2 Review Notes: The Simple Regression Model (I)

1 The simple regression model

Population regression model:

– The general expression: $y = \beta_0 + \beta_1 x + u$.
– The expression for individual $i$: $y_i = \beta_0 + \beta_1 x_i + u_i$.

Population regression line:

– The general expression: $E(y \mid x) = \beta_0 + \beta_1 x$.
– The expression for individual $i$: $E(y_i \mid x_i) = \beta_0 + \beta_1 x_i$.

Sample regression model:

– The general expression: $y = \hat{\beta}_0 + \hat{\beta}_1 x + \hat{u}$.
– The expression for individual $i$: $y_i = \hat{\beta}_0 + \hat{\beta}_1 x_i + \hat{u}_i$.

Sample regression line:

– The general expression: $\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x$.
– The expression for individual $i$: $\hat{y}_i = \hat{\beta}_0 + \hat{\beta}_1 x_i$.

The Gauss-Markov Assumptions

Assumption 1 (SLR.1) Linear in parameters. In the population model, the dependent variable $y$ is related to the independent variable $x$ and the error term $u$ by
$$y = \beta_0 + \beta_1 x + u, \qquad (2.47)$$
where $\beta_0$ and $\beta_1$ are the population intercept and slope parameters, respectively.

Assumption 2 (SLR.2) Random sampling. We have a random sample of size $n$, $\{(x_i, y_i) : i = 1, 2, \ldots, n\}$, following the population model in (2.47).

Assumption 3 (SLR.3) Sample variation in the regressor. The sample outcomes on $x$, i.e., $\{x_i : i = 1, 2, \ldots, n\}$, are not all the same value.

Assumption 4 (SLR.4) Zero conditional mean. The error $u$ has an expected value of zero given any value of the explanatory variable, i.e., $E(u \mid x) = 0$.

Assumption 5 (SLR.5) Homoskedasticity. The error term $u$ has the same variance given any value of the explanatory variable, i.e., $Var(u \mid x) = \sigma^2$.

2 Desirable properties of estimators

1. Unbiasedness
An estimator is unbiased when its bias equals 0, where bias of estimator $= E(\text{estimator}) -$ value of the parameter being estimated.

2. Reliability/Efficiency
The smaller the variability of an estimator, the more reliable it is, and the more efficient it is relative to competing estimators.

Mean squared error (MSE) criterion: choose the estimator with the smallest MSE, where MSE of estimator $=$ variance of estimator $+$ (bias of estimator)$^2$.

3. Consistency
Consistency describes how the distribution of the estimator behaves as the sample size $n$ goes to infinity, i.e., it is a statement about the asymptotic distribution of the estimator. A consistent estimator converges in probability to the population parameter being estimated.

2.1 Best Linear Unbiased Estimator (BLUE)

Under SLR.1–SLR.4, the OLS estimator is unbiased, i.e., $E(\hat{\beta}_1) = \beta_1$.

Under SLR.1–SLR.5, the OLS estimator has the smallest variance among all linear unbiased estimators, which means it is BLUE.
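For reference, since the notes use the OLS estimators $\hat{\beta}_0$ and $\hat{\beta}_1$ without restating them: OLS chooses the intercept and slope that minimize the sum of squared residuals $\sum_{i=1}^{n} \hat{u}_i^2$, which gives the standard formulas
$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}.$$
(SLR.3 guarantees that the denominator is nonzero.)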
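The unbiasedness result can be illustrated by simulation. Below is a minimal Python sketch assuming hypothetical values $\beta_0 = 1$, $\beta_1 = 0.5$ and normal homoskedastic errors (none of these choices come from the notes): it draws many random samples satisfying SLR.1–SLR.5 and checks that the OLS slope estimates average out to $\beta_1$.

```python
import numpy as np

# Monte Carlo sketch of E(beta1_hat) = beta1.
# beta0, beta1, n, reps are illustrative choices, not from the notes.
rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 0.5
n, reps = 100, 10_000

slopes = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0, 10, size=n)   # regressor with sample variation (SLR.3)
    u = rng.normal(0, 2, size=n)     # E(u|x) = 0 and Var(u|x) = 4 (SLR.4, SLR.5)
    y = beta0 + beta1 * x + u        # linear population model (SLR.1)
    # OLS slope: sum((x - xbar)(y - ybar)) / sum((x - xbar)^2)
    xc = x - x.mean()
    slopes[r] = (xc * (y - y.mean())).sum() / (xc ** 2).sum()

print(slopes.mean())  # close to 0.5: the estimates average out to beta1
```

Each replication is an independent random sample (SLR.2). The average of the 10,000 slope estimates lands very close to 0.5, even though any single estimate can miss in either direction.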
Interpretation of the slope coefficient under different regression forms

Model         Dependent variable   Independent variable   Interpretation of $\beta_1$
level-level   $y$                  $x$                     $\Delta y = \beta_1 \Delta x$
level-log     $y$                  $\log(x)$               $\Delta y = (\beta_1 / 100) \, \%\Delta x$
log-level     $\log(y)$            $x$                     $\%\Delta y = (100 \beta_1) \Delta x$
log-log       $\log(y)$            $\log(x)$               $\%\Delta y = \beta_1 \, \%\Delta x$

An example:
$$\log(wage) = \beta_0 + \beta_1 educ + u \qquad (1)$$
$$\widehat{\log(wage)} = \hat{\beta}_0 + \hat{\beta}_1 educ \qquad (2)$$

Here $y$ is wage and $x$ is educ (years of education), so this is the log-level form.

The interpretation of $\beta_1$: wage increases by $100 \beta_1$ percent for every additional year of education.

The interpretation of $\hat{\beta}_1$: wage is predicted to increase by $100 \hat{\beta}_1$ percent for every additional year of education. (The $100 \hat{\beta}_1$ percent figure is an approximation that is accurate when $\hat{\beta}_1$ is small; the exact percentage change is $100(e^{\hat{\beta}_1} - 1)$.)

3 Review of partial differentiation/partial derivatives

Partial differentiation works similarly to differentiation of a function of a single variable; we apply it to functions of more than one variable. Most of the differentiation rules learned in an introductory mathematics course apply to partial differentiation as well.

If $y = f(x_1, x_2, \ldots, x_n)$ and we want to find the partial derivative of $y$ with respect to $x_1$, then we treat the variables $x_2, \ldots, x_n$ as constants. We denote this partial derivative by $\frac{\partial y}{\partial x_1}$.

Example 1. $y = a + bx + u$:
$$\frac{\partial y}{\partial x} = \frac{\partial (a + bx + u)}{\partial x} = \frac{\partial a}{\partial x} + \frac{\partial (bx)}{\partial x} + \frac{\partial u}{\partial x} = 0 + b + \frac{\partial u}{\partial x} = b + \frac{\partial u}{\partial x}.$$

Example 2. $\log(y) = a + bx + u$. By the chain rule,
$$\frac{\partial y}{\partial x} = \frac{\partial y}{\partial \log(y)} \cdot \frac{\partial \log(y)}{\partial x} = \frac{1}{\partial \log(y) / \partial y} \left( b + \frac{\partial u}{\partial x} \right) = \frac{1}{1/y} \left( b + \frac{\partial u}{\partial x} \right) = y \left( b + \frac{\partial u}{\partial x} \right).$$

Example 3. $y = (a + bx + u)^2$. By the chain rule,
$$\frac{\partial y}{\partial x} = \frac{\partial y}{\partial (a + bx + u)} \cdot \frac{\partial (a + bx + u)}{\partial x} = 2 (a + bx + u) \left( b + \frac{\partial u}{\partial x} \right).$$
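As a quick cross-check of Examples 1–3, here is a small sympy sketch (the symbol names mirror the examples). Declaring $u$ as an unspecified function of $x$ is what keeps the $\partial u / \partial x$ terms alive, exactly as in the derivations above.

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
u = sp.Function('u')(x)  # u left as an unspecified function of x

# Example 1: y = a + b*x + u  =>  dy/dx = b + du/dx
y1 = a + b * x + u
print(sp.diff(y1, x))                    # b + Derivative(u(x), x)

# Example 2: log(y) = a + b*x + u  =>  dy/dx = y*(b + du/dx)
y2 = sp.exp(a + b * x + u)               # y solved from log(y) = a + b*x + u
print(sp.simplify(sp.diff(y2, x) / y2))  # b + Derivative(u(x), x)

# Example 3: y = (a + b*x + u)**2  =>  dy/dx = 2*(a + b*x + u)*(b + du/dx)
y3 = (a + b * x + u) ** 2
print(sp.factor(sp.diff(y3, x)))         # 2*(a + b*x + u(x))*(b + Derivative(u(x), x))
```

In Example 2 the code solves $\log(y) = a + bx + u$ for $y = e^{a + bx + u}$ first, then differentiates; dividing the derivative by $y$ recovers $b + \partial u / \partial x$, matching the chain-rule derivation.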