ECONOMETRICS: RELAXATION OF CLRM ASSUMPTIONS

ASSUMPTIONS VIOLATED: HETEROSCEDASTICITY, MULTICOLLINEARITY, AUTOCORRELATION

NATURE
- Heteroscedasticity: non-constant error variance
- Multicollinearity: linear relationship among the explanatory variables
- Autocorrelation: correlation among the error terms

CAUSES
- Heteroscedasticity: error-learning models; discretionary income; improvement in data collection techniques; outliers; model specification; skewness in the distribution of the regressors
- Multicollinearity: data collection method employed; constraints on the model; model specification; overdetermined model (k > n)
- Autocorrelation: inertia; model specification; cobweb phenomena; lags; transformation of the data; manipulation of the data

CONSEQUENCES
- Heteroscedasticity: estimators are still unbiased but have large variances and are therefore inefficient; relatively large confidence intervals
- Multicollinearity: large variances and covariances, making precise estimation difficult; large confidence intervals leading to Type II errors; statistically insignificant t-ratios for one or more coefficients; high R-squared although the t-ratios may be statistically insignificant; OLS estimators may be sensitive to small changes in the data
- Autocorrelation: the residual variance is likely to be underestimated; an overestimated R-squared; Var(β̂) could be underestimated; the usual tests of significance may not be valid

DETECTION
- Heteroscedasticity: graphical method; Park test; Glejser test; Spearman's rank correlation test; Goldfeld–Quandt test; BPG (Breusch–Pagan–Godfrey) test; White's general test (see the first sketch after this outline)
- Multicollinearity: graphical method (scatter plots); high pairwise correlations; high R-squared but insignificant t-ratios; examination of partial correlations; auxiliary regressions and their R-squared (R²_j) to test significance; tolerance and VIF (see the second sketch after this outline)
- Autocorrelation: graphical method; runs test; Durbin–Watson d test; Breusch–Godfrey test (see the third sketch after this outline)

RESOLVING
- Heteroscedasticity: apply GLS (weighted least squares) if σ²ᵢ is known; apply robust standard errors if σ²ᵢ is unknown
- Multicollinearity: do nothing; use known (a priori) information; combine cross-sectional and time-series data (pooling); drop a variable(s), at the risk of specification bias (if multicollinearity is severe)
- Autocorrelation: generalized difference transformation (multiply the lagged regression by ρ, subtract it from the original regression, and apply OLS); first-difference approach (ρ = 1); ρ based on the Durbin–Watson d statistic (preferable for large samples); ρ estimated from the residuals
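Sketch 1 (heteroscedasticity). A minimal illustration, assuming Python with statsmodels and simulated data, of the BPG and White tests named above, followed by the two remedies (robust standard errors when σ²ᵢ is unknown, WLS when the variance structure is known). The library, the data, and the chosen variance form are my assumptions, not part of the source notes.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan, het_white

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 10, n)
X = sm.add_constant(x)
# Error variance grows with x: the non-constant variance the outline describes.
u = rng.normal(0, 0.5 * x)
y = 2 + 3 * x + u

ols = sm.OLS(y, X).fit()

# Breusch-Pagan-Godfrey (BPG) test: regress squared residuals on the regressors.
bp_stat, bp_pval, _, _ = het_breuschpagan(ols.resid, X)
print(f"BPG   LM stat = {bp_stat:.2f}, p-value = {bp_pval:.4f}")

# White's general test: also uses squares (and cross-products) of the regressors.
w_stat, w_pval, _, _ = het_white(ols.resid, X)
print(f"White LM stat = {w_stat:.2f}, p-value = {w_pval:.4f}")

# Remedy 1: heteroscedasticity-robust (White) standard errors when sigma_i^2 is unknown.
robust = sm.OLS(y, X).fit(cov_type="HC1")
print(robust.bse)

# Remedy 2: weighted least squares (a form of GLS) when the variance structure is
# known; here the weights assume Var(u_i) is proportional to x_i^2.
wls = sm.WLS(y, X, weights=1.0 / x**2).fit()
print(wls.params)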
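Sketch 2 (multicollinearity). A short sketch, again assuming statsmodels and simulated data, of the pairwise-correlation check and the auxiliary-regression route to tolerance and VIF listed in the detection section; the specific variables and the 0.95 collinearity level are illustrative assumptions.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 150
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
y = 1 + 2 * x1 + 3 * x2 + 0.5 * x3 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2, x3]))

# High pairwise correlation is the simplest symptom.
print(f"corr(x1, x2) = {np.corrcoef(x1, x2)[0, 1]:.3f}")

# VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from the auxiliary regression of
# regressor j on the remaining regressors; tolerance is the reciprocal of VIF.
for j in range(1, X.shape[1]):          # skip the constant
    vif = variance_inflation_factor(X, j)
    print(f"x{j}: VIF = {vif:.1f}, tolerance = {1 / vif:.3f}")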
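Sketch 3 (autocorrelation). A sketch, under the same statsmodels-and-simulated-data assumptions, of the Durbin–Watson d statistic and the Breusch–Godfrey test, followed by an AR(1) feasible-GLS re-estimation (statsmodels' GLSAR with iterative fitting) as one way to apply the "estimate ρ from the residuals, then quasi-difference" remedy; the AR(1) setup and ρ = 0.7 are my choices for illustration.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(2)
n, rho = 200, 0.7
x = rng.normal(size=n)
# Build AR(1) disturbances: u_t = rho * u_{t-1} + e_t
e = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + e[t]
y = 1 + 2 * x + u

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()

# Durbin-Watson d: values well below 2 point to positive autocorrelation.
print(f"Durbin-Watson d = {durbin_watson(ols.resid):.2f}")

# Breusch-Godfrey LM test for serial correlation up to a chosen lag order.
lm_stat, lm_pval, _, _ = acorr_breusch_godfrey(ols, nlags=1)
print(f"Breusch-Godfrey LM = {lm_stat:.2f}, p-value = {lm_pval:.4f}")

# Remedy: estimate rho from the residuals and apply the generalized (quasi-)
# difference transformation, regressing y_t - rho*y_{t-1} on x_t - rho*x_{t-1}.
# GLSAR's iterative fit repeats this until rho settles.
glsar = sm.GLSAR(y, X, rho=1)   # rho=1 sets the AR order, not the value of rho
res = glsar.iterative_fit(maxiter=10)
print(f"estimated rho = {glsar.rho[0]:.3f}")
print(res.params)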