This course deals with the problem of quantifying the causal effect of an intervention, or "treatment", when agents' selection decisions determine their exposure to it. It will focus on the empirical methods currently used for causal analysis in the applied econometrics research literature.
The course will last two full days. The mornings will be dedicated to the theoretical discussion of the evaluation problem and the most prominent empirical approaches to it. Each method will be critically discussed with a focus on its underlying assumptions, its identification strategy, the parameters it identifies under different conditions, and its relative merits and weaknesses for policy evaluation. The different methods will be related within a common framework.
The afternoons will start with a review of one or two important application papers, followed by a hands-on computer session using real and simulated data. Participants will have the opportunity to implement each method, to discuss and appraise the estimates, and to investigate why different approaches may yield different results. The examples are designed to support the discussion of practical issues that can be decisive for the conclusions, including how to conduct inference given the structure of the data, how to deal with biased sampling, and what to do in the presence of missing data.
Practical sessions will use Stata, and students are expected to have a working knowledge of it.
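As a flavour of the simulation exercises in the practical sessions, the minimal Stata sketch below illustrates the evaluation problem stated above: when selection into treatment depends on an unobservable that also drives outcomes, the naive contrast between treated and untreated outcomes is biased. The data-generating process, variable names, and parameter values are illustrative assumptions, not course material.

    * Illustrative sketch (assumed DGP, not course material): selection on an
    * unobservable biases the naive treated-untreated comparison.
    clear
    set seed 12345
    set obs 10000
    generate u = rnormal()               // unobservable affecting both selection and outcome
    generate d = (u + rnormal() > 0)     // agents self-select into treatment
    generate y = 1 + 2*d + u + rnormal() // true causal effect of d is 2
    regress y d                          // naive estimate exceeds 2: selection bias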
Microeconometric evaluation methods
Course outline
Day 1
09.00-10.30   Introduction to the evaluation problem; parameters of interest; randomised experiments
10.30-11.00   Coffee break
11.00-12.30   Matching: assumptions, propensity score matching, implementation and inference
12.30-13.30   Lunch
13.30-15.00   Difference-in-differences; inference and cluster sampling
15.00-15.30   Coffee break
15.30-17.00   Practical session

Day 2
09.00-10.30   Instrumental variables: assumptions, identification and limitations
10.30-11.00   Coffee break
11.00-12.30   Heterogeneous treatment effects and LATE
12.30-13.30   Lunch
13.30-15.00   Marginal treatment effects
15.00-15.30   Coffee break
15.30-17.00   Practical session
Readings
Blundell, Richard, and Monica Costa Dias. 2009. "Alternative Approaches to Evaluation in Empirical Microeconomics." Journal of Human Resources 44(3): 565-640.
Heckman, James, and Richard Robb. 1985. "Alternative Methods for Evaluating the Impact of Interventions." In J. Heckman and B. Singer (eds.), Longitudinal Analysis of Labor Market Data. New York: Cambridge University Press.
Heckman, James, Robert LaLonde, and Jeffrey Smith. 1999. "The Economics and Econometrics of Active Labor Market Programs." In O. Ashenfelter and D. Card (eds.), Handbook of Labor Economics, vol. 3: 1865-2097.
Day 1
Ashenfelter, Orley. 1978. “Estimating the Effect of Training Programs on Earnings.” Review of Economics and Statistics 60(1): 47-57.
Athey, Susan, and Guido Imbens. 2006. "Identification and Inference in Nonlinear Difference-in-Differences Models." Econometrica 74(2): 431-97.
Bertrand, Marianne, Esther Duflo, and Sendhil Mullainathan. 2004. "How Much Should We Trust Differences-in-Differences Estimates?" Quarterly Journal of Economics 119(1): 249-75.
Hahn, Jinyong. 1998. “On the Role of the Propensity Score in Efficient Semiparametric Estimation of Average Treatment Effects.” Econometrica 66(2): 315-31.
Heckman, James, Hidehiko Ichimura, and Petra Todd. 1997. "Matching as an Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Programme." Review of Economic Studies 64(4): 605-54.
LaLonde, Robert. 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data." American Economic Review 76(4): 604-20.
Moulton, Brent. 1986. "Random Group Effects and the Precision of Regression Estimates." Journal of Econometrics 32: 385-97.
Rosenbaum, Paul, and Donald Rubin. 1983. "The Central Role of the Propensity Score in Observational Studies for Causal Effects." Biometrika 70(1): 41-55.
Day 2
Carneiro, Pedro, James Heckman, and Edward Vytlacil. 2010. "Evaluating Marginal Policy Changes and the Average Effect of Treatment for Individuals at the Margin." Econometrica 78(1): 377-94.
Deaton, Angus. 2010. "Instruments, Randomization, and Learning about Development." Journal of Economic Literature 48(2): 424-55.
Heckman, James, and Edward Vytlacil. 2005. "Structural Equations, Treatment Effects, and Econometric Policy Evaluation." Econometrica 73(3): 669-738.
Imbens, Guido. 2010. "Better LATE Than Nothing: Some Comments on Deaton (2009) and Heckman and Urzua (2009)." Journal of Economic Literature 48(2): 399-423.
Imbens, Guido, and Joshua Angrist. 1994. "Identification and Estimation of Local Average Treatment Effects." Econometrica 62(2): 467-75.
Moffitt, Robert. 2008. "Estimating Marginal Treatment Effects in Heterogeneous Populations." Annals of Economics and Statistics 91/92 (Econometric Evaluation of Public Policies: Methods and Applications): 239-61.
Vytlacil, Edward. 2002. "Independence, Monotonicity, and Latent Index Models: An Equivalence Result." Econometrica 70(1): 331-41.