14.382 Adventures in Econometrics: an Introduction
March 3, 2021

Chapter 1
Least Squares, Adaptive Partialling-Out, Simultaneous Inference

Contents
1.1 Least Squares
1.2 Frisch-Waugh-Lovell Theorem
1.3 Approximate Distributions for $\hat\beta_1$
1.4 Gender Wage Gap in 2015
1.5 Joint/Simultaneous Confidence Bands for $\beta_1$
1.6 The Pennsylvania Re-employment Bonus Experiment
1.7 Some Technical Results∗
1.8 Homework Problems

Abstract: Here we overview least squares from several interesting angles. We discuss Frisch-Waugh-Lovell partialling out and point out its adaptivity property, which is key in establishing approximate normality of the regression estimators of a set of target regression coefficients. We then discuss the construction of simultaneous confidence sets for this set. We use these methods to analyze the gender wage gap and the impact of re-employment incentives on the duration of unemployment.

Notation

For two sequences of real numbers, $\{a_n\}_{n=1}^\infty$ and $\{b_n\}_{n=1}^\infty$, the notation $a_n \lesssim b_n$ means that $a_n \le C b_n$ for all $n$, for some constant $C$ that does not depend on $n$. For a vector $v = (v_1, v_2, \dots, v_p)' \in \mathbb{R}^p$, the $\ell_2$ and $\ell_1$ norms are denoted by $\|\cdot\|_2$ (or simply $\|\cdot\|$) and $\|\cdot\|_1$, respectively:
\[
\|v\|_2 := \Big(\sum_{j=1}^p v_j^2\Big)^{1/2}, \qquad \|v\|_1 := \sum_{j=1}^p |v_j|.
\]
The $\ell_0$-"norm", $\|\cdot\|_0$, denotes the number of non-zero components of a vector, and $\|\cdot\|_\infty$ denotes the max norm:
\[
\|v\|_0 := \sum_{j=1}^p 1\{v_j \neq 0\}, \qquad \|v\|_\infty := \max\{|v_j| : j \in \{1, \dots, p\}\}.
\]
When applied to a matrix, $\|\cdot\|$ denotes the operator norm, namely $\|A\| := \max\{\|Av\| : \|v\| \le 1\}$. We use the notation $a \vee b = \max(a, b)$ and $a \wedge b = \min(a, b)$, and write $x'$ for the transpose of a column vector $x$. In what follows we use the notation $\mathbb{E}_n f(W)$ to abbreviate the empirical expectation of $f(W)$ as $W$ ranges over the sample $(W_i)_{i=1}^n$:
\[
\mathbb{E}_n f(W) := \frac{1}{n} \sum_{i=1}^n f(W_i).
\]

1.1 Least Squares

Let $Y$ be a scalar random variable and $X$ a $p$-vector of covariates called regressors. We observe $n$ i.i.d. copies $\{(Y_i, X_i')\}_{i=1}^n$ of $(Y, X')$. (Independence is not needed in many places, as will be clear from the context.) Throughout we assume that $\mathrm{E} Y^2$ and $\mathrm{E} XX'$ are finite. We then define the least squares or projection parameter $\beta$ in the population as the solution of the following prediction problem:
\[
\beta := \arg\min_{b \in \mathbb{R}^p} \mathrm{E}(Y - X'b)^2,
\]
where $\beta$ obeys the first-order condition
\[
\mathrm{E}(Y - X'\beta)X = 0,
\]
and, provided that $\mathrm{E} XX'$ is of full rank, which amounts to the absence of multicollinearity, has the closed-form expression
\[
\beta = (\mathrm{E} XX')^{-1} \mathrm{E} XY.
\]
Defining $\varepsilon = Y - X'\beta$, we obtain the decomposition identity
\[
Y \equiv X'\beta + \varepsilon, \qquad \mathrm{E}\,\varepsilon X = 0.
\]
Observe that we did not need any linearity assumption to obtain this decomposition.

We define the least squares or projection estimator $\hat\beta$ in the sample as the solution of the following prediction problem:
\[
\hat\beta := \arg\min_{b \in \mathbb{R}^p} \mathbb{E}_n(Y - X'b)^2,
\]
which obeys the first-order condition
\[
\mathbb{E}_n(Y - X'\hat\beta)X = 0,
\]
and has the closed-form solution
\[
\hat\beta = (\mathbb{E}_n XX')^{-1} \mathbb{E}_n XY,
\]
provided that $\mathbb{E}_n XX'$ is of full rank, which amounts to the absence of multicollinearity in the sample. Defining $\hat\varepsilon_i = Y_i - X_i'\hat\beta$, we obtain the decomposition identity
\[
Y_i \equiv X_i'\hat\beta + \hat\varepsilon_i, \qquad \mathbb{E}_n \hat\varepsilon X = 0.
\]
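The closed-form expressions above translate directly into code. The following R sketch, on simulated data with made-up variable names, computes $\hat\beta$ from the formula $(\mathbb{E}_n XX')^{-1}\mathbb{E}_n XY$, checks that it coincides with lm(), and verifies the sample first-order condition; it is a minimal illustration, not part of the empirical analysis below.

```r
# Minimal sketch: the projection formula versus lm() on simulated data.
# No linearity of E[Y|X] is assumed; the projection beta is still well defined.
set.seed(1)
n <- 500
X <- cbind(1, rnorm(n), rnorm(n))                  # include an intercept column
Y <- sin(X[, 2]) + X[, 3]^2 + rnorm(n)             # a nonlinear "truth"
beta_hat <- solve(crossprod(X), crossprod(X, Y))   # (E_n XX')^{-1} E_n XY
coef(lm(Y ~ X - 1))                                # identical up to numerical error
eps_hat <- drop(Y - X %*% beta_hat)
colMeans(eps_hat * X)                              # first-order condition: E_n[eps_hat X] = 0
```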
Note that the least squares estimator makes sense only if $p$ is not bigger than $n$. If $p > n$, other estimators must be used, for example penalized least squares estimators or post-selection least squares estimators, which we discuss later in the course.

1.2 Frisch-Waugh-Lovell Theorem

This is an important tool that provides conceptual understanding of least squares, as well as a very practical device for estimation and visualization of results. We partition the vector of regressors $X$ into two groups, $X = (D', W')'$, where the $p_1$-dimensional subvector $D$ contains the "target" regressors of interest and the $p_2$-dimensional subvector $W$ contains the other regressors, sometimes called the controls. For example, in the gender wage gap analysis, $Y$ is the wage, $D$ is the gender indicator, and $W$ collects various other variables explaining variation in wages. In program evaluation, $D$ is often a treatment or policy variable and $W$ are controls. Write
\[
Y = D'\beta_1 + W'\beta_2 + \varepsilon. \tag{1.1}
\]

Comment 1.2.1 What does the regression coefficient $\beta_1$ measure here? It measures how our linear prediction of $Y$ changes if we increase $D$ by a unit, holding the controls $W$ fixed (for example, as we switch the gender variable $D$ from 0 to 1, holding the controls $W$ fixed). We can call this the linear predictive effect (PE), as it measures the linear impact of a variable on the prediction we make. The PE is a measure of statistical dependence or association between variables, suggesting that $D$ predicts $Y$ even if we control linearly for $W$. The PE should not in general be interpreted as a causal or treatment effect (TE), since correlation is not equivalent to causation. We shall study the assumptions needed for causal interpretability of the estimates later in the course. An important case where $\beta_1$ measures a TE is that of randomized control trials, where $D$ is randomly assigned and is therefore independent of $W$.

In the population, define the partialling-out operator with respect to a vector $W$ such that $\mathrm{E}\|W\|^2 < \infty$ as the operator that takes a random variable $V$ with $\mathrm{E} V^2 < \infty$ and creates the residual $\tilde V$ according to the rule
\[
\tilde V = V - W'\gamma_{VW}, \qquad \gamma_{VW} = \arg\min_{\gamma \in \mathbb{R}^{p_2}} \mathrm{E}(V - W'\gamma)^2.
\]
When $V$ is a vector, we apply the operator componentwise. The vector $W$ needs to have finite second moments for this to be well defined. It is not difficult to see that the partialling-out operator is linear, in the sense that
\[
V = Q + U \ \Longrightarrow\ \tilde V = \tilde Q + \tilde U,
\]
provided, of course, that the operator is well defined; for this we need all variables to which the operator is applied to have finite second moments. Hence we apply this operator to both sides of the identity (1.1) to get
\[
\tilde Y = \tilde D'\beta_1 + \tilde W'\beta_2 + \tilde\varepsilon,
\]
which implies that
\[
\tilde Y = \tilde D'\beta_1 + \varepsilon, \qquad \mathrm{E}\,\varepsilon\tilde D = 0. \tag{1.2}
\]
The last line follows from $\tilde W = 0$, which holds by definition; from $\tilde\varepsilon = \varepsilon$, which holds because of the orthogonality $\mathrm{E}\,\varepsilon W = 0$; and from $\mathrm{E}\,\varepsilon\tilde D = 0$, which holds since $\tilde D$ is a linear combination of components of $X$ and $\mathrm{E}\,\varepsilon X = 0$.

Equation (1.2) states that $\mathrm{E}\,\varepsilon\tilde D = 0$ is the first-order condition for the population regression of $\tilde Y$ on $\tilde D$. That is, the projection coefficient $\beta_1$ can be recovered from the regression of $\tilde Y$ on $\tilde D$:
\[
\beta_1 = \arg\min_{b \in \mathbb{R}^{p_1}} \mathrm{E}(\tilde Y - \tilde D'b)^2 = (\mathrm{E}\tilde D\tilde D')^{-1}\mathrm{E}\tilde D\tilde Y.
\]

Comment 1.2.2 This is a remarkable fact, known as the Frisch-Waugh-Lovell (FWL) theorem. It asserts that $\beta_1$ is the regression coefficient of $Y$ on $D$ after taking out, or partialling out, the linear effect of $W$ from both $Y$ and $D$. In other words, it measures the linear predictive effect (PE) of $D$ on $Y$ after taking out the linear predictive effect of $W$ on both of these variables.
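As a quick numerical illustration of the FWL fact, the following R sketch on simulated data (all variable names are made up) checks that the coefficient on $D$ from the full regression coincides with the coefficient from regressing the partialled-out $Y$ on the partialled-out $D$.

```r
# FWL on simulated data: full regression vs. residual-on-residual regression.
set.seed(1)
n <- 1000
W <- matrix(rnorm(n * 3), n, 3)
D <- drop(W %*% c(1, -1, 0.5)) + rnorm(n)
Y <- 2 * D + drop(W %*% c(1, 1, 1)) + rnorm(n)
b_full <- coef(lm(Y ~ D + W))["D"]       # beta_1 from the regression of Y on (D, W)
Dres  <- residuals(lm(D ~ W))            # partial out W from D
Yres  <- residuals(lm(Y ~ W))            # partial out W from Y
b_fwl <- coef(lm(Yres ~ Dres))["Dres"]   # regression of residuals on residuals
c(b_full, b_fwl)                         # numerically identical
```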
In the sample, the partialling-out operation works similarly. Define it as the operator that converts sample values $V_i$ into $\check V_i$ via
\[
\check V_i = V_i - W_i'\hat\gamma_{VW}, \qquad \hat\gamma_{VW} = \arg\min_{\gamma \in \mathbb{R}^{p_2}} \mathbb{E}_n(V - W'\gamma)^2.
\]
Similarly to the population case, the operator is linear. Hence application of the operator to the decomposition identity
\[
Y_i \equiv D_i'\hat\beta_1 + W_i'\hat\beta_2 + \hat\varepsilon_i
\]
gives
\[
\check Y_i = \check D_i'\hat\beta_1 + \hat\varepsilon_i, \qquad \mathbb{E}_n \hat\varepsilon\check D = 0.
\]
This implies that
\[
\hat\beta_1 = \arg\min_{b \in \mathbb{R}^{p_1}} \mathbb{E}_n(\check Y - \check D'b)^2 = (\mathbb{E}_n\check D\check D')^{-1}\mathbb{E}_n\check D\check Y.
\]
This is the sample version of the FWL theorem.

The partialling-out operation defined above works well when the dimension of $W$ is low relative to the sample size. When the dimension is high, we need to use variable selection or penalization for regularization purposes; we shall get to that later in the course. We summarize the discussion as a theorem.

Frisch-Waugh-Lovell Theorem 1.2.1 Work with the set-up above. The population projection coefficient $\beta_1$ can be recovered from the population regression of $\tilde Y$ on $\tilde D$:
\[
\beta_1 = (\mathrm{E}\tilde D\tilde D')^{-1}\mathrm{E}\tilde D\tilde Y,
\]
assuming $\mathrm{E}\tilde D\tilde D'$ is of full rank. The sample projection coefficient $\hat\beta_1$ can be recovered from the sample regression of $\check Y_i$ on $\check D_i$:
\[
\hat\beta_1 = (\mathbb{E}_n\check D\check D')^{-1}\mathbb{E}_n\check D\check Y,
\]
assuming $\mathbb{E}_n\check D\check D'$ is of full rank.

1.3 Approximate Distributions for $\hat\beta_1$

It is of interest to examine the behavior of the estimator $\hat\beta_1$. In what follows, we assume that the dimension $p_1$ of the target parameter $\beta_1$ is fixed, but the dimension $p_2$ of the nuisance parameter $\beta_2$ may be moderately large, meaning that $p_2 \to \infty$ as $n \to \infty$ with $p_2/n \to 0$. In practical terms, the latter condition simply means that $p_2$ is small compared to $n$. Consider the sample projection coefficient $\hat\beta_1$ obtained from the sample regression of the residuals $\check Y_i$ on $\check D_i$:
\[
\hat\beta_1 = (\mathbb{E}_n\check D\check D')^{-1}\mathbb{E}_n\check D\check Y,
\]
which is also our least squares estimator of $\beta_1$.

Adaptivity Property for Partialling Out
Lemma 1.3.1 There exist regularity conditions such that, provided that the dimension $p_2$ is small compared to $n$, namely $p_2/n \to 0$, we have the following approximation:
\[
\sqrt{n}(\hat\beta_1 - \beta_1) = (\mathbb{E}_n\tilde D\tilde D')^{-1}\sqrt{n}(\mathbb{E}_n\tilde D\varepsilon) + o_P(1).
\]
That is, the estimator is not affected by the estimation errors in the partialling-out steps; they are approximately negligible. In other words, the estimator adapts to the unknown residuals $\tilde D$ and $\tilde\varepsilon$.
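To see the adaptivity property at work, the following simulation sketch (with an arbitrary data-generating process chosen purely for illustration) compares the feasible estimator, which partials out $W$ in the sample, with the infeasible linearization of Lemma 1.3.1 that uses the true residuals $\tilde D$ and $\varepsilon$; scaled by $\sqrt{n}$, the two should be close.

```r
# Sketch: feasible beta_1 estimate vs. the infeasible linearization from Lemma 1.3.1.
set.seed(1)
n <- 5000; p2 <- 50
W     <- matrix(rnorm(n * p2), n, p2)
Dtil  <- rnorm(n)                               # true residual of D after partialling out W
D     <- drop(W %*% rep(0.5, p2)) + Dtil
eps   <- rnorm(n)
beta1 <- 1
Y     <- beta1 * D + drop(W %*% rep(0.3, p2)) + eps
beta1_hat <- coef(lm(Y ~ D + W))["D"]                 # feasible: sample partialling out
oracle    <- beta1 + mean(Dtil * eps) / mean(Dtil^2)  # (E_n Dtil^2)^{-1} E_n Dtil*eps
sqrt(n) * c(feasible = beta1_hat - beta1, oracle = oracle - beta1)  # close to each other
```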
From the lemma, we conclude that under mild regularity conditions
\[
(\mathbb{E}_n\tilde D\tilde D')^{-1}\sqrt{n}(\mathbb{E}_n\tilde D\varepsilon) \stackrel{a}{\sim} N(0, \mathrm{V}_{11}),
\]
where $\stackrel{a}{\sim}$ reads "approximately distributed as", and
\[
\mathrm{V}_{11} = (\mathrm{E}\tilde D\tilde D')^{-1}\,\mathrm{Var}\big(\sqrt{n}\,\mathbb{E}_n\tilde D\varepsilon\big)\,(\mathrm{E}\tilde D\tilde D')^{-1}.
\]
Here we usually invoke a law of large numbers to claim that $\mathbb{E}_n\tilde D\tilde D' = \mathrm{E}\tilde D\tilde D' + o_P(1)$, and a central limit theorem under i.i.d. sampling to claim that
\[
\sqrt{n}(\mathbb{E}_n\tilde D\varepsilon) \stackrel{a}{\sim} N(0, \mathrm{E}\tilde D\tilde D'\varepsilon^2),
\]
with the overall result following by the continuous mapping theorem. For dependent data, other central limit theorems can be invoked to assert that
\[
\sqrt{n}(\mathbb{E}_n\tilde D\varepsilon) \stackrel{a}{\sim} N\big(0, \mathrm{Var}(\sqrt{n}\,\mathbb{E}_n\tilde D\varepsilon)\big),
\]
where the variance formula takes the more general form. Given the equivalence stated in the lemma above, we further conclude that
\[
\sqrt{n}(\hat\beta_1 - \beta_1) \stackrel{a}{\sim} N(0, \mathrm{V}_{11}).
\]

Theorem 1.3.2 There exist regularity conditions such that, provided that $p_2/n \to 0$, we have that
\[
\sqrt{n}(\hat\beta_1 - \beta_1) \stackrel{a}{\sim} N(0, \mathrm{V}_{11})
\]
as $n \to \infty$, namely that
\[
\sup_{A \in \mathcal{A}} \Big| \mathrm{P}\big(\sqrt{n}(\hat\beta_1 - \beta_1) \in A\big) - \mathrm{P}\big(N(0, \mathrm{V}_{11}) \in A\big) \Big| \to 0,
\]
where $\mathcal{A}$ is a collection of sets in $\mathbb{R}^{p_1}$ (e.g. convex sets or rectangles).

The proof of this result is simple under fixed $p_2$ (see homework), and is rather technical when $p_2 \to \infty$ (see the references for regularity conditions and proofs).

Remark 1.3.1 Alternatively, the result above could also be derived, or at least conjectured, from the statement that the whole parameter vector is approximately normally distributed:
\[
\sqrt{n}(\hat\beta - \beta) \stackrel{a}{\sim} N(0, \mathrm{V}), \tag{1.3}
\]
where
\[
\mathrm{V} = Q^{-1}\Omega Q^{-1}, \qquad Q = \mathrm{E} XX', \qquad \Omega = \mathrm{Var}\big(\sqrt{n}\,\mathbb{E}_n X\varepsilon\big).
\]
Then $\mathrm{V}_{11}$ corresponds to the $p_1 \times p_1$ upper-left block of $\mathrm{V}$. This result is straightforward when $p$ is fixed as $n \to \infty$. On the other hand, when $p$ is increasing with $n$, proving that the whole $p$-dimensional parameter vector $\sqrt{n}(\hat\beta - \beta)$ is normally distributed is usually much more demanding, both in terms of regularity conditions and in needing to restrict the sense in which normal approximations hold.

We shall rely on a suitable estimator $\hat{\mathrm{V}}_{11}$ of $\mathrm{V}_{11}$, for example the Eicker-White estimator
\[
\hat{\mathrm{V}}_{11} = (\mathbb{E}_n\check D\check D')^{-1}\,\mathbb{E}_n\big(\check D\check D'\hat\varepsilon^2\big)\,(\mathbb{E}_n\check D\check D')^{-1}
\]
under independent sampling, or the Newey-West estimator in the time-series case. We then use the normal law $N(0, \hat{\mathrm{V}}_{11}/n)$ for quantification of uncertainty about $\beta_1$, that is, for building confidence bands for $\beta_1$ and various functionals of $\beta_1$. With $\hat{\mathrm{V}}_{11}$ used instead of $\mathrm{V}_{11}$, the statement
\[
\sqrt{n}(\hat\beta_1 - \beta_1) \stackrel{a}{\sim} N(0, \hat{\mathrm{V}}_{11})
\]
is defined to mean the following:
\[
\sup_{A \in \mathcal{A}} \Big| \mathrm{P}\big(\sqrt{n}(\hat\beta_1 - \beta_1) \in A\big) - \mathrm{P}\big(N(0, \bar{\mathrm{V}}) \in A\big) \Big|_{\bar{\mathrm{V}} = \hat{\mathrm{V}}_{11}} \to_P 0.
\]
Basically, we just insert $\hat{\mathrm{V}}_{11}$ wherever $\mathrm{V}_{11}$ previously appeared and require the same statements to hold stochastically.

Using Estimated Variance is Ok
Lemma 1.3.3 Suppose that $\sqrt{n}(\hat\beta_1 - \beta_1) \stackrel{a}{\sim} N(0, \mathrm{V}_{11})$ and $\hat{\mathrm{V}}_{11}$ is consistent for $\mathrm{V}_{11}$, namely $\hat{\mathrm{V}}_{11}\mathrm{V}_{11}^{-1} \to_P I$ and $\mathrm{V}_{11}$ is bounded away from zero. Then $\sqrt{n}(\hat\beta_1 - \beta_1) \stackrel{a}{\sim} N(0, \hat{\mathrm{V}}_{11})$.

This lemma is a consequence of the Gaussian vector $N(0, \mathrm{V}_{11})$ having a bounded probability density function, so that estimation errors in $\hat{\mathrm{V}}_{11}$ have a negligible effect on the probabilities of the containment events.

Suppose $\beta_1$ is a scalar, or that we are interested in the $j$-th component of $\beta_1$. The above results mean that we can report $\sqrt{\hat{\mathrm{V}}_{11,jj}/n}$ as the (estimated) standard error for $\hat\beta_{1j}$, and report
\[
[l_j, u_j] = \Big[\hat\beta_{1j} - z\sqrt{\hat{\mathrm{V}}_{11,jj}/n},\ \hat\beta_{1j} + z\sqrt{\hat{\mathrm{V}}_{11,jj}/n}\Big],
\]
where $z$ is the $(1-\alpha/2)$-quantile of the standard normal variable $N(0,1)$, as the approximate $(1-\alpha)\times 100\%$ confidence interval for $\beta_{1j}$. That this is a confidence interval follows from a more general result we discuss below.
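For a scalar target regressor, the Eicker-White formula above simplifies to $\hat{\mathrm{V}}_{11} = (\mathbb{E}_n\check D^2)^{-2}\,\mathbb{E}_n\check D^2\hat\varepsilon^2$, which is easy to compute directly; it should also match what the R package sandwich reports for the full regression with type "HC0". The sketch below assumes a data frame `dat` with an outcome column `y`, a target regressor `d`, and control columns; these names are hypothetical.

```r
# Sketch: Eicker-White standard error and pointwise CI for the coefficient on d.
library(sandwich)                                # provides vcovHC()
fit  <- lm(y ~ ., data = dat)                    # full regression of y on d and the controls
dres <- residuals(lm(d ~ . - y, data = dat))     # d partialled out against the controls (FWL)
ehat <- residuals(fit)                           # regression residuals
n    <- nrow(dat)
V11_hat <- mean(dres^2 * ehat^2) / mean(dres^2)^2        # Eicker-White, scalar-D case
se      <- sqrt(V11_hat / n)
se_sandwich <- sqrt(vcovHC(fit, type = "HC0")["d", "d"]) # should agree up to numerical error
est <- coef(fit)["d"]
c(lower = est - qnorm(0.975) * se, upper = est + qnorm(0.975) * se)  # pointwise 95% CI
```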
1.4 Gender Wage Gap in 2015

We consider an empirical application to the gender wage gap using data from the U.S. March Supplement of the Current Population Survey (CPS) in 2015. We select white non-Hispanic individuals, aged 25 to 64 years, working more than 35 hours per week during at least 50 weeks of the year. We exclude self-employed workers; individuals living in group quarters; individuals in the military, agricultural or private household sectors; individuals with inconsistent reports on earnings and employment status; individuals with allocated or missing information in any of the variables used in the analysis; and individuals with hourly wage below $3.¹ The resulting sample consists of 32,523 workers: 18,137 men and 14,386 women. The variable of interest $Y$ is the logarithm of the hourly wage rate, constructed as the ratio of annual earnings to the total number of hours worked, which in turn is the product of the number of weeks worked and the usual number of hours worked per week.

¹ The sample selection criteria are similar to [1].

Table 1.1 reports descriptive statistics for the variables used in the analysis. Working women are less likely to be married and more highly educated than working men, but have slightly less experience. The unconditional average gender wage gap is 24%.

Table 1.1: Descriptive Statistics
                   All     Men   Women
log wage          3.16    3.26    3.02
female            0.44    0.00    1.00
married           0.70    0.73    0.65
widowed           0.01    0.01    0.02
separated         0.02    0.01    0.02
divorced          0.12    0.09    0.15
never married     0.16    0.16    0.16
lhs               0.02    0.03    0.01
hsg               0.25    0.28    0.21
sc                0.28    0.27    0.30
cg                0.28    0.27    0.29
ad                0.17    0.15    0.19
ne                0.19    0.19    0.20
mw                0.26    0.26    0.26
so                0.33    0.33    0.34
we                0.22    0.23    0.21
experience       21.21   21.35   21.03
Source: March Supplement CPS 2015

To estimate the gender wage gap, we consider the linear regression model
\[
Y = D\beta_1 + W'\beta_2 + \varepsilon, \qquad \mathrm{E}\,\varepsilon(D, W')' = 0,
\]
where $Y$ is the log hourly wage rate, $D$ is an indicator for female workers, and $W$ is a set of $p = 1{,}082$ controls including: 5 marital status indicators (widowed, divorced, separated, never married, and married); 5 educational attainment indicators (less than high school graduates, high school graduates, some college, college graduate, and advanced degree); 4 region indicators (midwest, south, west, and northeast); a quartic (first, second, third, and fourth power) in potential experience, constructed as the maximum of age minus years of schooling minus 7 and zero, i.e. $experience = \max(age - education - 7, 0)$; 22 occupation indicators;∗ 21 industry indicators;† and all two-way interactions between the previous variables.

∗ The occupation categories are: management; business and financial operations; computer and mathematics; architecture and engineering; life, physical, and social science; community and social service; legal; education, training, and library; arts, design, entertainment, sports, and media; healthcare practitioners and technical; healthcare support; protective service; food preparation and serving; building and grounds cleaning and maintenance; personal care and service; sales; office and administrative support; farming, fishing, and forestry; construction and extraction; installation, maintenance, and repair occupations; production; and transportation and material moving.

† The industry categories are: mining; utilities; construction; nondurable goods manufacturing; durable goods manufacturing; durable goods wholesale; nondurable goods wholesale; retail trade; transportation and warehousing; information; finance and insurance; real estate, rental and leasing; professional, scientific, and technical services; management of companies and enterprises; administrative, support and waste management services; educational services; health care and social assistance; arts, entertainment, and recreation; accommodation and food services; other services except public administration; and public administration.

Table 1.2 reports the results of a regression analysis of the CPS data, carried out using the programming language R. The first row reports the coefficient on $D$ from the OLS regression of $Y$ on $D$; this is the estimated unconditional predictive effect of gender on wages. The subsequent rows report estimated conditional predictive effects: the second row reports the coefficient on $D$ from the OLS regression of $Y$ on $X = (D, W)$; the third row obtains the same estimate using the Frisch-Waugh-Lovell theorem, partialling out the controls via OLS; and the fourth row obtains the coefficient on $D$ using a variant of the procedure in [2] that partials out the controls via Lasso instead of OLS.² We will study this procedure later in the course. All standard errors are computed with the R package sandwich and are robust to heteroskedasticity.

² We use the R package hdm to obtain the estimates in the fourth row.

Using Lasso for partialling out gives results similar to using OLS here. Lasso is a penalized OLS estimator that produces high-quality estimates of the regression function, especially in high-dimensional settings; the penalty takes the form of the sum of the absolute values of the coefficients times a penalty level.
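A sketch of how the four rows of Table 1.2 can be produced in R is given below. It assumes a data frame `cps` with the log wage `lwage`, the indicator `female`, and the control columns already constructed (including the interactions); all of these names are hypothetical, and the hdm call reflects our reading of that package's interface rather than the exact code used for the table.

```r
# Sketch: unconditional gap, full regression, FWL partialling out, and lasso partialling out.
library(sandwich); library(lmtest); library(hdm)
rob <- function(m, j) coeftest(m, vcov = vcovHC(m, type = "HC0"))[j, 1:2]  # estimate and s.e.
rob(lm(lwage ~ female, data = cps), "female")    # row 1: no controls
rob(lm(lwage ~ ., data = cps), "female")         # row 2: all controls
yres <- residuals(lm(lwage  ~ . - female, data = cps))
dres <- residuals(lm(female ~ . - lwage,  data = cps))
rob(lm(yres ~ dres), "dres")                     # row 3: partialling out via OLS (FWL)
X <- model.matrix(~ . - lwage - female, data = cps)[, -1]
fit_lasso <- rlassoEffect(x = X, y = cps$lwage, d = cps$female,
                          method = "partialling out")   # row 4: partialling out via lasso
summary(fit_lasso)
```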
What do the estimated regression coefficients $\beta_1$ measure here? The first row measures the unconditional gender gap, i.e. the difference in the average wages of working women and men. The remaining rows measure how our linear prediction of the wage changes if we switch the gender variable $D$ from 0 to 1, holding the controls $W$ fixed. We can call this the predictive effect (PE), as it measures the impact of a variable on the prediction we make.

Table 1.2: Regression Analysis of the Wage Gap
                          Estimate   Std. Error†
no controls                -0.239      0.0067
all controls               -0.185      0.0069
partial reg                -0.185      0.0069
partial reg with lasso     -0.195      0.0068
† Standard errors are robust to heteroskedasticity.

Overall, we see that the unconditional wage gap for women of 24% decreases to around 19-20% after controlling for worker characteristics. The PE should not in general be interpreted as a causal or treatment effect, since correlation is not equivalent to causation. A causal interpretation of the PE here would suggest that $\beta_1$ is solely a measure of discrimination, while in reality it may reflect discrimination, selection effects (e.g., sorting of women and men into different occupations), and other unobserved differences in actual working experience.

We repeat the analysis for the more homogeneous subpopulation of never married workers. Table 1.3 reports descriptive statistics for the corresponding subsample of the CPS 2015 data. There are 5,150 never married workers: 2,861 men and 2,289 women. Never married working women are relatively more educated than working men, as was the case overall. Compared to Table 1.1, never married workers earn lower average wages and have much lower experience than the rest of the workers. The regression analysis in Table 1.4 shows that the unconditional gender wage gap is less than 4% for this group.
This gap increases to 6-7% once we control for worker characteristics.³ A possible explanation for the lower wage gap found in this case is that never married women tend to be young and are less likely to have children. They can therefore be more career-oriented and have more actual working experience (not interrupted by childbearing or childcare) than married women.

³ Without the marital status indicators, there are $p = 775$ controls.

Table 1.3: Descriptive Statistics: Never Married Workers
                   All    Male  Female
log wage          2.97    2.99    2.95
female            0.44    0.00    1.00
lhs               0.02    0.03    0.01
hsg               0.24    0.29    0.18
sc                0.28    0.27    0.28
cg                0.32    0.29    0.35
ad                0.14    0.11    0.18
ne                0.23    0.22    0.24
mw                0.26    0.26    0.26
so                0.30    0.30    0.29
we                0.22    0.22    0.21
experience       13.76   13.78   13.73
Source: March Supplement CPS 2015

Table 1.4: Regression Analysis of the Wage Gap: Never Married Workers
                          Estimate   Std. Error†
no controls                -0.038      0.016
all controls               -0.061      0.015
partial reg                -0.061      0.015
partial reg via lasso      -0.070      0.015
† Standard errors are robust to heteroskedasticity.

1.5 Joint/Simultaneous Confidence Bands for $\beta_1$

Consider a $p_1$-dimensional subvector $\beta_1$ of the coefficient vector $\beta$. Assume, without loss of generality, that these are the first $p_1$ components. Assume that
\[
\hat\beta_1 - \beta_1 \stackrel{a}{\sim} N(0, \mathrm{V}_{11}/n),
\]
where $\mathrm{V}_{11}$ is the upper-left $p_1 \times p_1$ sub-block of $\mathrm{V}$, in the sense that
\[
\sup_{A \in \mathcal{A}} \Big| \mathrm{P}\big(\sqrt{n}(\hat\beta_1 - \beta_1) \in A\big) - \mathrm{P}\big(N(0, \mathrm{V}_{11}) \in A\big) \Big| \to 0, \qquad n \to \infty, \tag{1.4}
\]
where $\mathcal{A}$ is a collection of rectangles in $\mathbb{R}^{p_1}$.

Suppose we want to build simultaneous confidence bands for all the components $(\beta_{1j})_{j=1}^{p_1}$ of $\beta_1$. To give a context, suppose that $D$ represents a collection of indicator ("dummy") variables capturing different types of treatment. For instance, in the Pennsylvania treatment experiments example below, the components of $D$ describe various kinds of incentives that participants received to find a job more quickly. We want to create a confidence set $[l, u] = \times_{j=1}^{p_1} [l_j, u_j]$ such that
\[
\mathrm{P}(\beta_1 \in [l, u]) = \mathrm{P}\big(\beta_{1j} \in [l_j, u_j] \text{ for all } j\big) \to 1 - \alpha.
\]
Such confidence bands allow us to answer a number of very interesting questions about both the economic and the statistical significance of the components of $\beta_1$. For example, we can create a confidence set for the set of treatments that result in more than some target level of impact.

We consider confidence bands in the form of the rectangle
\[
[l_j, u_j] = \Big[\hat\beta_{1j} - c\sqrt{\mathrm{V}_{11,jj}/n},\ \hat\beta_{1j} + c\sqrt{\mathrm{V}_{11,jj}/n}\Big], \qquad j = 1, \dots, p_1,
\]
where we set the critical value $c$ such that the previous display holds. Here $\mathrm{V}_{11,jj}$ denotes the $(j,j)$-th element of the matrix $\mathrm{V}_{11}$. The value of $c$ can be determined as the $(1-\alpha)$-quantile of the random variable $\|N(0, C)\|_\infty$, where $N(0, C)$ denotes a normal vector with mean 0 and variance
\[
C = S^{-1}\mathrm{V}_{11}S^{-1}, \qquad S = \mathrm{diag}(\mathrm{V}_{11})^{1/2},
\]
with $S$ the diagonal matrix of standard deviations, carrying the square roots of the diagonal of $\mathrm{V}_{11}$ on its diagonal and zeroes elsewhere. Observe that $C$ is the correlation matrix associated with $\mathrm{V}_{11}$. The constant $c$ can be approximated by simulation.
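A minimal sketch of this simulation step in R is given below: given an estimate of $\mathrm{V}_{11}$ (only its correlation structure matters), draw many normal vectors with that correlation and take the $(1-\alpha)$-quantile of their maximal absolute entry. The function name is ours.

```r
# Sketch: critical value c = (1 - alpha)-quantile of ||N(0, C)||_infinity by simulation.
crit_value <- function(V11, alpha = 0.10, R = 100000) {
  C <- cov2cor(V11)                                       # correlation matrix associated with V11
  Z <- matrix(rnorm(R * ncol(C)), nrow = R) %*% chol(C)   # R draws from N(0, C)
  unname(quantile(apply(abs(Z), 1, max), 1 - alpha))      # quantile of the max-abs component
}
```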
This critical value $c$ is the right one by the following argument. Let $[-c, c]^{p_1} = \times_{j=1}^{p_1}[-c, c]$; then⁴
\[
\begin{aligned}
\mathrm{P}(\beta_1 \in [l, u])
&= \mathrm{P}\big(\sqrt{n}(\beta_1 - \hat\beta_1) \in S[-c, c]^{p_1}\big) \\
&= \mathrm{P}\big(N(0, \mathrm{V}_{11}) \in S[-c, c]^{p_1}\big) + o(1) \\
&= \mathrm{P}\big(S^{-1}N(0, \mathrm{V}_{11}) \in [-c, c]^{p_1}\big) + o(1) \\
&= \mathrm{P}\big(\|N(0, S^{-1}\mathrm{V}_{11}S^{-1})\|_\infty \le c\big) + o(1) \\
&= 1 - \alpha + o(1),
\end{aligned}
\]
where the second equality holds by (1.4), because $S[-c, c]^{p_1}$ is a rectangular set in $\mathbb{R}^{p_1}$.

⁴ Here we interpret the product of a matrix $A = S$ with the set $B = [-c, c]^{p_1}$ as the Minkowski (pointwise) product, namely $AB = \{ab : a \in A, b \in B\}$.

Note that in practice we shall need to replace $\mathrm{V}_{11}$ with a consistent estimator $\hat{\mathrm{V}}_{11}$. This replacement does not affect the approximate coverage property of the confidence regions, in view of Lemma 1.3.3. We summarize the discussion as follows.

Joint Confidence Band for Target Coefficients
Theorem 1.5.1 Suppose that $\hat\beta_1 - \beta_1 \stackrel{a}{\sim} N(0, \mathrm{V}_{11}/n)$ in the sense of (1.4). Then the confidence band
\[
[l_j, u_j] = \Big[\hat\beta_{1j} - c\sqrt{\mathrm{V}_{11,jj}/n},\ \hat\beta_{1j} + c\sqrt{\mathrm{V}_{11,jj}/n}\Big], \qquad j = 1, \dots, p_1,
\]
with $c$ the $(1-\alpha)$-quantile of $\|N(0, C)\|_\infty$, where $C$ is the correlation matrix associated with $\mathrm{V}_{11}$, jointly covers all target parameter values $(\beta_{1j})_{j=1}^{p_1}$ with probability approaching the nominal level, that is, as $n \to \infty$,
\[
\mathrm{P}\big(\beta_{1j} \in [l_j, u_j] \text{ for all } j\big) \to 1 - \alpha.
\]
The result continues to hold if $\mathrm{V}_{11}$ is replaced by $\hat{\mathrm{V}}_{11}$ such that $\hat{\mathrm{V}}_{11}\mathrm{V}_{11}^{-1} \to_P I$ and $\mathrm{V}_{11}$ is bounded away from zero.

1.6 The Pennsylvania Re-employment Bonus Experiment

Here we re-analyze the Pennsylvania re-employment bonus experiment, which was previously studied in [3], among others. The inferential results on simultaneous bands that we report below are new. These experiments were conducted in the 1980s by the U.S. Department of Labor to test the incentive effects of alternative compensation schemes for unemployment insurance (UI). In these experiments, UI claimants were randomly assigned either to a control group or to one of five treatment groups.⁵ In the control group the current rules of the UI applied. Individuals in the treatment groups were offered a cash bonus if they found a job within some pre-specified period of time (the qualification period), provided that the job was retained for a specified duration. The treatments differed in the level of the bonus, the length of the qualification period, and whether the bonus was declining over time within the qualification period; see [3] for further details on the data.

⁵ There are six treatment groups in the experiments. Following [3], we merge groups 4 and 6.

To evaluate the impact of the treatments on unemployment duration, we consider the linear regression model
\[
Y = D'\beta_1 + W'\beta_2 + \varepsilon, \qquad \mathrm{E}\,\varepsilon(D', W')' = 0,
\]
where $Y$ is the log of the duration of unemployment, $D$ is a vector of 5 treatment indicators, and $W$ is a set of $p_2 = 16$ controls including age group dummies, gender, race, the number of dependents, the quarter of the experiment, location within the state, existence of recall expectations, and type of occupation.

Comment 1.6.1 The assignment of units to treatment $D$ is random. We commonly refer to such a case as a randomized control trial (RCT). Under RCT, the projection coefficient $\beta_1$ has the interpretation of the causal effect of the treatment on the average outcome. We thus refer to $\beta_1$ as the average treatment effect (ATE).
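Putting the pieces together, the following R sketch shows how simultaneous bands of the kind reported below can be computed, assuming a data frame `penn` with the log duration `ldur`, the five treatment dummies `t1`, ..., `t5`, and the 16 controls; all names are hypothetical, and the sketch reuses the crit_value function outlined in Section 1.5.

```r
# Sketch: 90% simultaneous confidence bands for the five treatment effects.
library(sandwich)
fit <- lm(ldur ~ ., data = penn)
trt <- c("t1", "t2", "t3", "t4", "t5")
est <- coef(fit)[trt]
Vt  <- vcovHC(fit, type = "HC0")[trt, trt]     # robust covariance of the treatment coefficients
se  <- sqrt(diag(Vt))
c_sim  <- crit_value(Vt, alpha = 0.10)         # only the correlation of Vt matters here
c_bonf <- qnorm(1 - 0.10 / (2 * length(trt)))  # Bonferroni critical value, for comparison
cbind(estimate = est, lower = est - c_sim * se, upper = est + c_sim * se)
```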
Note that the covariates $W$ here are independent of the treatment $D$, so we could identify $\beta_1$ by a linear regression of $Y$ on $D$ alone, without adding covariates. However, we do add covariates in an effort to improve the precision of our estimates of the average treatment effects.

Figure 1.1 shows 90% confidence sets for the five treatment effects $\beta_1$, constructed using a sample of 13,913 observations.

- The critical value for the simultaneous bands, $c = 2.27$, is greater than the pointwise critical value, 1.65.
- It is less than the critical value from the Bonferroni correction, 2.33, obtained as the $(1 - \bar\alpha/2)$-quantile of the normal distribution with $\bar\alpha = \alpha/5$.

The idea of the Bonferroni correction is to use the union bound
\[
\mathrm{P}\Big(\bigcup_{j=1}^{p_1} \text{event}_j\Big) \le \sum_{j=1}^{p_1} \mathrm{P}(\text{event}_j)
\]
to bound the probability of the non-coverage event, with $\text{event}_j = \{\beta_{1j} \notin [l_j, u_j]\}$.

Comment 1.6.2 In this case, of the three treatment levels with a statistically significant effect on unemployment duration based on pointwise confidence intervals, only one remains significant after accounting for simultaneous inference.

Next we consider a more flexible version of the basic model, where the controls include the original set of controls as well as all two-way interactions, giving a total of $p_2 = 120$ controls. We repeat the exercise above with roughly similar conclusions. Figure 1.2 shows 90% confidence intervals for the five treatment effects $\beta_1$. We see that the addition of many more controls does not change the inferential results noticeably. This highlights the robustness of the conclusions with respect to enriching the set of controls, and is also in line with our asymptotic theory, which states that inference is not affected in the regime where the number of controls $p_2$ is much smaller than $n$, even though the number of controls is substantial here.

[Figure 1.1: Treatment Effects on Unemployment Duration. 90% simultaneous and 90% pointwise confidence intervals for the five treatment effects (T1-T5); vertical axis: average effect (log of weeks). Number of controls is 16. Critical value for the simultaneous confidence interval obtained by simulation with 100,000 repetitions.]

[Figure 1.2: Treatment Effects on Unemployment Duration. Regression estimates with 90% simultaneous and 90% pointwise confidence intervals for the five treatment effects (T1-T5); vertical axis: average effect (log of weeks). Number of controls is 120. Critical value for the simultaneous confidence interval obtained by simulation with 100,000 repetitions.]

Notes

Least squares was invented by both Legendre and Gauss around 1800. Frisch, Waugh, and Lovell discovered the partialling-out interpretation of the least squares coefficients in the 1930s. The adaptivity result of Lemma 1.3.1 went unnoticed by empiricists and has also managed to escape statistics and econometrics textbooks; we note this property here. We will get to causality and treatment effects later in the course.
Regularity conditions under which Lemma 1.3.1 and Theorem 1.3.2 hold under fixed-$p$ asymptotics can be found in introductory econometrics texts, for example [4], and under $p \to \infty$ with $p/n \to 0$ asymptotics in [5, 6]. The results of the latter reference allow for $p/n \to c$, which introduces an additional asymptotic variance term when $c > 0$; the case $c = 0$ recovers Theorem 1.3.2.

1.7 Some Technical Results∗

The starred sections contain more technical material, of interest to some students, which requires some background knowledge in probability theory or other areas. The following lemma shows that the usual convergence in distribution to a normal vector, denoted by $\rightsquigarrow$, implies convergence in the sense of $\stackrel{a}{\sim}$ used in the text.

Useful Implication of the CLT in $\mathbb{R}^k$
Lemma 1.7.1 Consider a sequence of random vectors $X_n$ in $\mathbb{R}^k$, where $k$ is fixed, such that $X_n \rightsquigarrow X = N(0, I_k)$. The elements of the sequence and the limit variable need not be defined on the same probability space. Then
\[
\lim_{n \to \infty} \sup_{R \in \mathcal{R}} |\mathrm{P}(X_n \in R) - \mathrm{P}(X \in R)| = 0,
\]
where $\mathcal{R}$ is the collection of all convex sets in $\mathbb{R}^k$.

Proof. Let $R$ denote a generic convex set in $\mathbb{R}^k$. Let $R^\epsilon = \{z \in \mathbb{R}^k : d(z, R) \le \epsilon\}$ and $R^{-\epsilon} = \{z \in R : B(z, \epsilon) \subset R\}$, where $d$ is the Euclidean distance and $B(z, \epsilon) = \{y \in \mathbb{R}^k : d(y, z) \le \epsilon\}$. The set $R^{-\epsilon}$ may be empty. By Theorem 11.3.3 in [7], $\epsilon_n := \rho(X_n, X) \to 0$, where $\rho$ is the Prohorov metric. The definition of the metric implies that
\[
\mathrm{P}(X_n \in R) \le \mathrm{P}(X \in R^{\epsilon_n}) + \epsilon_n.
\]
By the reverse isoperimetric inequality [Prop. 2.5 in [8]],
\[
|\mathrm{P}(X \in R^{\epsilon_n}) - \mathrm{P}(X \in R)| \le k^{1/2}\epsilon_n.
\]
Hence
\[
\mathrm{P}(X_n \in R) \le \mathrm{P}(X \in R) + \epsilon_n(1 + k^{1/2}).
\]
Furthermore, for any convex set $R$, $(R^{-\epsilon_n})^{\epsilon_n} \subset R$ (interpreting the expansion of an empty set as an empty set). Hence, for any convex $R$, the definition of the Prohorov metric gives
\[
\mathrm{P}(X \in R^{-\epsilon_n}) \le \mathrm{P}(X_n \in R) + \epsilon_n.
\]
By the reverse isoperimetric inequality,
\[
|\mathrm{P}(X \in R^{-\epsilon_n}) - \mathrm{P}(X \in R)| \le k^{1/2}\epsilon_n.
\]
Conclude that
\[
\mathrm{P}(X_n \in R) \ge \mathrm{P}(X \in R) - \epsilon_n(1 + k^{1/2}).
\]
Combining the upper and lower bounds over all convex $R$ and letting $n \to \infty$ gives the result.

1.8 Homework Problems

1. Briefly explain partialling-out and the adaptivity property for the linear regression model, and use the gender wage gap data to illustrate your points. Present your discussion as a brief section of a professionally done empirical paper.

2. Briefly explain the idea of joint confidence bands, and use the Penn data to replicate the second set of results of our re-analysis of the Pennsylvania re-employment experiment. Present your results as a brief section of a professionally done empirical paper.

3. In the wage gap example and the re-employment experiment, discuss whether the empirical results have a causal or treatment effect interpretation. Does the estimated wage gap measure discrimination? Perhaps in part? Do the reductions in unemployment duration have a causal meaning? Present your discussion as a brief section of a professionally done empirical paper.

4. Explain why in randomized control trials, where the assigned treatment $D$ is independent of the controls $W$, we can estimate the linear predictive effect of $D$ on $Y$ controlling linearly for $W$ without actually controlling for $W$.
However, including $W$ may still be a good idea, because using $W$ can lower (and does not increase) the asymptotic variance of the least squares estimator.

5. Prove that the population partialling-out operator is linear on the space of random variables with finite second moments, namely for $Q$ and $U$ such that $\mathrm{E} Q^2 + \mathrm{E} U^2 < \infty$,
\[
V \equiv Q + U \ \Longrightarrow\ \tilde V \equiv \tilde Q + \tilde U.
\]

6. Provide a set of sufficient regularity conditions for Lemma 1.3.1 and Theorem 1.3.2 and prove them. Extra credit is given for handling the case where $p_2 \to \infty$ as $n \to \infty$, but do not spend too much time on this, as it is difficult.

Bibliography

Here are the references in citation order.

[1] Casey B. Mulligan and Yona Rubinstein. 'Selection, Investment, and Women's Relative Wages Over Time'. The Quarterly Journal of Economics 123.3 (2008), pp. 1061-1110. doi: 10.1162/qjec.2008.123.3.1061.

[2] A. Belloni, V. Chernozhukov, and C. Hansen. 'Inference on Treatment Effects After Selection Amongst High-Dimensional Controls'. Review of Economic Studies 81.2 (2014), pp. 608-650.

[3] Yannis Bilias. 'Sequential testing of duration data: the case of the Pennsylvania "reemployment bonus" experiment'. Journal of Applied Econometrics 15.6 (2000), pp. 575-594.

[4] Halbert White. Asymptotic Theory for Econometricians. Academic Press, 2014.

[5] Victor Chernozhukov, Denis Chetverikov, and Kengo Kato. 'Some New Asymptotic Theory for Least Squares Series: Pointwise and Uniform Results'. Journal of Econometrics 186 (2015), pp. 345-366.

[6] M. D. Cattaneo, M. Jansson, and W. K. Newey. 'Inference in Linear Regression Models with Many Covariates and Heteroskedasticity'. ArXiv e-prints (July 2015).

[7] Richard M. Dudley. Real Analysis and Probability. Vol. 74. Cambridge University Press, 2002.

[8] Louis H. Y. Chen and Xiao Fang. 'Multivariate Normal Approximation by Stein's Method: The Concentration Inequality Approach'. arXiv preprint arXiv:1111.4073 (2011).