Essays on Veterans Disability Compensation and the Effects of Military Service
by
Kyle Greenberg
B.S. Mathematics
United States Military Academy, 2005
Submitted to the Department of Economics
in Partial Fulfillment of the Requirements for the Degree of
Doctor of Philosophy in Economics
at the
Massachusetts Institute of Technology
September 2015
© 2015 Kyle Greenberg. All rights reserved.
The author hereby grants to MIT permission to reproduce and to distribute publicly paper and electronic
copies of this thesis document in whole or in part in any medium now known or hereafter created.
Signature of Author: Signature redacted
Department of Economics
August 15, 2015

Certified by: Signature redacted
Joshua Angrist
Ford Professor of Economics
Thesis Supervisor
Certified by:
Signature redacted
David Autor
Professor of Economics
Thesis Supervisor
Accepted by:
Ricardo Caballero
Ford International Professor of Economics
Chairman, Departmental Committee on Graduate Studies
Essays on Veterans Disability Compensation and the Effects of Military Service
by
Kyle Greenberg
Submitted to the Department of Economics
on August 15, 2015 in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy in
Economics
ABSTRACT
This dissertation consists of three empirical studies, each using administrative data from the U.S. Army, the
Department of Veterans Affairs (VA), and the U.S. Social Security Administration. The first chapter
investigates the correlation between local labor markets and VA Disability Compensation (DC) receipt
among National Guard veterans who deployed to a combat zone between 2003 and 2006. I find that veterans
from hometowns with low employment-to-population ratios are more likely to receive DC for both PTSD
and physical conditions than veterans from hometowns with high employment-to-population ratios, but this
association is stronger for PTSD than it is for physical conditions. PTSD awards that result in monthly
benefit payments of at least $1,500 account for most of the correlation between employment-to-population
ratios and PTSD, while only physical awards that generate relatively low payments are associated with
employment-to-population ratios. The second chapter, a joint project with David Autor, Mark Duggan, and
David Lyle, analyzes the effect of the DC program on Vietnam veterans' labor force participation and
earnings. Exploiting the 2001 Agent Orange decision, which expanded DC eligibility for Vietnam-era
veterans who served in-theater but not for other Vietnam-era veterans, we assess the causal effects of DC
eligibility by contrasting the outcomes of these two Vietnam-era veteran groups. We estimate that benefits
receipt reduced labor force participation by 18 percentage points among veterans enrolled due to the policy,
though measured income net of transfer benefits rose on average. The third chapter exploits enlistment test
score cutoffs in a fuzzy regression-discontinuity (RD) design to evaluate the causal effect of military service
on mortality for individuals who applied to the active duty U.S. Army from 1996 through 2007. Fuzzy RD
estimates suggest that military service does not increase mortality, but I cannot rule out positive effects.
Ordinary Least Squares (OLS) estimates that compare applicants who did not enlist to soldiers with similar
characteristics provide additional evidence that military service does not increase mortality. OLS estimates
also indicate that a soldier's first year in the Army is associated with substantial reductions in mortality
relative to nonveteran applicants.
Thesis Supervisor: Joshua Angrist
Title: Ford Professor of Economics
Thesis Supervisor: David Autor
Title: Professor of Economics
Acknowledgements
I am grateful to Joshua Angrist and David Autor for their invaluable guidance and support throughout
my time in graduate school. Josh and David are incredible economists, inspiring teachers, and probably the
hardest workers I know. Having the opportunity to learn from them has not only influenced this dissertation,
but also shaped my approach to research. I would also like to thank several other faculty members who
always found time to offer me valuable advice, including Jim Poterba, Heidi Williams, Daron Acemoglu,
Amy Finkelstein, Michael Greenstone, and Parag Pathak.
I would not be writing this were it not for the help and friendship of my fellow graduate students. Alex
Bartik, Brendan Price, and Richard McDowell continue to amaze me with their knowledge, generosity, and
humor. I generated many ideas for research while talking to Sally Hudson, Ashish Shenoy, and Miikka
Rokkanen, often on our way back from intramural hockey games. I am also privileged to be friends with
people like Enrico Cantoni, Annalisa Scognamiglio, Ludovica Gazze, Tommaso Denti, Manasi Deshpandi,
Melanie Wasserman, Adam Sacarny, Christopher Palmer, and many others.
Luke Gallagher, Whitney Dudley, Johan Gorr, and Kevin Friedman of the U.S. Army's Office of
Economic and Manpower Analysis provided outstanding research assistance throughout my time in graduate
school. I would also like to thank Mike Yankovich and David Lyle for their valuable advice and mentorship
over the past four years. I also thank Colonel (Retired) Jeffery Peterson for giving me the opportunity to
study economics at MIT.
Most importantly, I must thank Anne for her love, patience, and encouragement during the many ups and
downs I experienced while writing this dissertation. I dedicate this to her.
Contents
1. Why Do So Many Post-9/11 Veterans Receive Disability Compensation? Evidence from Local Labor Markets
   1.1 Introduction
   1.2 Disability Compensation Program Details
   1.3 Theoretical Framework
   1.4 Data and Descriptive Statistics
   1.5 Results
   1.6 Can Health Factors Explain the Correlation Between PTSD and Local Economic Conditions?
   1.7 Conclusion
   1.8 Chapter 1 Appendix

2. The Impact of Disability Benefits on Labor Supply: Evidence from the VA's Disability Compensation Program
   2.1 Introduction
   2.2 The Veterans Disability Compensation Program: Eligibility, Benefits, and Work Incentives
   2.3 The 2001 Agent Orange Decision, Type 2 Diabetes, and 'Service-Connectedness'
   2.4 Data and Analytic Sample
   2.5 The Impact of the Agent Orange Policy on Receipt of Disability Benefits
   2.6 Consequences for Labor Force Participation
   2.7 Comparing Labor Supply and Enrollment Impacts: Instrumental Variables Estimates
   2.8 Impacts on Total Earnings
   2.9 Conclusion
   2.10 Chapter 2 Appendix

3. The Impact of Voluntary Military Service on Mortality: Evidence from Active Duty U.S. Army Applicants
   3.1 Introduction
   3.2 Institutional Background
   3.3 Data and Descriptive Statistics
   3.4 Empirical Strategy
   3.5 Results
   3.6 Mortality Trends over Time by Application Cohorts
   3.7 Conclusion
   3.8 Chapter 3 Appendix
Chapter 1. Why Do So Many Post-9/11 Veterans Receive Disability Compensation?
Evidence from Local Labor Markets*
Kyle Greenberg
July 2015
Abstract
U.S. veterans who served after September 2001 are nearly twice as likely to receive Disability Compensation
(DC) as veterans of earlier wartime eras. Post-Traumatic Stress Disorder (PTSD) award rates are almost
double those of Vietnam-era veterans even though casualty rates in Iraq and Afghanistan were a fraction of
those in Vietnam. Using administrative data from the U.S. Army and the Department of Veterans Affairs, I
investigate the correlation between local labor markets and DC receipt among National Guard veterans who
deployed to a combat zone between 2003 and 2006. I find that veterans from hometowns with low
employment-to-population ratios are more likely to receive DC for both PTSD and physical conditions than
veterans from hometowns with high employment-to-population ratios, but this association is stronger for
PTSD than it is for physical conditions. PTSD awards that result in monthly benefit payments of at least
$1,500 account for most of the correlation between employment-to-population ratios and PTSD, while only
physical awards that generate relatively low payments are associated with employment-to-population ratios.
Combat injuries, mortality, and Department of Defense PTSD diagnoses are not correlated with labor market
conditions. Furthermore, the effects of combat exposure on DC receipt do not vary with local employment
rates, suggesting that vulnerability to combat does not explain the relationship between local economic
conditions and PTSD. An alternative explanation is that imprecise screening and the implicit employment
criteria associated with mental disorders make veterans in weak labor markets more likely to receive PTSD
benefits than veterans in strong labor markets.
* I am grateful to Joshua Angrist and David Autor for invaluable guidance and support. I would like to thank Alex
Bartik, Sally Hudson, Chris Palmer, Brendan Price, Miikka Rokkanen, Annalisa Scognamiglio, Ashish Shenoy, and
seminar participants at MIT and the United States Military Academy. Luke Gallagher of the U.S. Army's Office of
Economic and Manpower Analysis provided outstanding research assistance. The findings and conclusions expressed
herein are those of the author and do not represent the position of the Department of Defense, the United States Army,
or the United States Military Academy.
1.1 Introduction
Nearly one-third of all veterans who served after September 2001 received Disability Compensation
(DC) from the Department of Veterans Affairs (VA) in 2013, an unprecedented share (Veterans Benefits
Administration, 2014). Table 1.1 shows that post-9/11 veterans are more likely to receive DC than are
veterans of the Vietnam, Korea, or World War II eras despite suffering fewer fatalities and wounds than
previous wartime cohorts. Concern for the physical and emotional well-being of soldiers and young veterans
has elicited important policy changes. In March 2008, Secretary of Defense Robert Gates developed the
Wounded Warrior Task Force to improve services for soldiers suffering from physical wounds, post-traumatic stress, and traumatic brain injury (Gates, 2014, Chapter 4). In February 2015, the U.S. Senate
unanimously passed the Clay Hunt Suicide Prevention for American Veterans Act, which requires annual
independent evaluations of VA mental health care and suicide prevention programs.
Compensation for Post-Traumatic Stress Disorder (PTSD) accounts for much of the unusually high rate
of disability receipt among recent veterans. Post-9/11 veterans are twice as likely to receive payments for
PTSD as are Vietnam-era veterans (Table 1.1). This is particularly noteworthy because the number of
Vietnam-era veterans receiving compensation for PTSD increased almost fourfold between 1999 and 2013,
from 90,695 to 348,164 (Veterans Benefits Administration, 2000 and 2014). A veteran's monthly disability
payment increases with his Combined Disability Rating (CDR), which, in turn, is a function of condition-specific ratings for each disability that is present. Table 1.2 reveals that service-connected mental health
conditions, nearly 60 percent of which are PTSD, result in much higher ratings than common service-connected physical conditions.
Unsurprisingly, veterans with high disability ratings work less. Table 1.3 reports the share of employed
veterans by CDR ranges for male, post-9/11 veterans in the 2013 American Community Survey (ACS) who
were born between 1974 and 1986. Forty-two percent of veterans who reported a combined rating of 70 or
higher were employed in 2013, which was half of the employment rate of young veterans who did not
receive VA disability payments. Both health and the complicated incentives built into the DC program may
contribute to this association. In 2013, the average annual monetary compensation for veterans with
combined ratings between 70 and 100 was over $29,800 (Veterans Benefits Administration, 2014). Such
large payments, which are not subject to federal or state taxes and include free access to VA health care,
could reduce labor supply through a non-distortionary income effect. Additionally, two key aspects of the
program generate an implicit tax on income. First, veterans with a single disability rated at 60 or higher, or
with a combined rating of 70 or higher and at least one disability rated at 40 or higher, are eligible for an
Individual Unemployability (IU) designation, which grants payments at the 100 CDR level but prohibits
participation in substantially gainful employment. Second, ratings for mental health disorders are based not
only on the severity of symptoms, but also on the degree of social and occupational impairment due to such
symptoms. Thus, veterans who claim mental disorders have a clear incentive to reduce their labor supply to
achieve larger benefits.
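The IU eligibility rule just described combines two routes. A minimal sketch of that rule as stated in the text (the function name and interface are illustrative, not a VA system):

```python
def iu_eligible(condition_ratings, cdr):
    """IU eligibility per the rule described in the text:
    a single disability rated at 60 or higher, OR
    a CDR of 70 or higher with at least one disability rated at 40 or higher.
    """
    if any(r >= 60 for r in condition_ratings):
        return True
    return cdr >= 70 and any(r >= 40 for r in condition_ratings)

# A veteran with a 70 PTSD rating qualifies outright; a veteran whose
# highest single rating is 40 needs a CDR of at least 70.
print(iu_eligible([70, 20], 80))      # -> True
print(iu_eligible([40, 40, 30], 70))  # -> True
print(iu_eligible([30, 30], 50))      # -> False
```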
If nonemployment increases the appearance of occupational impairment, thus leading to higher disability
ratings and potentially IU status, then veterans from hometowns with weak labor markets should be more
likely to receive benefits for PTSD than veterans from hometowns with strong labor markets. This same
result could occur if DC application costs are large and veterans from regions with better economic
conditions face a higher opportunity cost of applying. If screening for PTSD is just as precise as screening
for physical conditions, and if the application costs for PTSD are equal to the application costs for physical
conditions, then local economic conditions should exhibit the same correlation with PTSD as they do for
physical conditions that have ratings similar to PTSD. However, under the assumption that application costs
for PTSD and physical conditions are the same, but screening for PTSD is less precise, the magnitude of the
correlation between local employment rates and physical conditions provides a lower bound on the
correlation between local employment rates and PTSD that is due to application costs. In other words, if
some veterans obtain PTSD benefits because nonemployment increases the appearance of occupational
impairment and screening for PTSD is relatively imprecise, then the association between PTSD and local
economic conditions should be stronger than the association between physical conditions and local economic
conditions.
This chapter uses individual-level data from the Army National Guard and the Department of Veterans
Affairs to investigate whether the employment criteria associated with mental disorders contributes to the
high rate of PTSD benefits among male National Guard soldiers who deployed to a combat zone between
2003 and 2006. The results of my investigation suggest that the employment-to-population ratio in a soldier's
hometown is more strongly correlated with benefits for PTSD than benefits for physical conditions. The
impact of local employment rates on receipt of any PTSD benefits is 50 percent larger than the impact of
local employment rates on receipt of any benefits for physical conditions. Adding the intensive margin
greatly strengthens this result. The correlation between local economic conditions and PTSD ratings, where
ratings combine the extensive (i.e. receipt of benefits) and intensive (i.e. severity of rating) margins, is
statistically significant and substantial. The impact of a standard deviation decrease in local employment
rates on PTSD ratings is 65 percent as large as the effect of a standard deviation increase in military unit
casualty rates, where casualty rates reflect an arguably exogenous measure of exposure to situations that
should elicit symptoms of post-traumatic stress.1 In contrast to the sharp correlation between local
employment rates and PTSD ratings, local employment rates exhibit no statistically significant correlation
with physical disability ratings, where a soldier's physical rating is his CDR based on physical conditions
alone.
Part of the reason local employment rates are more strongly correlated with PTSD ratings than they are
with physical ratings is mechanical. If few physical conditions result in ratings commensurate with ratings
for PTSD, then PTSD ratings will naturally exhibit stronger intensive margin correlation with local
employment rates than physical ratings will exhibit even if PTSD and physical disabilities are equally
correlated with local employment rates on the extensive margin. Yet almost all of the correlation between
local employment rates and PTSD is concentrated in PTSD awards rated 70 or higher, the ratings that require
the greatest appearance of social and occupational impairment. Moreover, the association between local
employment rates and PTSD ratings of 70 or higher is significantly stronger than the association between
1 Prior to October 2013, the VA's criteria for a PTSD diagnosis were based on the Diagnostic and Statistical Manual of
Mental Disorders (DSM-IV). DSM-IV defines PTSD as "intense fear, helplessness, or horror" resulting from a person
experiencing, witnessing or being "confronted with an event or events that involved actual or threatened death or
serious injury, or a threat to the physical integrity of self or others" (pp. 427--428).
local employment rates and physical ratings of 30 or higher, even though the percentage of veterans within
my sample who have PTSD ratings of 70 or higher is less than the percentage of veterans with physical
ratings of at least 30. I therefore interpret the responsiveness of relatively high PTSD ratings to local
employment rates as primarily the result of the implicit employment criteria associated with benefits for
PTSD rather than DC application costs.
I also investigate leading alternatives to this interpretation. One alternative is that soldiers from regions
with poor economic conditions are exposed to more combat than soldiers from regions with strong economic
conditions. I find no correlation between local employment rates and casualties due to hostile action, which
argues against this hypothesis. A second alternative is that the same traumatic experience might be more
likely to trigger PTSD in veterans from weak labor markets than in veterans from strong labor markets. This
is not consistent with my results because the effect of combat exposure on PTSD ratings does not vary with
local employment rates. A third alternative is that local employment rates are correlated with factors that
influence behavioral health. I find little evidence for this hypothesis, as local employment rates are not
associated with three measures of health that are independent of the DC program: 1) PTSD disability awards
granted by the far more stringent DoD Disability Evaluation System (DES),2 2) mortality as of May 2012,
and 3) the percentage of veterans within the same geographic area used to construct employment rates who
report symptoms consistent with post-traumatic stress.
Recent research on the DC program has found that Vietnam-era veterans with lower Armed Forces
Qualification Test (AFQT) scores and lower education levels are more likely to receive DC, but were no
more likely to serve in combat, than veterans with higher AFQT scores and education levels (Angrist, Chen,
and Frandsen, 2010; Autor, Duggan, and Lyle, 2011). A negative correlation between disability receipt and
measures of human capital is potentially consistent with the health effects of military service because low
cognitive ability is a risk factor for post-traumatic stress (McNally and Shin, 1995; Macklin et al., 1998). I
also find a negative relationship between casualty rates and AFQT scores among National Guard soldiers,
which raises the possibility that lower-skilled Vietnam veterans were more likely to suffer injuries
2 The DES usually grants disability payments only for conditions that make a soldier unfit for service in a combat zone.
Additional details on this program are discussed below.
conditional on service in a combat zone. My findings therefore provide more credible evidence that factors
other than health have contributed to the substantial growth in DC rolls over the past decade. Additionally,
while several studies have documented the responsiveness of Social Security Disability Insurance (SSDI) to
local economic conditions (Black, Daniel, and Sanders, 2002; Autor and Duggan, 2003), few have investigated
this relationship in the context of DC. 3 My results also complement recent work by Coile, Duggan, and Guo
(2015), who find that since 2010, the labor force participation of veterans between the ages of 25 and 34 has
been more sensitive to economic shocks than that of non-veterans of the same age.
This investigation also provides estimates of the effects of combat exposure on disability receipt. Several
studies have estimated the effects of combat exposure on diagnoses or symptoms of post-traumatic stress, but
not on disability receipt for PTSD (Cesur, Sabia, and Tekin, 2013, Hoge et al., 2004, Hoge, Auchterlonie,
and Milliken, 2006, Kolkow, Spira, Morse, and Grieger, 2007, Tanielian and Jaycox, 2008). Most of these
studies rely on self-reported measures of exposure, which could be endogenous to DC if surveys about
combat experience are conducted after a veteran has had an opportunity to apply for disability benefits, a
possibility discussed in Benitez-Silva et al. (2004). But even surveys that have no bearing on DC can be very
inaccurate. For example, the highly publicized RAND report, The Invisible Wounds of War, derived
conclusions from a survey where 23 percent of service members reported being injured in a manner that did
not require hospitalization, 10.7 percent reported being injured severely enough to require hospitalization,
and 9.5 percent reported "engaging in hand to hand combat" (Tanielian and Jaycox, 2008, Table 7.3). These
rates seem extraordinarily high considering that fewer than 2.3 percent of the 1.9 million service members
deployed to Iraq or Afghanistan prior to April 2009 suffered wounds as a result of hostile action (Institute of
Medicine, 2010).4
Understanding how combat exposure and factors other than health contribute to disability receipt is
3 A 2006 working paper by Duggan, Rosenheck, and Singleton (2006) suggests that total DC program expenditures
became more sensitive to local economic conditions after a 2001 policy change that expanded DC eligibility to Vietnam
veterans who served in theater. The final version of this paper, Duggan, Rosenheck, and Singleton (2010), does not
report this finding.
4 The Defense Manpower Data Center (DMDC) maintains a monthly tally of military casualties by conflict at
https://www.dmdc.osd.mil/dcas/pages/casualties.xhtml (accessed 5 March 2015). This tally indicates that 42,000
service members suffered wounds prior to January 2011.
critical for determining whether disability payments are the most effective way to rehabilitate veterans with
service-connected injuries. The large financial awards associated with high disability ratings could act as a
disincentive for treatment of post-traumatic stress, a possibility discussed in Gros, Price, Yuen, and Acierno
(2013), Bradley, Greene, Russ, Dutra, and Westen (2005), and McNally and Frueh (2013) (among others).
The work disincentives built into the DC program could also counteract policies designed to increase veteran
employment, such as the VOW to Hire Heroes Act and the Post-9/11 G.I. Bill. Using the 2001 Agent Orange
decision that expanded DC eligibility for Vietnam-era veterans who served in theater, Autor, Duggan,
Greenberg, and Lyle (2015a) estimate that DC receipt reduced labor force participation of older veterans by
18 percentage points. If similar effects persist among more recent veterans, then there could be negative
repercussions on the U.S. economy's aggregate labor supply given that 8 percent of American men between
the ages of 25 and 34 have served or are currently serving in the military (2013 ACS). Furthermore, the long-term costs of veteran disability payments are substantial and growing. Between 2000 and 2013, real annual
DC payments to all veterans increased by 146 percent, from $20.0 billion to $49.2 billion, despite a 20
percent decline in the overall veteran population during the same time period (Veterans Benefits
Administration, 2013 and 2014).
Section 1.2 of this chapter describes the DC program and section 1.3 outlines a theoretical framework for
characterizing the relationships between local economic conditions, combat exposure, application costs, and
decisions to apply for benefits. Section 1.4 describes my data and the construction of unit-level casualty rates
as a measure of combat exposure. Section 1.5 discusses estimates linking local employment rates and combat
exposure to DC take-up. Section 1.6 investigates whether health factors contribute to the correlation between
labor market conditions and PTSD, and section 1.7 concludes.
1.2 Disability Compensation Program Details
DC provides monetary compensation and health care to veterans suffering from diseases or injuries that
are sustained or aggravated during military service. Veterans apply for all conditions that are potentially
service-related to one of 56 VA Regional Offices (VARO), which generally service veterans who live in a
particular state.5 For each condition claimed, a Rating Veterans Service Representative (RVSR) determines if
there is sufficient evidence to support the claim and whether the disability is service-connected. The RVSR
then assigns a rating on a scale of 0 to 100, in increments of 10, for each verified condition. A rating of 0
affords an applicant free VA health care to treat that particular condition, but does not result in monthly
benefit payments. Disability payments for DC are primarily a function of a veteran's Combined Disability
Rating (CDR), which is an increasing, concave function of condition-specific disability ratings. As an
example, a veteran awarded a 70 percent rating for PTSD and a 20 percent rating for back pain will receive a
CDR of 80: 70 for PTSD and 6 percent more for the back pain, which represents 20 percent of the
"remaining" 30 percent of ability. Since all CDRs are in increments of 10, 76 is then rounded up to 80. In
2014, a veteran with a spouse and one child received $131 per month for a CDR of 10, $1506 per month for
a CDR of 70, and $3134 per month for a CDR of 100. Veterans with a CDR of 30 or higher receive modest
additional payments if they have dependents.6
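The combining arithmetic in the example above can be sketched in a few lines. This is a minimal illustration of the rule as described here, not the VA's official combined ratings table (38 CFR 4.25); the function name is my own:

```python
def combined_disability_rating(ratings):
    """Combine condition-specific ratings (percentages) into a CDR.

    Each successive rating applies only to the efficiency "remaining"
    after the previous ratings, and the result is rounded to the
    nearest multiple of 10.
    """
    remaining = 100.0
    for r in sorted(ratings, reverse=True):  # apply the largest rating first
        remaining -= remaining * r / 100.0
    combined = 100.0 - remaining
    return int(round(combined / 10.0) * 10)

# The text's example: 70 for PTSD leaves 30 percent "remaining"; 20 percent
# of that 30 adds 6 points, giving 76, which rounds to a CDR of 80.
print(combined_disability_rating([70, 20]))  # -> 80
```

Note the concavity: each additional condition raises the CDR by less than its own rating, exactly as the "remaining ability" description implies.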
Unlike SSDI, DC is not explicitly work-contingent: a veteran may participate in the labor force while
still receiving benefits. A key exception to this rule is the Individual Unemployability (IU) benefit. Veterans
who receive a disability rating of 60 or more for one condition, or who have a CDR of 70 or more and at
least one condition with a disability rating of at least 40, may qualify for the IU benefit. IU status results in
payments at the 100 CDR level, but prohibits participation in "substantially gainful employment." DC
payments continue throughout a veteran's lifetime, unlike SSDI benefits, which terminate when a recipient
turns 65. DC payments are also exempt from federal and state taxation. Thus, each dollar of DC payments is
equivalent to $1.30 to $1.50 in pre-tax income, depending on a recipient's marginal tax bracket.
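The pre-tax equivalence noted above follows from dividing the untaxed payment by one minus the recipient's combined marginal tax rate. A sketch with illustrative rates (the specific rates here are assumptions, chosen only to reproduce the $1.30 to $1.50 range):

```python
def pretax_equivalent(payment, marginal_tax_rate):
    """Taxable income needed to net the same amount as an untaxed payment."""
    return payment / (1.0 - marginal_tax_rate)

# One dollar of tax-exempt DC at combined marginal rates of ~23% and ~33%:
print(round(pretax_equivalent(1.00, 0.23), 2))  # -> 1.3
print(round(pretax_equivalent(1.00, 0.33), 2))  # -> 1.49
```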
Access to VA healthcare increases with a veteran's CDR. The priority for enrollment in VA health care is
based on a veteran's Priority Group, which ranges from 1 through 8, with 1 being the highest priority for
enrollment. After leaving the military, all veterans receive 5 years of free VA health care under VA Priority
5 California has three VAROs while Texas, New York, and Pennsylvania all have two. Washington, D.C., has one.
6 The VA also adjusts payments annually according to the Consumer Price Index. An exact table of payments can be
found at http://www.benefits.va.gov/COMPENSATION/resources_comp01.asp.
Group 6. The VA places veterans with a CDR of 10 or 20 in Priority Group 3, veterans with a CDR of 30 or
40 in Priority Group 2, and veterans with a CDR of 50 or higher in Priority Group 1.
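The CDR-to-Priority-Group assignment just described reduces to a simple threshold mapping. A sketch covering only the groups discussed in this paragraph (the full system runs from 1 through 8, and the function name is my own):

```python
def priority_group(cdr):
    """Map a Combined Disability Rating to the VA health care Priority
    Groups described in the text (groups 1-3 only)."""
    if cdr >= 50:
        return 1
    if cdr >= 30:
        return 2
    if cdr >= 10:
        return 3
    return None  # a 0 rating confers condition-specific care, not a group here

print(priority_group(70))  # -> 1
print(priority_group(30))  # -> 2
print(priority_group(20))  # -> 3
```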
Service members on active duty are not allowed to receive DC until they have departed the service, but
National Guard soldiers may receive benefits as long as they temporarily terminate their benefits while
performing active duty service through either a deployment or a full-time National Guard position. There are
no time limits associated with disability applications: a veteran may submit an application just prior to
leaving active duty service, several years after leaving the service, or both if multiple conditions develop at
different times. Once awarded, DC benefits are rarely retracted. In fact, CDRs usually increase over time as
veterans request a reevaluation of existing conditions or apply for new conditions (Autor et al., 2015a,
Congressional Budget Office, 2014).
Mental disorders, 58 percent of which are PTSD, receive much higher ratings than other conditions.
Table 1.2, which reports the distribution of ratings for the five most common types of disabilities among all
DC recipients as of September 2013, indicates that 36 percent of mental disorders are rated 70 or higher,
relative to less than 2 percent of conditions for the other common disability types. There is also evidence that
concurrent SSDI receipt is more common among DC beneficiaries with mental disorders than it is among DC
beneficiaries with physical disorders. A 2009 General Accounting Office report found that the majority of
wounded veterans who received SSDI benefits were eligible for SSDI because of mental disorders,
predominately PTSD (General Accounting Office, 2009).
Unlike ratings for physical disabilities, which, except for IU status, are based exclusively on the severity
of symptoms, ratings for mental disorders are contingent on both the severity of symptoms and the degree of
social and occupational impairment resulting from those symptoms. The general rating formula for mental
disorders indicates that a rating of 100 is warranted for mental disorders that result in "total occupational and
social impairment, due to such symptoms as: gross impairment in thought processes or communication;
persistent delusions or hallucinations; grossly inappropriate behavior; persistent danger of hurting self or
others; intermittent inability to perform activities of daily living (including maintenance of minimal personal
hygiene); disorientation to time or place; memory loss for names of close relatives, own occupation, or own
name." Likewise, a rating of 70 is warranted for mental disorders resulting in "occupational and social
impairment, with deficiencies in most areas, such as work, school, family relations, judgment, thinking, or
mood" due to symptoms such as suicidal ideation, obsessive rituals, and near-continuous panic or depression
affecting the ability to function independently, among others. The three other possible ratings for mental
disorders, 50, 30, and 10, have progressively lower thresholds, both in terms of occupational impairment and
symptom severity.7 Thus, mental disorders could potentially distort labor supply decisions more than
physical disorders do, partly because nonemployment increases the appearance of occupational impairment
and partly because higher ratings for mental disorders make these conditions more likely to qualify for the IU
benefit.
1.3    Theoretical Framework
This section offers a theoretical framework for understanding how a veteran's employment status and
exposure to trauma influence his decision to apply for benefits. The model explains how a combination of
imprecise screening and a clear financial gain from disability benefits could cause veterans with poor
employment prospects to receive benefits at higher rates. However, the same outcome is consistent with
precise screening, but large application costs. I therefore conclude this section by offering a set of conditions
that enables an empirical test to distinguish between these two explanations.
A veteran decides whether to apply for a disability condition, denoted by d; an RVSR then determines
whether an award should be granted. For each award granted, the RVSR also assigns a disability rating, r.
RVSRs base their decisions on a veteran's condition-specific health, hd. If hd falls below a rating-specific
threshold, Td(r), then the veteran is granted a monetary payment of Br, which increases with r. If
hd ≥ Td(10), then the veteran receives no monetary payment, since 10 is the lowest possible rating. A
veteran's health is weakly decreasing
in his exposure to trauma, m. When the condition of interest is PTSD, exposure to trauma can be thought of
as proximity to fatal or dangerous events because this is the primary channel through which a soldier can
acquire a clinical diagnosis of PTSD. When the condition pertains to a physical impairment, however, trauma
reflects physical injuries from dangerous situations and other aspects of military service, such as wearing
heavy body armor, participating in strenuous physical training, or exposure to potentially dangerous
chemicals.
7 The general rating formula for mental disorders can be found in Subpart B (Disability Ratings), 38 C.F.R. 4.130
(2015).
I initially assume that application costs are zero and that VA evaluators are perfectly informed of hd. In
this scenario, a veteran will apply for conditions for which hd < Td(r). The fraction of veterans who receive
disability for condition d at rating r or higher is:
(1.1)    fd(r)(m) = Pr(hd(m) < Td(r))
Under the assumptions of zero application costs and perfect screening, fd(r)(m) is increasing in exposure to
trauma.
I now relax the assumption of no application costs. A veteran who applies for disability condition d pays an
application cost, cd(e), which represents the time costs associated with applying for condition d. Application
costs are decreasing in a veteran's employment status e, where a larger value of e represents more hours
worked, because veterans who are not in the labor force, unemployed, or working part-time face a lower
opportunity cost of applying.
A veteran will apply for condition d if the expected utility of applying, minus application costs, exceeds
the utility of not applying, which I normalize to zero. Thus, a veteran will apply for condition d if the
expected utility gain exceeds the cost of applying:
(1.2)    E[Ud | Applyd] > cd(e)
The fraction of veterans who receive disability for condition d at rating r or higher is now:
(1.3)    fd(r)(e, m) = Pr(Receive Award | Apply) × Pr(Apply)
                     = Pr(hd(m) < Td(r)) × Pr(cd(e) < Br × Pr(hd(m) < Td(r)))
With application costs, fd(r)(e, m) is increasing in m and decreasing in e.
I now relax the assumption that RVSRs are perfectly informed of an applicant's health by allowing
applicants to pay a cost, gd(e), to seek higher disability payments. A positive value of gd(e) could result
from seeking additional medical examinations if the first one is not favorable to the applicant, appealing
ratings decisions, or exaggerating symptoms. It could also result from veterans who stop working to increase
the appearance of occupational impairment, which could generate higher ratings for mental disorders or
possibly IU status.
gd(e) is decreasing in e for two reasons. First, the implicit tax on labor force participation generated by
an IU benefit and high mental disorder ratings is lower for veterans with poor employment prospects.
Second, nonemployed veterans face a lower opportunity cost of seeking more favorable disability ratings.
When an applicant pays gd(e), the RVSR's assessment of the applicant's health decreases by γd·gd(e),
where γd represents the effectiveness, or rate of return, of seeking benefits. Conditions that are more difficult
to diagnose (i.e., conditions where screening is less precise) have large values of γd, while conditions with
perfect screening have γd = 0.
It is important to emphasize that a positive value of gd(e) does not necessarily imply malingering on the
part of the veteran. RVSRs who adjudicate mental health decisions could also influence gd(e) if they
incorrectly assume a veteran's financial situation is directly attributable to post-traumatic stress, a possibility
discussed in Worthen and Moering (2011). Likewise, since higher combined ratings provide DC recipients
with better access to VA health care, some medical examiners might produce more favorable reports for
jobless disability applicants. In the context of this model, exaggeration on the part of the RVSR or the
medical examiner can be thought of as a very high rate of return (i.e., high γd) at minimal additional cost
(i.e., low gd(e)) to the applicant.
Now the fraction of veterans who receive disability for condition d at rating r or higher is:
(1.4)    fd(r)(e, m) = Pr(Receive Award | Apply) × Pr(Apply)
                     = Pr(hd(m) - γd·gd(e) < Td(r)) × Pr(cd(e) + gd(e) < Br × Pr(hd(m) - γd·gd(e) < Td(r)))
In section 5, I use local employment rates as a source of variation in a veteran's potential employment
status that is not correlated with the health effects of military service. Equations (1.3) and (1.4) illustrate how
correlation between disability receipt and local labor markets is consistent with the presence of application
costs, but could also result from a combination of imprecise screening and the possibility of financial gain
from disability benefits. However, under the assumption that application costs for PTSD and physical
conditions are the same, but screening for PTSD is less precise than screening for physical conditions, the
correlation between local employment rates and physical conditions provides an upper bound on the impact
of application costs.
To see this last point more clearly, consider a veteran whose opportunity cost of applying for DC is equal
to the present-discounted value of a lifetime stream of $200 per month when he lives in a hometown with
high employment rates, but is only equal to the present-discounted-value of a lifetime stream of $100 per
month when he lives in a hometown with low employment rates. Suppose that this veteran has a physical
ailment that will result in a rating of 10 if he applies for DC. Since a single disability rated at 10 results in a
monthly benefit of $131, this veteran will not apply for DC when he lives in a hometown with high
employment rates but will apply when he lives in a hometown with low employment rates. In other words,
application costs make DC benefits for physical conditions less likely when economic conditions are good.
Now suppose this veteran has a set of physical ailments that would result in a CDR of 30. In this case, the
veteran will apply for DC regardless of the economic conditions in his hometown because a CDR of 30
results in a monthly benefit of more than $400. However, if this same veteran does not suffer from any
physical ailments, but instead receives a rating of 30 for PTSD only when he lives in a hometown with low
employment rates, then application costs could not explain the correlation between local economic
conditions and the probability of receiving a PTSD rating of 30.
Assuming equal application costs for PTSD and physical conditions is probably reasonable because the
VA's application process is similar for all types of conditions. A 2005 Office of the Inspector General (OIG)
survey of RVSRs suggests that RVSRs have more difficulty rating claims for mental disorders than claims
for cardiovascular, respiratory, auditory, and eye conditions, supporting my assumption that screening for
PTSD claims is less precise (VA Office of the Inspector General, 2005). Nevertheless, physical conditions
like back pain, joint pain, migraine headaches, or Traumatic Brain Injury (TBI) can be difficult to diagnose,
which is why attributing all correlation between e and physical disabilities to application costs is likely to
be an overestimate.
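The comparative statics behind this empirical test can be illustrated with a small simulation. Everything below (the functional forms, parameter values, and award rule) is invented purely for illustration and is not drawn from the chapter's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def award_rate(n, employed_share, gamma, benefit=4.0):
    """Simulated share of n veterans receiving an award under the
    Section 1.3 model. All functional forms are illustrative."""
    e = rng.random(n) < employed_share       # employment status
    m = rng.exponential(1.0, n)              # exposure to trauma
    h = rng.normal(1.0, 1.0, n) - m          # health, decreasing in m
    # Rating-seeking effort gd(e): cheaper, and hence larger, for the
    # nonemployed; it shifts the RVSR's assessment by gamma * gd(e).
    g = np.where(e, 0.2, 1.0)
    awarded = (h - gamma * g) < 0.0          # screening outcome
    # Application cost cd(e): higher opportunity cost when employed.
    c = np.where(e, rng.uniform(0, 8, n), rng.uniform(0, 4, n))
    applies = c < benefit * awarded          # apply if gain exceeds cost
    return float(np.mean(applies & awarded))

# Receipt is higher where employment is lower (application costs), and
# the gap widens when screening is imprecise (gamma > 0), mirroring
# the distinction between equations (1.3) and (1.4).
gap_precise = award_rate(100_000, 0.6, 0.0) - award_rate(100_000, 0.9, 0.0)
gap_imprecise = award_rate(100_000, 0.6, 1.0) - award_rate(100_000, 0.9, 1.0)
print(gap_precise < gap_imprecise)
```

Under perfect screening (gamma = 0), the employment gap in receipt reflects application costs alone; imprecise screening adds a second channel that widens the gap.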
1.4    Data and Descriptive Statistics
A. Data and Sample Selection
My data combine National Guard administrative records with DoD casualty information and VA
disability rolls. The National Guard data are from a monthly panel of enlisted military records that contain
demographic information, military service information, state and zip code of residence, and mobilization
information. 8 The United States Military Academy's Office of Economic and Manpower Analysis used social
security numbers to merge military data to VA disability records, DoD casualty records, and the SSA Death
Master File (DMF). VA disability records consist of a single cross-section of individuals who were enrolled
in the DC program as of January 2014. Data on each beneficiary include a list of all disabilities for which the
VA approved compensation and the rating for each disability. Unfortunately, VA records do not indicate
whether a veteran applied for an award but was ultimately denied. They also do not indicate when a veteran
first started to receive compensation. Casualty data come from the Defense Casualty Information and
Processing System, the official casualty database of the DoD. The appendix to this chapter describes in detail
8 Unlike the Active Duty Army, where assignment of new soldiers is based on the needs of the Army, National Guard
soldiers sign enlistment contracts to join a specific military unit that is usually near their residence.
how casualties are recorded in DCIPS and explains why casualty records are not easy to manipulate. The
DMF indicates whether an individual had died as of May 2012.
I use employment information of non-veteran men in the American Community Survey (ACS) between
the years 2005 and 2009 to construct employment-to-population ratios (local employment rates) at the race
(white or non-white) by Super Public Use Microdata Area (PUMA) level.9 A veteran's Super PUMA is based
on his zip code of residence at the start of his first deployment. I mapped each five-digit zip code to a
specific Super PUMA using the zip-code-to-PUMA crosswalk file (based on year 2000 geographic
boundaries) provided by the Geographic Correspondence Engine of the Missouri Census Data Center.10 For
each combination of race and Super PUMA, I constructed the mean employment rate using non-veterans
born between 1974 and 1986, which captures 92 percent of the birth years for veterans used in the sample for
the analysis below. Part B of the appendix describes the construction of local employment rates in more
detail.
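The race-by-Super-PUMA employment rates amount to a grouped mean over the restricted birth cohorts. A sketch of that construction, with hypothetical column names standing in for the ACS extract:

```python
import pandas as pd

# Hypothetical ACS extract: non-veteran men, 2005-2009, with birth
# year, race group, Super PUMA, and an employment indicator.
acs = pd.DataFrame({
    "super_puma": [101, 101, 101, 102, 102],
    "race_group": ["white", "white", "non-white", "white", "white"],
    "birth_year": [1975, 1980, 1984, 1974, 1990],
    "employed":   [1, 0, 1, 1, 1],
})

# Restrict to the 1974-1986 birth cohorts, then take the mean
# employment rate within each race-by-Super-PUMA cell.
cohort = acs[acs["birth_year"].between(1974, 1986)]
emp_rates = (cohort.groupby(["super_puma", "race_group"])["employed"]
             .mean()
             .mul(100)          # express as a percentage, e.g. 80.73
             .rename("emp_rate")
             .reset_index())
print(emp_rates)
```

Each resulting row is one race-by-Super-PUMA cell, matching the level at which standard errors are later clustered.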
Part C of the appendix describes the construction of the analysis sample, which I henceforth refer to as
the National Guard sample. The final sample consists of 55,939 male, enlisted National Guard soldiers who
deployed to a combat zone between January 2003 and December 2006. It only includes veterans who joined
the National Guard with no prior military service and soldiers who deployed within ten years of joining the
Guard.11 Since I exploit within-state variation in employment rates at the Super PUMA level, the sample
does not include veterans from Alaska, North Dakota, South Dakota, Wyoming, Vermont, or Washington,
D.C., all of which have only one Super PUMA.
9 Super PUMAs are geographic areas with 400,000 or more residents. Each Super PUMA comprises two to six
PUMAs.
10 This file is available at http://mcdc2.missouri.edu/websas/geocorr2k.html. If a zip code spanned more than one Super
PUMA, I assigned it to the Super PUMA with the largest allocation factor (i.e., the Super PUMA where most
individuals within the zip code belonged).
11 Soldiers who serve beyond ten years have reenlisted at least twice and are likely to serve until they qualify for
retirement benefits. This restriction is also a data necessity because I do not have service information on soldiers who
joined the National Guard prior to 1993.
B. Descriptive Statistics
Table 1.4 reports summary statistics for soldiers in the National Guard sample. Column 1 presents statistics
for all soldiers in the sample and columns 2 through 5 present statistics for the following subsets: non-DC
recipients, DC beneficiaries for conditions other than PTSD, DC beneficiaries for PTSD and possibly other
conditions, and DC beneficiaries with a CDR of 100 or IU status. Nearly 33 percent of soldiers received DC
payments from the VA in January 2014, with 19 percent receiving compensation for PTSD and 1.5 percent
receiving an IU benefit. DC recipients were exposed to more combat, are slightly older, and have weaker
human capital attributes than non-DC beneficiaries. Whereas 40 percent of veterans who did not receive DC
served in a combat occupation during their first deployment, this fraction is 51 percent among veterans
receiving PTSD benefits and 50 percent among veterans receiving benefits at the 100 CDR level. Similarly,
even though only 3.3 percent of veterans in my sample were ever wounded in action (WIA), WIA rates were
5.1 percent for veterans with DC for conditions other than PTSD, 8.8 percent for veterans with PTSD
benefits, and 12.8 percent for veterans with an IU benefit or a CDR of 100.12 DC receipt, especially when it
involves PTSD, IU benefits, and Combined Disability Ratings of 100, is also negatively correlated with test
scores and education levels. The average AFQT score for veterans who did not receive DC in January 2014
is 60.4, but the average score is 59.0 among DC recipients without PTSD benefits, 54.2 among DC recipients
with PTSD benefits, and 51.4 among DC recipients with an IU benefit or a CDR of 100.
PTSD beneficiaries are disproportionately more likely to receive high combined ratings. On average,
veterans with PTSD have combined ratings that are twice as large, and have monthly payments that are 2.5
times as large, as DC recipients who are not compensated for PTSD. Veterans in the former group are also
six times as likely to have an IU designation. Interestingly, the two groups have similar physical ratings,
where a veteran's physical rating is the CDR he would receive after excluding ratings for mental health
conditions and tinnitus (ringing in the ears).1 3 Thus, differences in combined ratings are due almost entirely
to benefits for PTSD. Similarly, PTSD is the primary condition for 81 percent of DC beneficiaries with IU
status or a CDR of 100 (column 5) and for 78 percent of beneficiaries with a CDR of 60 or higher (not
included in Table 1.4). Although not indicated in Table 1.4, it is worth noting that 22 percent of veterans with
a physical rating of 100 were wounded in combat, compared to only 9 percent of veterans with a PTSD
rating of 100.
12 No DC beneficiaries were Killed in Action (KIA) or deceased because benefits terminate with death.
13 For example, a veteran with a 30 percent rating for PTSD, a 20 percent rating for knee arthritis, and a 10 percent
rating for tinnitus would have a CDR of 50 and a physical rating of 20. The physical rating is meant to have a lower
probability of type 2 screening errors than are ratings for PTSD. I exclude tinnitus from the construction of physical
Figure 1.1 plots PTSD and physical disability rates for veterans in the National Guard sample against
their local employment rates. The plots indicate a stronger correlation between local employment rates and
receipt of any PTSD award than between local employment rates and receipt of any physical award. Figure
1.2 plots average PTSD ratings and average physical ratings against local employment rates. Veterans who
do not receive benefits for PTSD have a PTSD rating of 0 and veterans who do not receive benefits for
physical conditions have a physical rating of 0. Thus, Figure 1.2 combines both the extensive (receipt of
benefits) and intensive (degree of rating) margins. Although veterans from Super PUMAs within the highest
quintile of employment rates have average PTSD ratings that are comparable to their average physical ratings
(7.6 for PTSD ratings, 6.6 for physical ratings), veterans from Super PUMAs within the lowest quintile of
employment rates have average PTSD ratings that are 65 percent larger than their average physical ratings
(13.2 for PTSD ratings, 8.0 for physical ratings). The plots in Figure 1.3 further investigate the possibility of
an intensive margin relationship between PTSD or physical ratings and local employment rates by displaying
the correlation between local employment rates and PTSD or physical ratings of 70 or more, both of which
make a veteran eligible to receive an IU award. The incidence of PTSD ratings of at least 70 is larger than
the incidence of physical ratings of at least 70 for veterans of all employment rates, but this is especially true
for veterans from hometowns with low employment rates. The plots in Figures 1.1, 1.2, and 1.3 provide some
initial descriptive evidence that PTSD is more strongly correlated with local economic conditions than are
physical conditions, and that much of the correlation between PTSD and local employment rates is
concentrated in PTSD awards rated 70 or higher. I explore these possibilities in greater detail below.
ratings because Title 38, Chapter 1, Section 4.87 of the electronic Code of Federal Regulations (accessed 10 December
2014) instructs medical examiners to evaluate tinnitus based on whether an applicant perceives ringing in one ear, both
ears, or the head.
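The combined-rating arithmetic in footnote 13, where ratings of 30, 20, and 10 combine to a CDR of 50, follows the VA's "whole person" method (the official combined ratings table appears in 38 C.F.R. 4.25). A minimal sketch of that arithmetic:

```python
def combined_rating(ratings):
    """Combine individual disability ratings using the VA's 'whole
    person' method: ratings are applied in descending order, each
    against the remaining non-disabled capacity, and the combined
    value is rounded to the nearest 10 (values ending in 5 round up)."""
    remaining = 100.0
    for r in sorted(ratings, reverse=True):
        remaining -= remaining * r / 100.0
    combined = 100.0 - remaining
    return int(combined / 10.0 + 0.5) * 10
```

For the footnote's example, combined_rating([30, 20, 10]) yields 50 (via 30, then 44, then 49.6), while the physical rating is simply combined_rating([20]) = 20.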
C. Company Casualties as a Proxy for Combat Exposure
Estimating the effects of combat exposure on disability receipt not only has important policy
implications, but also provides a useful benchmark for interpreting the association between local
employment rates and disability receipt. Since self-reported measures of combat exposure are susceptible to
bias and misinterpretation, as discussed in the introduction, I construct a proxy measure of combat exposure
using administrative casualty records that are less prone to error. The measure reflects the percentage of
casualties within each soldier's company during his first deployment. Specifically, I sum the number of
wounded in action and killed in action (KIA) in soldier i's company, c, over each month, t, that i is deployed,
leaving out the individual observation for i. Then I divide by the average number of deployed soldiers in i's
company over each month that i is deployed:
Ei = 100 × [ Σt=1..T Σj≠i 1(cjt = cit)·(KIAjt + WIAjt) ] / [ (1/T) Σt=1..T Nit ]
where T is the number of months of the first deployment for i, 1(cjt = cit) indicates that soldier j is deployed
in i's company in month t, and Nit = Σj≠i 1(cjt = cit) is the number of other soldiers deployed in i's company
in month t.14
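As a concrete sketch, Ei can be computed from a monthly deployment roster. The column names below (soldier_id, month, company, kia, wia) are hypothetical stand-ins for the administrative panel described in Section 1.4:

```python
import pandas as pd

def exposure(roster: pd.DataFrame, soldier_id) -> float:
    """Company casualty exposure Ei for one soldier: casualties among
    the *other* soldiers in his company, summed over his deployed
    months, divided by the average number of those other soldiers,
    and expressed as a percentage."""
    own = roster[roster["soldier_id"] == soldier_id]
    casualties = 0.0   # peers' KIA + WIA, summed over months
    peer_counts = []   # number of peers in the company, by month
    for _, row in own.iterrows():
        peers = roster[(roster["month"] == row["month"])
                       & (roster["company"] == row["company"])
                       & (roster["soldier_id"] != soldier_id)]
        casualties += (peers["kia"] + peers["wia"]).sum()
        peer_counts.append(len(peers))
    return 100.0 * casualties / (sum(peer_counts) / len(peer_counts))
```

For a soldier whose company held two other soldiers in each of his two deployed months, with one peer wounded once, the function returns 100 × 1 / 2 = 50, and the soldier's own casualties never enter his measure.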
At the individual level, there is little potential for selection into combat exposure. Ei only includes the
casualty rates of other soldiers within each individual's company. Even if less risk-averse soldiers exhibit
more dangerous behavior while deployed, this will not directly influence their level of E. Similarly, since
some soldiers might volunteer for additional deployments, only considering company casualties from a
soldier's first deployment avoids this sort of selection bias. Consistent with this, Appendix Table 1.1 shows
that E is not strongly correlated with baseline characteristics after controlling for a soldier's occupation and
rank at the start of his first deployment between 2003 and 2006.15 Notably, Appendix Table 1.1 also reveals
14 Yankovich (2011) also uses administrative casualty data to investigate the effects of combat exposure on military
experience. My measure, Ei, differs from the measure of combat exposure constructed by Yankovich in three ways.
First, my measure is based on the percentage of service members in each soldier's company who suffer a casualty,
while Yankovich's measure is the number of monthly casualties sustained in a soldier's company, averaged over each
month the soldier is deployed. Second, my measure only includes company casualty rates during each soldier's initial
deployment; Yankovich incorporates casualties from all deployments. Lastly, my measure only includes casualties of
other personnel in a soldier's unit, whereas Yankovich includes casualties sustained by the individual soldier.
15 I define a soldier's occupation as his Career Military Field (CMF), which is the first two digits of his Military
Occupational Specialty (MOS).
that local employment rates exhibit no correlation with Ei. Soldiers with higher values of Ei are, on average,
more likely to have some college credits, but this correlation is economically small: a standard deviation
increase in Ei is associated with only half a percentage point increase in the incidence of some college
experience.
Nevertheless, Ei is still an imperfect measure of combat exposure. Companies with high casualty rates
might have poor leadership, which could be correlated with disability receipt or other outcomes. Ei is also
better suited to measure the effects of emotional trauma than physical trauma. A soldier who was not injured
himself but who witnessed several of his friends die or sustain wounds is more likely to develop symptoms
of post-traumatic stress and might be more likely to develop hearing problems or back pain later in life (e.g.,
if his military unit participated in more offensive actions). However, witnessing such trauma would not make
the soldier more likely to have lost a limb in battle. To address this last concern, I also control for wounds
sustained as a result of hostile action in many of the regressions that follow. When both measures of combat
exposure are included in the same regression, I interpret estimates of Ei as the effect of exposure to trauma
that is not due to physical combat injuries.
1.5    Results
A. Casualties, Local Employment Rates, and AFQT Scores
A negative relationship between casualties and local employment rates would suggest that veterans from
areas with poor economic conditions experienced more dangerous situations in combat. I investigate this
possibility by estimating the following equation:
(1.5)    CASi = α + β·EMPgr + θsjr + Xi′δ + εi
CASi indicates the number of casualties (WIA plus KIA) for individual i of race r (white or non-white)
who lived in Super PUMA g and state s at the start of his first deployment year j.16 EMPgr is the
16 Casualties are summed over all deployments between 2003 and 2011 for each soldier, not just his first deployment.
Although estimated by ordinary least squares, this is technically not a linear probability model because approximately
employment rate for veterans from Super PUMA g and race r. No Super PUMA spans multiple states, so g
uniquely identifies s. A fixed effect for each combination of state, deployment year, and race ensures that
variation in employment rates comes from veterans who deployed from different Super PUMAs, but who
came from the same state, deployed in the same year, and have the same race. Xi is a vector of individual
controls that includes a linear term for AFQT and indicators for year of birth, military occupation, military
rank, and education level.17 I cluster standard errors on each combination of Super PUMA and race, the level
of variation in local employment rates.
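Estimating equation (1.5) is standard OLS with fixed effects and clustered standard errors. A sketch using statsmodels on synthetic data, with hypothetical variable names (cas, emp, cell, cluster) standing in for the analysis file:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per soldier, with the casualty
# outcome (scaled by 100), the local employment rate, a state-by-
# deployment-year-by-race cell, and a Super-PUMA-by-race cluster.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "cas": rng.binomial(1, 0.05, n) * 100.0,
    "emp": rng.normal(80.7, 5.9, n),
    "cell": rng.integers(0, 10, n),
    "cluster": rng.integers(0, 40, n),
})

# OLS with cell fixed effects; standard errors clustered at the level
# of variation in employment rates (Super PUMA by race).
fit = smf.ols("cas ~ emp + C(cell)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster"]})
print(round(fit.params["emp"], 3))
```

The C(cell) term absorbs the state-by-year-by-race fixed effects, so the coefficient on emp is identified from Super PUMA variation within those cells, as described above.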
Table 1.5 reports estimates of (1.5). To make the table easier to read, I have multiplied all left-hand side
variables by 100. Estimates excluding Xi, reported in column 1, panel A, indicate that a one percentage point
increase in employment rates predicts a 0.007 percentage point increase in battlefield casualties, but this
estimate is not statistically different from zero. The estimated effects of local employment rates on casualties
hardly change with the inclusion of AFQT scores (column 2) and additional controls for year of birth,
occupation, rank, and education (column 3). Estimates of equation (1.5) with Ei included as a control,
reported in column 4, show no change in the correlation between local employment rates and casualties, but
reveal that veterans who served in companies that experienced high casualty rates were more likely to be
wounded than veterans who served in companies that experienced few casualties, as expected. Panels B and
C break down casualties into lethal (panel B) and nonlethal (panel C) components. Panel B actually indicates
a positive association between employment rates and combat fatalities, although this estimate is only
marginally significant.18 Overall, the results presented in Table 1.5 do not suggest that veterans from
hometowns with weak labor markets experienced greater battlefield exposure, injury, or death than veterans
from hometowns with strong labor markets.
0.15 percent of soldiers in my sample suffered more than one wound from hostile action. I consider a soldier's first
deployment between 2003 and 2006 as his first deployment during his military service. National Guard deployments to
combat zones were very rare prior to 2003 (Bonds, Baiocchi, and McDonald, 2010).
17 Within my sample there are four education levels (GED or high school dropout, high school graduate, some college,
college), 31 possible years of birth (1958-1988), 38 possible military occupations, and 3 possible ranks (Private First
Class (PFC) or below, Specialist, and Sergeant or above). Education, occupation, and rank information are determined
at the time a soldier starts his first deployment.
18 It is worth noting that panel B of Table 1.5 reveals a negative relationship between Ei and hostile deaths. Veterans
who were killed in action had their deployments cut short. Thus, their values for Ei are mechanically smaller.
Interestingly, Table 1.5 reveals that soldiers with low AFQT scores were more likely to be injured or
killed in battle than soldiers with high AFQT scores, even after controlling for age, education, occupation,
and rank. The estimates reported in column 4, which show that the negative relationship between AFQT
scores and casualties persists with the inclusion of company casualty rates, does not indicate that soldiers
with low AFQT scores sorted into companies that were exposed to more combat.1 9 Low cognitive ability
could be a risk factor for being killed or wounded in combat. Soldiers with high AFQT scores might have
greater ability to avoid or react to dangerous situations than soldiers with low AFQT scores, thus reducing
their likelihood of suffering an injury. This is consistent with studies that have found a positive correlation
between AFQT scores and soldier performance on tanker gunnery ranges and skill qualification tests
(Scribner, Smith, Baldwin, and Phillips, 1986; Horne, 1987; Winkler, 1999).
B. The Effects of Local Employment Rates and Combat Exposure on Disability Receipt: Extensive Margin
As discussed in the theoretical section, the presence of either application costs or imprecise screening for
some disability conditions could cause veterans from weak labor markets to receive disability at higher rates
than veterans from strong labor markets. To investigate this possibility, Table 1.6 reports ordinary least
squares (OLS) estimates of the following equation:
(1.6)    Yi = α + β·EMPgr + γ·Ei + δ·WIAi + ψsjr + Xi′θ + εi
In panel A, Yi is an indicator for receiving any DC benefits. As above, g and s index veteran i's Super
PUMA and state of residence, respectively, at the start of his first deployment between 2003 and 2006, j
indexes year of deployment, and r indexes race (white or non-white). Ei is the combat exposure measure
described above and WIAi is the number of wounds that i sustained over all deployments. Including a fixed
effect for each combination of state, deployment year, and race ensures that any correlation between
employment rates and disability receipt is not confounded by variation in VARO approval procedures or
19 To further investigate this possibility, I also estimated equation (1.5) with company fixed effects, which hardly
change the negative relationship between AFQT scores and casualties. These results are available from the author upon
request.
other factors that potentially vary at the state and year level, such as state-level National Guard policies or
Unemployment Insurance replacement rates.20 Controlling for deployment year is also critical because
veterans who deployed in 2003, for example, had three more years to apply for disability, and potentially
upgrade their ratings, than veterans who deployed in 2006. As with Table 1.5, all outcomes have been
multiplied by 100.
Estimates excluding Ei, WIAi, and Xi, reported in column 1, indicate that a one percentage point increase
in employment rates is associated with a 0.294 percentage point decrease in the probability of receiving any
DC benefits. The estimated effect of employment rates on DC receipt decreases by 23 percent (three-fourths
of a standard error) when AFQT is included as a control (column 2). This is not surprising: veterans from
regions with poor economic conditions are more likely to have low AFQT scores and the estimate on AFQT
is consistent with previous research which has documented that low cognitive ability is a strong predictor of
DC receipt (Autor et al., 2011). The estimated effect of employment rates on DC receipt hardly changes with
the inclusion of year of birth, military occupation, military rank, and education controls (column 3), or with
the inclusion of Ei and WIAi (column 4). With the rich set of controls included in column 4, a standard
deviation decrease in employment rates is associated with a 1.23 percentage point increase in the probability
of receiving any DC benefits, which is roughly 40 percent as large as the effect of a standard deviation
increase in Ei and 4 percent as large as the effect of being wounded on any disability receipt.
Panels B and C of Table 1.6 break down any DC receipt into receipt of any compensation for PTSD and
physical conditions, respectively. The estimates reported in column 2 of both panels indicate that controlling
for AFQT scores attenuates the estimated effect of local employment rates on disability receipt for both
PTSD and physical conditions. Likewise, the estimated effects of employment rates on both outcomes hardly
change with the inclusion of Ej, WIA 1 , and Xj, consistent with estimates reported in panel A. The estimate
reported in column 4 of panel B indicates that a one percentage point increase in local employment rates is
associated with a 0.220 percentage point decrease in the probability of receiving any PTSD benefits. This is
approximately 50 percent larger in magnitude than the estimated effect of employment rates on receipt of any
benefits for physical disabilities, as reported in column 4, panel C. Notably, unit casualty rates, as measured
by Ei, are more strongly correlated with receipt of any PTSD benefits than with receipt of any physical
benefits. A one percentage point increase in Ei is associated with a 0.622 percentage point increase in the
probability of receiving any benefits for PTSD and a 0.290 percentage point increase in the probability of
receiving any benefits for physical conditions. As discussed above, it is not surprising that Ei is more
strongly correlated with PTSD than with physical conditions because higher unit casualty rates should
directly increase a soldier's propensity to develop post-traumatic stress, but only indirectly increase his
propensity to develop physical disabilities. In contrast, the effect of wounds sustained through hostile action,
WIAi, on PTSD receipt is 30 percent smaller than the effect of wounds on receipt of any physical award
(22.52 relative to 32.53), although it is very clear that wounded veterans are substantially more likely to
receive benefits for PTSD and physical conditions than are uninjured veterans.

20 As discussed above, VAROs typically service all veterans in a particular state. All results are similar when I remove
veterans from California, Texas, New York, and Pennsylvania, the four states that have more than one VARO.
21 The mean (standard deviation) of local employment rates and Ei for soldiers in the National Guard sample are 80.73
(5.91) and 2.95 (5.20), respectively.
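The 40-percent benchmark used throughout this chapter can be reproduced directly from the quoted coefficients and standard deviations; a minimal arithmetic sketch (all numbers are the quoted values from Table 1.6, panel B, and footnote 21, not recomputed from the underlying data):

```python
# Standard deviations of local employment rates and the unit casualty
# rate Ei, as reported in footnote 21.
sd_emp, sd_E = 5.91, 5.20

# Column-4 coefficients for receipt of any PTSD award (Table 1.6, panel B),
# in percentage points.
beta_emp, beta_E = -0.220, 0.622

# Effect of a one-standard-deviation decrease in employment rates relative
# to the effect of a one-standard-deviation increase in Ei.
ratio = (-beta_emp * sd_emp) / (beta_E * sd_E)
print(f"{ratio:.2f}")  # 0.40
```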
C. The Effects of Local Employment Rates and Combat Exposure on Disability Receipt: Intensive and
Extensive Margins
The results reported in panels B and C of Table 1.6 suggest that application costs contribute to, but do
not fully explain, the correlation between local economic conditions and receipt of PTSD benefits on a purely
extensive margin. However, the evidence for this interpretation is less than conclusive. On one hand, the
estimated effect of local employment rates on receipt of any PTSD award (column 4, panel B) is within the
margin of error of the estimated effect of local employment rates on receipt of any physical award (column 4,
panel C). On the other hand, PTSD awards receive much higher ratings than awards for physical ailments:
more than half of veterans who receive compensation for physical conditions have a physical disability rating
of 10 or 20, whereas 95 percent of DC beneficiaries with a PTSD award have a PTSD rating of 30 or higher.
Thus, application costs might deter some veterans from applying for physical disabilities that would result in
ratings of 10 or 20, but would not necessarily deter the same veterans from applying for PTSD awards that
would typically result in a rating of 30. Additionally, a purely extensive margin comparison ignores the
possibility that local labor markets might influence disability receipt on the intensive (severity of rating)
margin, and that intensive margin effects could differentially influence PTSD and physical ratings. For
example, if a disability applicant's nonemployment status increases his appearance of occupational
impairment or encourages him to increase his initial disability rating to offset financial hardship, then local
labor markets would exhibit correlation with PTSD ratings on both the extensive and intensive margins.
To investigate the possibility that DC responds to local labor markets on an intensive as well as extensive
margin, Table 1.7 reports OLS estimates of (1.6) where the outcomes are a veteran's CDR (panel A), a
veteran's PTSD rating (panel B), and a veteran's physical rating (panel C). 22 The estimates reported in
column 1, panel A of Table 1.7 indicate that a one percentage point increase in local employment rates is
associated with a 0.22 point decrease in average CDRs. Similar to the pattern exhibited in Table 1.6,
estimates decrease by 23 percent with the inclusion of a control for AFQT (column 2), but remain relatively
stable with the inclusion of additional individual-level covariates (column 3) and with the inclusion of
Ei and WIAi (column 4). Using the estimates reported in column 4, the effect of a standard deviation
decrease in employment rates on a veteran's CDR is 5.7 percent of the mean CDR in the sample. By
contrast, the estimates reported in panel A, Table 1.6 indicate that the effect of the same decrease in
employment rates on DC receipt is 3.8 percent of the percentage of DC beneficiaries within the sample
(= 1.23/32.5 ≈ 3.8). The larger effect of employment rates on CDRs, relative to the effect of employment
rates on receipt of any DC benefits, suggests that local labor markets influence disability receipt on both
the extensive and intensive margins.
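The extensive-margin benchmark works the same way, using only numbers quoted in the text (the 1.23 percentage-point effect of a standard deviation of employment rates and the 32.5 percent of the sample receiving any DC):

```python
# One-SD effect of employment rates on any-DC receipt, in percentage points
# (Table 1.6, panel A, column 4, as quoted in the text).
sd_effect_pp = 1.23

# Percent of the National Guard sample receiving any DC in January 2014.
share_dc = 32.5

# The one-SD effect expressed as a share of the DC-receipt rate.
print(f"{100 * sd_effect_pp / share_dc:.1f}")  # 3.8
```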
The estimates reported in panels B and C of Table 1.7 reveal that PTSD ratings explain most of the
correlation between local labor markets and combined ratings. The estimated effect of local employment
22 CDRs are 0 for veterans who received no DC and are 100 for veterans who receive an IU benefit. Likewise, PTSD
ratings are 0 for veterans who received no compensation for PTSD and are 100 for veterans designated as IU for PTSD,
and physical ratings are 0 for veterans with no physical disabilities and 100 for veterans designated as IU for a physical
condition. Since all estimates include both DC beneficiaries and non-beneficiaries, the results reported in Table 1.7
combine both the extensive (receipt of disability) and intensive (disability rating) margins.
rates on PTSD ratings (column 4, panel B) is almost identical to the estimated effect of local employment
rates on CDRs (column 4, panel A). Comparing the estimated effect of employment rates on PTSD ratings to
the effect of Ei on PTSD ratings also suggests that the intensive margin influence of employment rates is
larger than the intensive margin influence of unit casualty rates. Whereas the effect of a standard deviation
decrease in local employment rates on receipt of any PTSD benefits is 40 percent as large as the effect of a
standard deviation increase in Ei (= (-0.220)(-5.91)/((0.622)(5.20)) ≈ 0.40; column 4, panel B, Table 1.6),
the effect of the same decrease in local employment rates on PTSD ratings is 65 percent as large as the
effect of a standard deviation increase in Ei (column 4, panel B, Table 1.7). Unlike PTSD ratings,
physical ratings exhibit little correlation with local labor markets. The estimated effect of employment rates
on physical ratings is not statistically significant when all controls are included (column 4) and is one-fifth of
the magnitude of the estimated effect of employment rates on PTSD ratings (-0.169 for PTSD ratings, -0.034
for physical ratings). Furthermore, I can rule out an effect of employment rates on physical ratings equal in
magnitude to the effect of employment rates on PTSD ratings.
To further investigate the intensive margin responsiveness of PTSD and physical ratings to local labor
markets, panel A of Table 1.8 reports estimates of equation (1.6) where the outcome indicates receipt of a
PTSD rating above 0 (column 1), or PTSD ratings of at least 30, 50, 70, or 100 (columns 2, 3, 4, and 5,
respectively).23 This follows the analysis of distribution effects described in Angrist (2001). Panel B is
analogous to panel A, except the outcomes are indicators for having physical ratings at or above the rating
cutoff for each column. To make the table easier to read, all outcomes have been multiplied by 100. The
estimates reported in column 2 of panel A indicate that a one percentage point increase in local employment
rates is associated with a 0.21 percentage point decrease in the probability of receiving a PTSD rating of 30
23 I have chosen these rating cutoffs because nearly all mental disabilities result in condition-specific ratings of 10, 30,
50, 70, and 100.
or higher, which is nearly identical to the estimate reported in column 1 of panel A.24 On the other hand, a
one percentage point increase in local employment rates is associated with a 0.075 percentage point decrease
in the probability of receiving a physical disability rating of at least 30 (column 2, panel B), which is
substantially smaller in magnitude than the corresponding estimate for PTSD ratings of at least 30.
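The threshold outcomes used in Table 1.8 are indicators for a rating at or above each cutoff, scaled by 100 so coefficients read as percentage-point effects. A sketch of this construction (the ratings below are hypothetical and the column names illustrative, not the chapter's actual variables):

```python
import pandas as pd

# Hypothetical PTSD ratings for four veterans; 0 means no PTSD award.
df = pd.DataFrame({"ptsd_rating": [0, 30, 70, 100]})

# One indicator per cutoff, multiplied by 100. A cutoff of 1 captures
# "any PTSD rating above 0", matching column 1 of Table 1.8.
for cutoff in (1, 30, 50, 70, 100):
    df[f"ptsd_ge_{cutoff}"] = 100 * (df["ptsd_rating"] >= cutoff).astype(int)

print(df["ptsd_ge_70"].tolist())  # [0, 0, 100, 100]
```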
Importantly, the estimates reported in columns 3 through 5 of Table 1.8 (panel A) suggest that many
veterans who apply for PTSD benefits because of weak local labor markets receive high ratings. A one
percentage point increase in local employment rates is associated with a 0.184 percentage point decrease in
the probability of receiving a PTSD rating of 70 or higher (column 4), which is close to the estimated effect
of local employment rates on receipt of any PTSD benefits (column 1, panel A). Benchmarking the effect of
local employment rates against the effects of Ei and WIAi for different PTSD ratings tells a similar story. The
association between a standard deviation decrease in employment rates and receipt of any PTSD award is 40
percent as large as the effect of a standard deviation increase in Ei (= (-0.220)(-5.91)/((0.622)(5.20)) ≈ 0.40).
However, the association between a standard deviation decrease in employment rates and PTSD awards rated
100 or IU is nearly 160 percent as large as the effect of a standard deviation increase in Ei (= 1.57).
Despite strong correlation between local employment rates and PTSD ratings of 70 or 100, the estimates
reported in panel B indicate no correlation between local employment rates and physical ratings of 50 or more.
Unlike PTSD, ratings for physical conditions depend exclusively on the severity of observable symptoms.
The precisely estimated zero effect of employment rates on physical ratings of at least 50 (column 3, panel
B) therefore suggests that veterans from weak labor markets did not see a substantial deterioration in
physical health following their deployments. It also suggests that high application costs do not deter veterans
from applying for disabilities that are likely to result in CDRs of 50 or higher, although this interpretation
should be made with caution because few veterans have physical ratings of 50 or higher.
24 The similar estimates reported in columns 1 and 2 of panel A, Table 1.8, are not surprising because nearly 95 percent
of veterans who receive disability for PTSD have ratings of 30 or higher.
Appendix Table 1.2 distinguishes between IU status and ratings of 100. Column 1, which reports
estimates of equation (1.6) where the outcome is an indicator for receiving a rating of 100 or IU status for
either PTSD (panel A) or physical conditions (panel B), is identical to column 5 of Table 1.8. Columns 2 and
3 subsequently break down this outcome into IU status and a rating of 100, respectively. The estimates
reported in panel A indicate that local employment rates are correlated with both IU awards for PTSD and
PTSD ratings of 100. PTSD ratings of 100 are more strongly correlated with local employment rates than are
IU awards for PTSD, but this comparison is not very meaningful because, unlike physical disorder ratings,
mental disorder ratings of 100 require "total occupational and social impairment." In other words, both PTSD
IU awards and PTSD ratings of 100 have a subjective, occupational-impairment component. The estimates
reported in panel B reveal that local employment rates are neither correlated with IU status for physical
conditions nor physical ratings of 100.
Interestingly, ratings constructed from all conditions other than PTSD (not just physical conditions)
exhibit little correlation with local employment rates, particularly on the intensive margin. This can be seen
in panel C of Appendix Table 1.3, which reports estimates of equation (1.6) where the outcome is an
indicator for receiving any non-PTSD disability (column 1), or non-PTSD ratings of at least 30, 50, 70, or
100 (columns 2, 3, 4, and 5, respectively). A veteran's non-PTSD rating represents his CDR after excluding
any rating for PTSD. To facilitate comparison, panels A and B of Appendix Table 1.3 reproduce estimates of
the effect of local employment rates on PTSD and physical outcomes, respectively. All estimates reported in
panel C are very close to the estimated effects of employment rates on physical disabilities, as reported in
panel B. Panel D reports estimates where the outcomes are based on ratings from mental disorders other than
PTSD while panel E reports estimates where the outcome is tinnitus, which is also not included in physical
disability ratings, as discussed above. Since all diagnoses for tinnitus result in a rating of 10, no estimates are
reported in columns 2 through 5 of panel E.
Notably, the results reported in panel D of Appendix Table 1.3 do not indicate substantial correlation
between employment rates and mental disorders for conditions other than PTSD. These results appear to
contradict the assumption that screening for mental conditions is less precise than screening for physical
conditions. However, the diagnostic criteria for PTSD are broader, and therefore less precise, than the
diagnostic criteria for other anxiety disorders, a point discussed in Brewin, Lanius, Novac, Schnyder, and
Galea (2009). Moreover, the key diagnostic criterion associated with PTSD, exposure to a traumatic event,
could lead to over-diagnosis in situations where a traumatic event occurred, as discussed in Maercker et al.
(2013).
Taken together, I interpret the results from this section as evidence that imprecise screening and the
employment criteria associated with PTSD benefits, rather than application costs, drive most of the
correlation between local labor markets and PTSD receipt. This is not to say that application costs are not
present. A plausible interpretation for the extensive margin correlation between employment rates and
physical disabilities, and the lack of correlation when both the extensive and the intensive margins are
combined, is that application costs deter some veterans in strong labor markets from applying for physical
disabilities that are likely to result in low ratings, but not physical conditions that are likely to result in high
ratings. Another explanation is that screening procedures for low-rated physical disabilities are imprecise.
The latter explanation is a distinct possibility given that nearly 50 percent of veterans in my sample with any
physical disability receive benefits for back pain, knee pain, or migraines, three conditions that
predominantly rely on self-reported symptoms of pain.
In contrast to physical conditions, PTSD awards not only exhibit strong correlation with local
employment rates on the extensive margin, but much of this correlation is concentrated in PTSD awards
rated 70 or 100, the ratings that require the most evidence of social and occupational impairment. This result
is consistent with the hypothesis that the employment criteria associated with the mental disorder rating
schedule encourages many veterans from weak labor markets to seek high PTSD ratings. Additionally, the
correlation between local employment rates and PTSD awards rated 70 or higher (column 4, panel A, Table
1.8) is significantly larger than the association between local employment rates and physical ratings of 30 or
higher (column 2, panel B, Table 1.8), even though the incidence of physical ratings of 30 or higher is greater
than the incidence of PTSD ratings of 70 or higher. This last point suggests that application costs do not fully
explain the relationship between local economic conditions and PTSD. It is also worth noting that veterans
with PTSD ratings of 70 or higher receive a lifetime stream of between $1,300 and $3,200 in monthly
benefits, a very substantial sum. Overall, it seems unlikely that the time costs associated with applying for
DC drive the correlation between local employment rates and high PTSD ratings, but could potentially
explain the relationship between local employment rates and physical conditions.
1.6
Can Health Factors Explain the Correlation Between PTSD and Local Economic Conditions?
It is natural to ask whether exposure to the same traumatic experience is more likely to cause post-
traumatic stress in veterans from weak labor markets relative to veterans from strong labor markets, which
could potentially explain the results in the preceding section. Financial hardship or the lack of a social
support structure due to joblessness could exacerbate stress associated with a traumatic event, which is
consistent with evidence that PTSD symptoms worsen when individuals stop working (Schnurr, Lunney,
Sengupta, and Spiro III, 2005, Frueh, Grubaugh, Elhai, and Buckley, 2007).
If poor economic conditions aggravate symptoms of PTSD, then the effect of combat exposure should be
larger for veterans from hometowns with lower employment rates than for veterans from hometowns with
higher employment rates. I investigate this possibility by estimating the following equation:
(1.7)
Yi = α + βEMPgr + γEi + δ(EMPgr × Ei) + θsjr + Xi'ψ + εi
Column 2 of Table 1.9 reports estimates of (1.7) where the outcome is a veteran's PTSD rating. The
small, imprecise estimate of δ does not indicate an interaction between combat exposure and local
employment rates.
Since estimates of the interaction term in (1.7) attribute any differences in the effects of
combat exposure by race, deployment year, or geographic region to differences in predicted employment
rates, column 3 reports estimates of the following equation:
(1.8)
Yi = α + γEi + δ(EMPgr × Ei) + ρgjr + Xi'ψ + εi
25 Column 1 of Table 1.9 reports estimates of (1.7), but excluding the interaction term. EMPgr has been demeaned for all
estimates reported in Table 1.9.
The main effect of EMPgr is excluded from (1.8) because it is absorbed in the (Super PUMA) by
(deployment year) by (race) fixed effects, which replace the (state) by (deployment year) by (race) fixed
effects in (1.7). Estimates of (1.8) still do not indicate a significant interaction between employment rates and
combat exposure. Columns 4 through 6 repeat columns 1 through 3 but replace Ei with WIAi to investigate
whether wounded veterans from areas with relatively poor economic conditions obtain higher PTSD ratings
than do wounded veterans from strong labor markets. The point estimates reported in column 6 indicate that
the effect of being wounded on PTSD ratings is 9 percent smaller for veterans from regions with an
employment rate that is a standard deviation above the mean employment rate (13.89 - 0.22 x 5.9 = 12.59,
which is 9 percent smaller than 13.89), although this estimate is not statistically significant. Overall, the
estimates in Table 1.9 argue against the interpretation that veterans from regions with poor economic
conditions are more psychologically vulnerable to combat exposure than veterans from regions with strong
economic conditions.
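Equation (1.7) is an interaction specification of the standard form. As a sketch of how such a model could be estimated, the snippet below uses statsmodels on simulated data; the variable names and the single categorical stand-in for the state-by-deployment-year-by-race fixed effects are illustrative assumptions, not the chapter's actual data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "emp": rng.normal(80.7, 5.9, n),   # hometown employment rate, EMPgr
    "E": rng.exponential(3.0, n),      # unit casualty rate, Ei
    "cell": rng.integers(0, 10, n),    # stand-in for state x year x race cells
})
# Simulated outcome with no true interaction between emp and E.
df["y"] = 20 - 0.2 * df["emp"] + 0.6 * df["E"] + rng.normal(0, 10, n)

# Demean the employment rate, as in footnote 25, so the main effect of E
# is evaluated at the mean employment rate.
df["emp_dm"] = df["emp"] - df["emp"].mean()

# "emp_dm * E" expands to both main effects plus the interaction emp_dm:E,
# whose coefficient plays the role of delta in (1.7).
fit = smf.ols("y ~ emp_dm * E + C(cell)", data=df).fit()
print(fit.params[["emp_dm", "E", "emp_dm:E"]].round(2))
```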
Another potential explanation for the results in the previous section is that low employment rates are
correlated with factors that negatively influence mental health. Ideally, my data would contain a precise
measure of veteran mental health that is independent of the VA disability program. Absent this, I investigate
whether there is a relationship between local economic conditions and diagnoses from DoD disability exams.
The DoD Disability Evaluation System follows the same rating schedule as the VA's DC program, but
screening standards for DoD disability are substantially more stringent. A soldier is not eligible to apply for
DoD disability unless his military commander or a medical provider refers him to a Medical Evaluation
Board (MEB). Furthermore, DoD disability is usually only granted for conditions that make a soldier unfit
for service in a combat zone. Whereas 32.5 percent of veterans in the National Guard sample received DC
for any condition in January 2014, only 2.5 percent of the sample received DoD disability. On the other
hand, over 90 percent of veterans who received DoD disability also received DC.
Column 1 of panel A, Table 1.10, reports estimates of (1.6) where the outcome is 100 for veterans who
receive any DoD disability pay and 0 otherwise. In column 2 the outcome indicates receipt of DoD disability
for PTSD and in column 3 the outcome indicates receipt of DoD disability for any physical condition. The
results reported in all three columns indicate no relationship between local employment rates and any of the
three DoD disability outcomes. Thus, while employment rates appear to predict a VA diagnosis of PTSD,
they do not appear to predict a DoD diagnosis of PTSD that is based on the stricter screening standard.
On the other hand, wounded veterans are substantially more likely to receive DoD disability for PTSD and
physical conditions, a result that is consistent with the results reported in Table 1.6. There is also a positive,
albeit statistically insignificant, relationship between Ei and receipt of DoD disability for PTSD. Still, the
effects of wounds and combat exposure on DoD disability receipt are much smaller in magnitude than the
effects of wounds and combat exposure on DC receipt, reflecting the more stringent screening standards for
DoD disability relative to those for DC.
To the extent that mortality is a downstream consequence of poor mental health, the absence of an
association between local employment rates and mortality further argues against the hypothesis that local
employment rates are negatively correlated with factors that aggravate mental health. This result is from
column 4 of panel A, Table 1.10, which reports estimates of (1.6) where the left-hand side variable is 100 if a
veteran was no longer alive as of May 2012 and 0 otherwise. The positive point estimate actually indicates a
positive association between employment rates and mortality, although this estimate is only marginally
significant.
The low incidence of DoD disability and mortality makes these measures imperfect reflections of health.
I therefore constructed geographic measures of self-reported health using the sample of post-9/11 veterans
born in 1974 or later within the 2009-2011 ACS. As an example, in column 1, panel B of Table 1.10, the
outcome for veteran i who lives in Super PUMA g and is of race r is the percentage of post-9/11 veterans in
the ACS who live in Super PUMA g, are of race r, and who self-report having any disability for a condition
other than hearing or eyesight. If veterans from weak labor markets have less access to quality health care
than do veterans from strong labor markets, then we would expect to see a negative correlation between local
employment rates and these measures of self-reported health. However, it is important to emphasize that the
group of post-9/11 veterans in the ACS does not perfectly represent National Guard veterans in my sample
because the ACS also includes veterans who served in the Active Duty Army, the Army Reserve, or other
military branches. Thus, any correlation between local economic conditions and the self-reported health of
veterans in the ACS could also reflect differences in demographics or military experiences between National
Guard soldiers and members of other services.26
The point estimate of -0.10 in column 1, panel B is marginally significant but still less than half the
magnitude of the point estimate reported in column 4 of Table 1.10 (panel A). In other words, veterans from
weak labor markets might be more likely to self-report disabilities than veterans from strong labor markets,
but the magnitude of this relationship is not likely to explain the correlation between local labor markets and
DC receipt within my National Guard sample. Column 2 of panel B is analogous to column 1, except the
outcome in column 2 is the percentage of post-9/11 veterans in the ACS, for each combination of race and
Super PUMA, who report having cognitive difficulty due to physical, mental, or emotional conditions.27 The
point estimate reported in column 2, -0.050, is insignificant and less than one-fourth the magnitude of the
correlation between employment rates and PTSD awards, as reported in panel B of Table 1.6. This result
does not suggest that veterans from weak labor markets are substantially more likely to have cognitive
difficulty than veterans from strong labor markets. Furthermore, the point estimate reported in column 3,
where the outcome is the percentage of post-9/11 veterans in the ACS who report having a disability that
substantially limits a basic physical activity, is similar in magnitude to the point estimate reported in column
2 (-0.050 in column 2 and -0.055 in column 3). Thus, it does not appear that labor market conditions are
correlated with factors that influence mental health but not physical health.
As a final check, I investigate whether the correlation between local employment rates and PTSD ratings
is due to post-deployment military outcomes that influence disability receipt. Although National Guard
veterans may receive DC while not performing an active duty mission, Guard veterans are still more likely to
26 I used the 2009-2011 ACS to construct these indexes because the ACS did not track self-reported disability prior to
2009. 2011 was also the last year the ACS used the Super PUMA as a geographic unit. I also do not report estimates of
Ei and WIAi in panel B because geographic variation in combat exposure among National Guard veterans is not likely
to represent the same variation among all post-9/11 veterans.
27 Cognitive difficulty is a common symptom of post-traumatic stress (American Psychiatric Association, 1994).
receive disability upon exiting the military.28 Estimates of (1.6), where the left-hand side variable equals 100
for veterans who left the military prior to June 2013 and 0 otherwise, are reported in column 1 of Appendix
Table 1.4. These estimates do not indicate that local employment rates are correlated with attrition from the
military. Column 2 of the same table also indicates no significant correlation between local employment rates
and the propensity for soldiers to deploy more than once.
The results from this section do not suggest that health factors drive the correlation between local
economic conditions and PTSD benefits. Similarly, the lack of correlation between employment rates and
battlefield casualties (Table 1.5) argues against the hypothesis that the direct effects of combat exposure
explain the relationship between local employment rates and PTSD benefits. An alternative explanation is
that the large monetary value of PTSD benefits induces many veterans with poor employment prospects to
file claims for PTSD. As discussed above, the correlation between local employment rates and PTSD
benefits could also be the result of medical examiners or RVSRs who incorrectly interpret a veteran's
employment status to be the direct consequence of post-traumatic stress. Similarly, medical examiners and
RVSRs might produce reports that are more favorable to jobless disability applicants because higher CDRs
result in higher priority access to VA healthcare. Unfortunately, since my data does not contain information
on claims that do not result in disability ratings, I cannot distinguish between behavior on the part of veterans
and behavior on the part of examiners.
1.7
Conclusion
Disability benefits for PTSD are highly correlated with labor market conditions in a veteran's hometown
while equally large benefits for physical disabilities are unrelated to the same conditions. This pattern seems
unlikely to reflect elevated health risks for National Guard soldiers from regions with weak labor markets for
several reasons. First, injuries sustained in combat are not correlated with labor market conditions. Second,
veterans from regions with low employment-to-population ratios were no more likely to receive DoD PTSD
28 11 percent of veterans who were still in the military as of June 2013 received payments for PTSD, while 23 percent of
veterans who had left the military prior to June 2013 received benefits for PTSD. June 2013 is the last month for which
I have a military record on file.
diagnoses, nor were they more likely to be deceased by 2012, than veterans from regions with high
employment-to-population ratios. Third, the effects of combat exposure appear unrelated to local economic
conditions, a result that weighs against the hypothesis that weak labor markets exacerbate symptoms of post-traumatic stress. These findings suggest that the relative attractiveness of DC when job opportunities are
scarce contributes to the unusually high incidence of PTSD awards among recent veterans. At the same time,
these results are also consistent with the possibility that medical examiners or VA ratings adjudicators
incorrectly interpret joblessness to be a consequence of post-traumatic stress.
The thrust of contemporary policy has been to expand benefits for PTSD. In July 2010, President Obama
announced that the VA would ease the evidentiary standards required for veterans to receive compensation
for PTSD (White House, 2010). Since veterans who were discharged from the military under other than
honorable circumstances are not eligible for VA benefits, the Secretary of Defense recently simplified the
application process for veterans seeking to upgrade their discharge status "based on claims of previously
unrecognized post-traumatic stress" (Hagel, 2014). In response to statistics showing low earnings among
those qualified for DC by virtue of PTSD, the Congressional Budget Office recommended supplementing
disability payments to veterans with mental disorders (Congressional Budget Office, 2014). The results
presented here suggest that low earnings among PTSD beneficiaries are not entirely due to health, but rather
reflect a combination of generous benefits and an implicit tax on earnings resulting from the determination of
mental disorder ratings and IU status based on the appearance of occupational impairment.
A key result from my investigation is that much of the correlation between PTSD and weak labor
markets is concentrated in PTSD awards rated 70 or higher, and thus eligible for IU status. The potential for
larger payments could encourage many veterans to apply for increases in their PTSD ratings, a possibility
that is consistent with the substantial CDR growth seen in Vietnam-era veterans immediately after they enroll
in DC (Autor et al., 2015a). Likewise, the financial incentives associated with DC could act as disincentives
for rehabilitation. There is evidence that disability status predicts treatment withdrawal among OIF and OEF
veterans diagnosed with PTSD (Gros et al., 2013). Earlier studies indicate that treatment for PTSD is less
effective for veterans than for other trauma victims (Bradley et al., 2005). Similarly, a 2005 investigation by
the VA Office of the Inspector General reveals that attendance at VA behavioral health treatment sessions
decreased by 80 percent when veterans received the maximum level of DC payments.
For many veterans who receive PTSD benefits, labor market reintegration could prove even more
difficult than psychological rehabilitation. Recent research on SSDI applicants indicates that the probability
of future employment declines by 0.4 to 0.5 percentage points each month that an applicant waits for a
decision, during which time he is not allowed to work (Autor, Maestas, Mullen, and Strand, 2015b). There is
also substantial evidence that employers discriminate against job applicants with long unemployment spells
(Eriksson and Rooth, 2014, Kroft, Lange, and Notowidigdo, 2013, Oberholzer-Gee, 2008). This presents a
difficult challenge for the VA. On one hand, DC benefits contribute to an improved quality of life for those
who indeed struggle in the labor market. On the other hand, the strong anti-work incentives inherent in the
DC program, especially for PTSD, could have long-term repercussions on the health and well-being of post-9/11 veterans.
1.8 Chapter 1 Appendix

A. Casualty Information
When a company suspects that a soldier has been either killed or wounded, it is required to fill out a
Casualty Feeder Card (DA Form 1156) in accordance with AR 600-8-1. The Casualty Feeder Card requires
units to indicate when and where a casualty occurred, whether a casualty was the result of hostile or non-hostile action, and whether the casualty is deceased or injured. They then forward this information to their
battalion's personnel shop, which subsequently puts this information into DCIPS. 29 Casualties who are
declared deceased as a result of hostile action are reported as Killed in Action (KIA) and those who are
declared wounded as a result of hostile action are reported as Wounded in Action (WIA). Battalion personnel
shops will initiate a casualty report in DCIPS for any soldiers who are involved in a hostile action and
subsequently determined by medical personnel to have been injured or killed. Likewise, battalion personnel
shops will update DCIPS entries for any soldiers initially entered into the database as WIA or KIA, but who
are subsequently determined either not to have sustained injuries or not to have been injured or killed as the
result of hostile action.
I implicitly assume soldiers cannot manipulate the casualty process in any systematic way. This
assumption might be problematic because the Casualty Feeder Card, the form used to initially label a soldier
as wounded before he or she is entered into DCIPS, is one of several pieces of information required to
receive a Purple Heart Award. The Purple Heart does not come with any monetary awards but is often
viewed with great pride in military and civilian circles. Some might argue that soldiers with lower potential
earnings might have more of an incentive to be labeled as wounded, which might explain why soldiers with
lower AFQT scores suffer more wounds. I do not believe this potential bias presents a major problem, for
three reasons. First, AFQT scores are negatively related not only to WIA but also to KIA, an outcome that cannot be manipulated. Second, the military has an incentive to tally accurate casualty records because the DCIPS database is
"used by DoD organizations, external government agencies, both houses of Congress, the President, the news
media, and the general public...to understand trends in casualties as they relate to terrain, advances in medicine, the advent of better technology that has enhanced the safety of the war fighter, or the challenges brought about by new threats" (DMDC Defense Casualty Analysis System website). Third, medical personnel and military command teams are wary of soldiers who might attempt to fake an injury for personal gain. It seems reasonable to assume any attempts to falsify casualty reports are not widespread enough to significantly bias my results.

29 A battalion is the higher headquarters for a company. Most battalions consist of 3 to 5 companies.
B. Constructing Local Employment Rates with the ACS
I construct local employment rates using the 2005-2009 five-year American Community Survey (ACS). I
first limited the ACS to men born between 1974 and 1986. I excluded individuals who were not in the labor
force, who belonged to institutional group quarters, who served in the active duty military at the time of the
survey, and who had served in the military at any point after September 2001. I then grouped individuals into
cells of race (non-Hispanic whites and non-whites) and Super Public Use Microdata Area (PUMA). To
construct local employment rates, I divided the number of employed persons by the total number of persons
in each cell, weighting observations according to the ACS "perwt" variable.
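This cell-level calculation is a weighted mean per (race) x (Super PUMA) cell. A minimal sketch, where the record fields `race`, `super_puma`, and `employed` are hypothetical stand-ins for the actual ACS variables (only the `perwt` weight is named above):

```python
from collections import defaultdict

def employment_rates(records):
    """Weighted employment-to-population ratio, in percent, for each
    (race, Super PUMA) cell. Field names other than 'perwt' are
    illustrative stand-ins for the actual ACS variable names."""
    employed = defaultdict(float)
    total = defaultdict(float)
    for r in records:
        cell = (r["race"], r["super_puma"])
        total[cell] += r["perwt"]                     # weighted persons in cell
        employed[cell] += r["perwt"] * r["employed"]  # weighted employed persons
    return {cell: 100.0 * employed[cell] / total[cell] for cell in total}
```

For instance, a cell whose weighted counts are 6 employed out of 8 persons yields a rate of 75.0.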
After constructing local employment rates for each combination of (race)x(Super PUMA) within the
2005-2009 ACS, I assigned an employment rate to everyone in the National Guard sample. I defined a
soldier's Super PUMA based on the zip code of his mailing address at the start of his first mobilization. I
mapped each five-digit zip code to a specific Super PUMA using the zip-code-to-PUMA crosswalk file
(based on year 2000 geographic boundaries) provided by the Geographic Correspondence Engine of the
Missouri Census Data Center (http://mcdc2.missouri.edu/websas/geocorr2k.html). Each PUMA belongs to
only one Super PUMA. For zip codes that span two or more PUMAs, I assigned each zip code to the PUMA
with the largest allocation factor, where an allocation factor is the portion of each zip code that corresponds
to each PUMA. Half of zip codes that span multiple PUMAs have maximum allocation factors above 90
percent, meaning 90 percent or more of such zip codes belong to one PUMA, and 98 percent of zip codes
that span multiple PUMAs have maximum allocation factors above 50 percent.
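The largest-allocation-factor rule can be sketched as follows; the triple layout is an assumption for illustration, not the exact format of the geocorr crosswalk file:

```python
def zip_to_puma(crosswalk_rows):
    """Resolve each zip code to the single PUMA with the largest
    allocation factor. Rows are (zip, puma, alloc_factor) triples, where
    alloc_factor is the portion of the zip code falling in that PUMA."""
    best = {}
    for zip_code, puma, factor in crosswalk_rows:
        # Keep the PUMA with the highest allocation factor seen so far
        if zip_code not in best or factor > best[zip_code][1]:
            best[zip_code] = (puma, factor)
    return {z: p for z, (p, _) in best.items()}
```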
About 2 percent of soldiers in the National Guard sample lived in zip codes that did not map to a Super
PUMA. I assigned these soldiers to the Super PUMA corresponding to the next zip code in numerical order, provided that the next zip code shares the same first three digits as the original. For example, if veterans from zip code 11101 did not match to a Super PUMA, then I mapped them to the Super PUMA corresponding to zip code 11102. However, if veterans from zip code 11199 did not match to a Super PUMA, then they were excluded from my final sample. Overall, I was able to assign all but seven soldiers to
a Super PUMA.
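The fallback rule above can be sketched as a small lookup, assuming a hypothetical five-digit-zip to Super PUMA dictionary:

```python
def assign_super_puma(zip_code, spuma_of_zip):
    """Return the Super PUMA for a zip code, falling back to the next zip
    code in numerical order when it shares the same three-digit prefix;
    return None (soldier dropped) when no valid fallback exists."""
    if zip_code in spuma_of_zip:
        return spuma_of_zip[zip_code]
    next_zip = str(int(zip_code) + 1).zfill(5)
    if next_zip[:3] == zip_code[:3]:
        return spuma_of_zip.get(next_zip)  # may still be unmatched -> None
    return None
```

Under this rule, an unmatched 11101 inherits 11102's Super PUMA, while an unmatched 11199 rolls over to 11200, fails the three-digit check, and is excluded.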
C. Sample Selection Details
My original sample consists of 99,010 National Guard soldiers who enlisted in the National Guard
between fiscal years 1993 and 2006 and who deployed to a combat zone (mostly Iraq and Afghanistan)
between January 2003 and December 2006.30 I first remove 9,869 female soldiers, then 25,599 soldiers with prior military experience. Soldiers with prior military service are older than non-prior-service soldiers, have different work experiences, and have different incentives for serving in the National Guard.31 I then remove 1,952 soldiers who deployed to a combat zone more than ten years after their initial entry into the National Guard. Soldiers who serve beyond ten years have reenlisted at least twice and are likely to serve until they qualify for retirement benefits. I further exclude an additional 1,128 soldiers who have missing AFQT information, missing education information, or who enlisted before 17 years of age or after 35 years of age.32 Since I exploit within-state variation in employment rates at the Super PUMA level, I remove 1,667 veterans from U.S. territories, 7 veterans who had zip codes that I could not match to a particular Super PUMA, and 2,679 veterans from Alaska, North Dakota, South Dakota, Wyoming, Vermont, or Washington, D.C., all of which have only one Super PUMA. I finally remove 176 veterans who were the only soldiers in their company during this first deployment.33 My final sample consists of 55,936 enlisted National Guard soldiers.

30 This number is comparable to a 2010 RAND study that used data managed under the Office of the Assistant Secretary of Defense for Reserve Affairs to find that 168,213 Army National Guard soldiers were mobilized under Federal Title 10 authority between September 2001 and May 2007 (Bonds et al., 2010, p. 8). The RAND number is larger than the number of soldiers in my original sample because it includes officers, soldiers who joined the National Guard before October 1992, soldiers who mobilized in support of operations conducted within the Continental U.S. (these mobilizations do not constitute a combat deployment within my data), and a few soldiers who deployed to combat zones before 2003 or between January and May 2007.

31 Prior service soldiers can join the National Guard as a way to finish out their eight-year military commitment (at least four years of which must be on Active Duty) while still receiving heavily discounted health care through TRICARE.

32 National Guard soldiers are not allowed to enlist in the military before turning 17 or after turning 35. The few in my sample who did were either granted special waivers or had incorrectly coded years of birth.

33 Above I describe how I use casualty rates for other soldiers in the same company to construct a measure of combat exposure. Soldiers who are the only member of a company will mechanically have no value for this measure. It makes sense to exclude them from my analysis because these rare instances usually only occur for soldiers assigned to higher headquarters staffs.
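The sequential exclusions above can be summarized as a filter chain. Every field name here is a hypothetical stand-in for the actual administrative variables:

```python
def select_sample(soldiers):
    """Apply the sample restrictions in the order described above.
    Each soldier is a dict with illustrative fields; a record must pass
    every predicate to remain in the final sample."""
    steps = [
        lambda s: not s["female"],               # drop female soldiers
        lambda s: not s["prior_service"],        # drop prior-service soldiers
        lambda s: s["years_to_deploy"] <= 10,    # drop deployments >10 yrs after entry
        lambda s: s["valid_afqt_educ_age"],      # drop missing AFQT/education, age <17 or >35
        lambda s: not s["territory"],            # drop U.S. territories
        lambda s: s["super_puma"] is not None,   # drop unmatched zip codes
        lambda s: not s["single_spuma_state"],   # drop one-Super-PUMA states and D.C.
        lambda s: s["company_size"] > 1,         # drop sole members of a company
    ]
    kept = soldiers
    for keep in steps:
        kept = [s for s in kept if keep(s)]
    return kept
```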
Figure 1.1. Any PTSD and Any Physical Disability by Local Employment Rates

[Figure: two scatter plots. Panel A: Percentage with any DC for PTSD; Panel B: Percentage with any DC for physical conditions. Horizontal axis in both panels: Local Employment Rate, 68 to 90.]

Notes: This figure plots the percentage of veterans in the National Guard sample who receive DC for PTSD (panel A) and the percentage who receive DC for physical conditions (panel B) by local employment rates.
Figure 1.2. Mean PTSD Rating and Mean Physical Rating by Local Employment Rates

[Figure: two scatter plots. Panel A: PTSD Rating; Panel B: Physical Rating. Horizontal axis in both panels: Local Employment Rate, 68 to 90.]

Notes: This figure plots mean PTSD ratings (panel A) and mean physical ratings (panel B) by local employment rates for veterans in the National Guard sample.
Figure 1.3. High PTSD Ratings and High Physical Ratings by Local Employment Rates

[Figure: two scatter plots. Panel A: Percentage with PTSD Rating>60; Panel B: Percentage with Physical Rating>60. Horizontal axis in both panels: Local Employment Rate, 68 to 90.]

Notes: This figure plots the percentage of veterans in the National Guard sample with a PTSD rating above 60 (panel A) and the percentage with a physical rating above 60 (panel B) by local employment rates.
Table 1.1. DC Receipt and Casualty Rates by Wartime Era

                            World War II    Korea    Vietnam    Post-9/11
                                 (1)         (2)       (3)         (4)
A. DC Receipt as of 2013
Any DC Receipt                  0.120       0.070     0.171       0.310
PTSD Benefits                   0.013       0.006     0.047       0.084
B. Casualty Rates
Wounded in Action (WIA)         0.042       0.018     0.035       0.014
Killed in Action (KIA)          0.0181      0.0059    0.0054      0.0015

Notes: This table reports DC receipt and casualty rates by wartime era. In panel A, DC receipt rates are the number of veterans from a wartime era who receive DC divided by the total number of veterans from that wartime era. In panel B, casualty rates are the number of casualties divided by the total number of persons who served during each wartime era. The number of veterans from each wartime era who receive DC or specifically PTSD benefits is from the Veterans Benefits Administration 2013 Annual Report. The number of casualties during WWII, Korea, and Vietnam come from Leland and Oboroceanu (2010). The number of casualties for post-9/11 veterans is available at http://www.defense.gov/news/casualty.pdf (accessed 12 February 2015). The number of veterans from each wartime era is from the 2014 Veteran Population Projection Model, available at http://www.va.gov/vetdata/Veteran-population.asp. The denominator for post-9/11 DC rates excludes 1.4 million members who were serving in the military in September 2012, which comes from Defense Manpower Data Center Historical Reports.
Table 1.2. Disability Rating Distributions for Most Common Body Systems

                    0 Rating   10-20 Rating   30-40 Rating   50-60 Rating   70-100 Rating
Body System            (1)          (2)            (3)            (4)             (5)
Musculoskeletal       0.276        0.644          0.062          0.011           0.006
Auditory              0.340        0.605          0.029          0.012           0.013
Skin                  0.732        0.228          0.031          0.008           0.001
Neurological          0.169        0.651          0.141          0.012           0.013
Mental                0.025        0.113          0.269          0.214           0.359

Notes: This table reports the share of disabilities that fall within the specified rating range for the five most common body systems with disabilities. The data is from the VBA 2013 Annual Report. DC recipients from all wartime eras are included in the reported statistics because VA Annual Benefits Reports do not distinguish these statistics by wartime era.
Table 1.3. Employment Rate by Combined Disability Rating - Male, Post-9/11 Veterans

                    No DC   10-20 Rating   30-40 Rating   50-60 Rating   70-100 Rating
                     (1)         (2)            (3)            (4)             (5)
Share Employed      0.828       0.834          0.761          0.701           0.424

Notes: This table reports the share of employed veterans by Combined Disability Rating (CDR) for male post-9/11 DC recipients who were born between 1974 and 1986, based on data from the 2013 American Community Survey. It excludes 130 veterans with no reported CDR.
Table 1.4. Summary Statistics

                                      All      No DC     DC Receipt   DC Receipt   CDR=100 or
                                              Receipt    (No PTSD)      (PTSD)     IU Benefit
                                      (1)       (2)         (3)          (4)          (5)
A. Demographic Variables
Age in 2014                          33.3      33.1        33.6         33.8         34.7
                                     [4.1]     [4.3]       [3.9]        [4.4]        [5.0]
AFQT                                 59.0      60.4        59.0         54.2         51.4
                                    [19.5]    [19.6]      [19.5]       [18.2]       [17.3]
White (Non-Hispanic)                 0.805     0.805       0.828        0.787        0.776
GED or High School Dropout           0.172     0.161       0.154        0.221        0.302
High School Diploma                  0.713     0.721       0.711        0.686        0.625
Some College                         0.095     0.097       0.110        0.079        0.061
Baccalaureate Degree                 0.020     0.021       0.025        0.014        0.012
Combat Occupation                    0.431     0.402       0.466        0.511        0.493
Military Rank PFC or Below           0.295     0.283       0.292        0.339        0.379
Noncommissioned Officer              0.166     0.174       0.173        0.132        0.110
B. Casualty and Mortality Rates
Wounded in Action (WIA)              0.033     0.014       0.050        0.088        0.128
Killed in Action (KIA)               0.003     0.004       0            0            0
Deceased by May 2012                 0.008     0.012       0            0            0
C. Disability Compensation Variables
Any DC Receipt                       0.325     0           1            1            1
Any PTSD Benefits                    0.186     0           0            1            0.866
PTSD Is Primary Condition            0.172     0           0            0.928        0.808
IU Benefit                           0.015     0           0.012        0.071        0.407
Combined Disability Rating           16.2      0           31.0         63.9         91.7
                                    [28.1]                [23.5]       [21.6]       [11.4]
PTSD Rating                          10.0      0           0            53.6         82.1
                                    [23.6]                             [25.5]       [35.5]
Physical Rating                      7.0       0           20.0         22.6         39.5
                                    [16.8]                [21.4]       [25.0]       [34.9]
Mean Monthly DC Payment              321       0           529          1334         2858
                                    [655]                 [590]        [788]        [0]
Observations                        55,936    37,779      7,777        10,380       2,049

Notes: This table reports descriptive statistics for veterans in the National Guard sample. In panel A, a veteran's education level, occupation, and military rank are determined at the start of his first deployment to a combat zone between 2003 and 2006. In panel C, DC receipt information is as of January 2014. A veteran's primary condition is the condition for which he receives the highest disability rating. A veteran's PTSD rating is 0 if he receives no benefits for PTSD and 100 if he receives an IU award for PTSD. A veteran's physical rating is the CDR he would receive excluding any ratings for mental health conditions and tinnitus. Veterans who receive an IU award for a physical condition are given a physical rating of 100 and veterans with no physical conditions are given a physical rating of 0. Mean monthly DC payments represent the payment a veteran would receive in 2014 based on his combined disability rating and assuming no dependents. Veterans with no disability awards are given a mean monthly DC payment of 0. Standard deviations are reported in brackets.
Table 1.5. Relationship Between Local Employment Rates and Casualties

                                   (1)         (2)         (3)         (4)
A. Dependent Variable: 100 x All Casualties (KIA + WIA)
Local Employment Rate             0.024       0.013       0.007       0.019
                                 (0.026)     (0.027)     (0.027)     (0.023)
AFQT                                         -0.025***   -0.021***   -0.021***
                                             (0.004)     (0.004)     (0.004)
Combat Exposure (E_i)                                                 0.605***
                                                                     (0.031)
Outcome Mean                      3.59        3.59        3.59        3.59
B. Dependent Variable: 100 x 1[Killed in Action]
Local Employment Rate             0.0104*     0.0097      0.0089      0.0105*
                                 (0.0061)    (0.0061)    (0.0061)    (0.0062)
AFQT                                         -0.0028***  -0.0025**   -0.0025**
                                             (0.0011)    (0.0012)    (0.0012)
Combat Exposure (E_i)                                                -0.0103**
                                                                     (0.0052)
Outcome Mean                      0.28        0.28        0.28        0.28
C. Dependent Variable: 100 x Wounded in Action
Local Employment Rate             0.014       0.003      -0.002       0.009
                                 (0.024)     (0.025)     (0.025)     (0.022)
AFQT                                         -0.022***   -0.018***   -0.019***
                                             (0.004)     (0.004)     (0.004)
Combat Exposure (E_i)                                                 0.616***
                                                                     (0.030)
Outcome Mean                      3.31        3.31        3.31        3.31
Individual Controls                                         X           X
Clusters                          1,048       1,048       1,048       1,048
Observations                     55,936      55,936      55,936      55,936

Notes: This table reports estimates of the relationship between local employment rates and all casualties sustained by hostile action, fatal casualties, and non-fatal casualties for veterans in the National Guard sample. The dependent variable in each regression is 100 times the number of casualties sustained over all deployments after 2002. The local employment rate is the employment-to-population ratio for non-veteran men born between 1974 and 1986 by race (white or non-white), within each Super PUMA of the 2005-2009 ACS. All regressions include a fixed effect for each combination of (state) x (deployment year) x (race). Individual controls (columns 3 and 4) include year-of-birth indicators, military occupation indicators, military rank indicators, and education indicators. A veteran's state and all individual controls are determined at the start of the veteran's first deployment. Standard errors are clustered on each combination of (Super PUMA) x (race). ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Table 1.6. Impact of Local Employment Rates on any DC, any PTSD, and any Physical Disability

                                   (1)         (2)         (3)         (4)
A. Dependent Variable: 100 x 1[Any Disability Compensation]
Local Employment Rate           -0.294***   -0.227***   -0.199***   -0.208***
                                 (0.086)     (0.084)     (0.077)     (0.078)
AFQT                                        -0.253***   -0.219***   -0.214***
                                             (0.011)     (0.011)     (0.011)
Combat Exposure (E_i)                                                0.601***
                                                                    (0.053)
Wounded in Action                                                   28.98***
                                                                    (1.10)
Mean Outcome                     32.46       32.46       32.46       32.46
B. Dependent Variable: 100 x 1[Any Compensation for PTSD]
Local Employment Rate           -0.294***   -0.234***   -0.212***   -0.220***
                                 (0.073)     (0.071)     (0.065)     (0.066)
AFQT                                        -0.227***   -0.192***   -0.188***
                                             (0.009)     (0.009)     (0.009)
Combat Exposure (E_i)                                                0.622***
                                                                    (0.051)
Wounded in Action                                                   22.52***
                                                                    (1.16)
Mean Outcome                     18.56       18.56       18.56       18.56
C. Dependent Variable: 100 x 1[Any Compensation for a Physical Disability]
Local Employment Rate           -0.205***   -0.154**    -0.141**    -0.148**
                                 (0.068)     (0.067)     (0.061)     (0.062)
AFQT                                        -0.191***   -0.169***   -0.163***
                                             (0.009)     (0.009)     (0.009)
Combat Exposure (E_i)                                                0.290***
                                                                    (0.049)
Wounded in Action                                                   32.53***
                                                                    (1.17)
Mean Outcome                     22.58       22.58       22.58       22.58
Individual Controls                                         X           X
Clusters                         1,048       1,048       1,048       1,048
Observations                    55,936      55,936      55,936      55,936

Notes: This table reports estimates of the effects of local employment rates on any DC receipt, any PTSD disability receipt, and any physical disability receipt for veterans in the National Guard sample. All dependent variables reflect disability ratings as of January 2014. Physical disabilities are all disabilities other than mental disorders and tinnitus (ringing of the ears). The local employment rate is the employment-to-population ratio for non-veteran men born between 1974 and 1986, by race (white or non-white), within each Super PUMA of the 2005-2009 ACS. All regressions include a fixed effect for each combination of (state) x (deployment year) x (race). Individual controls (columns 3 and 4) include year-of-birth dummies, military occupation dummies, military rank dummies, and education dummies. Standard errors are clustered on each combination of (Super PUMA) x (race). ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Table 1.7. Impact of Local Employment Rates on CDRs, PTSD Ratings, and Physical Disability Ratings

                                   (1)         (2)         (3)         (4)
A. Dependent Variable: Combined Disability Rating (CDR)
Local Employment Rate           -0.218***   -0.168***   -0.154***   -0.160***
                                 (0.061)     (0.059)     (0.053)     (0.054)
AFQT                                        -0.188***   -0.157***   -0.153***
                                             (0.007)     (0.007)     (0.007)
Combat Exposure (E_i)                                                0.342***
                                                                    (0.035)
Wounded in Action                                                   23.76***
                                                                    (0.88)
Mean Outcome                     16.46       16.46       16.46       16.46
B. Dependent Variable: PTSD Rating
Local Employment Rate           -0.211***   -0.174***   -0.164***   -0.169***
                                 (0.049)     (0.048)     (0.044)     (0.045)
AFQT                                        -0.140***   -0.115***   -0.113***
                                             (0.006)     (0.006)     (0.006)
Combat Exposure (E_i)                                                0.296***
                                                                    (0.029)
Wounded in Action                                                   12.96***
                                                                    (0.77)
Mean Outcome                      9.95        9.95        9.95        9.95
C. Dependent Variable: Physical Disability Rating
Local Employment Rate           -0.053*     -0.033      -0.031      -0.034
                                 (0.029)     (0.029)     (0.027)     (0.027)
AFQT                                        -0.074***   -0.064***   -0.061***
                                             (0.004)     (0.004)     (0.004)
Combat Exposure (E_i)                                                0.076***
                                                                    (0.020)
Wounded in Action                                                   17.58***
                                                                    (0.74)
Mean Outcome                      6.97        6.97        6.97        6.97
Individual Controls                                         X           X
Clusters                         1,048       1,048       1,048       1,048
Observations                    55,936      55,936      55,936      55,936

Notes: This table reports estimates of the effects of local employment rates on CDRs, PTSD ratings, and physical ratings for veterans in the National Guard sample. All dependent variables reflect disability ratings as of January 2014. CDRs, PTSD ratings, and physical ratings equal zero for veterans who did not receive DC, benefits for PTSD, or benefits for physical conditions, respectively. Veterans with an IU benefit are given a CDR of 100 and a PTSD or physical rating of 100 if the IU benefit is because of PTSD or a physical condition. A veteran's physical rating is his CDR after excluding all mental disabilities and tinnitus. The local employment rate is the employment-to-population ratio for non-veteran men born between 1974 and 1986, by race (white or non-white), within each Super PUMA of the 2005-2009 ACS. All regressions include a fixed effect for each combination of (state) x (deployment year) x (race). Individual controls include year-of-birth dummies, military occupation dummies, military rank dummies, and education dummies. Standard errors are clustered on each combination of (Super PUMA) x (race). ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Table 1.8. Impact of Local Employment Rates on Disability Compensation - Extensive and Intensive Margins

A. PTSD Disability Rating Outcomes (dependent variable: 100 x 1[PTSD Rating in indicated range])
                           >0          ≥30          ≥50          ≥70        =100/IU
                          (1)          (2)          (3)          (4)          (5)
Local Employment Rate   -0.220***    -0.208***    -0.193***    -0.184***    -0.098***
                        (0.066)      (0.067)      (0.063)      (0.046)      (0.024)
Combat Exposure (E_i)    0.622***     0.564***     0.336***     0.162***     0.071***
                        (0.051)      (0.046)      (0.040)      (0.031)      (0.021)
Wounded in Action       22.52***     21.74***     16.24***      8.80***      4.52***
                        (1.16)       (1.15)       (1.13)       (0.93)       (0.68)
Mean Outcome            18.56        17.57        12.15         6.53         2.82
Clusters                 1,048        1,048        1,048        1,048        1,048
Observations            55,936       55,936       55,936       55,936       55,936

B. Physical Disability Rating Outcomes (dependent variable: 100 x 1[Phys. Rating in indicated range])
                           >0          ≥30          ≥50          ≥70        =100/IU
                          (1)          (2)          (3)          (4)          (5)
Local Employment Rate   -0.148**     -0.075        0.012        0.003       -0.003
                        (0.062)      (0.050)      (0.029)      (0.017)      (0.008)
Combat Exposure (E_i)    0.290***     0.136***     0.038        0.011        0.002
                        (0.049)      (0.037)      (0.026)      (0.018)      (0.007)
Wounded in Action       32.53***     27.49***     19.30***     11.01***      2.39***
                        (1.17)       (1.24)       (1.04)       (0.83)       (0.36)
Mean Outcome            22.58        10.96         5.03         2.11         0.46
Clusters                 1,048        1,048        1,048        1,048        1,048
Observations            55,936       55,936       55,936       55,936       55,936

Notes: This table reports estimates of the effects of local employment rates on PTSD disability awards and physical disability awards. The dependent variable in each regression equals 100 for veterans with an award rating greater than or equal to the cutoff specified in each column, and 0 otherwise. The notes in Table 1.7 describe the construction of PTSD ratings, physical disability ratings, and local employment rates. All regressions include a fixed effect for each combination of (state) x (deployment year) x (race), a linear term for AFQT, year-of-birth indicators, military occupation indicators, military rank indicators, and education indicators. Standard errors are clustered on each combination of (Super PUMA) x (race). ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Table 1.9. Interaction of Local Employment Rates and Combat Exposure Measures
Dependent Variable: PTSD Disability Rating

                                  (1)        (2)        (3)        (4)        (5)        (6)
Local Employment Rate          -0.167***  -0.165***             -0.166***  -0.15
                                (0.046)    (0.047)               (0.044)   (0.044)
Combat Exposure (E_i)           0.361***   0.376***   0.376***
                                (0.032)    (0.030)    (0.030)
(E_i) x (Local Emp. Rate)                  0.002     -0.001
                                          (0.006)    (0.005)
Wounded in Action (WIA)                                         14.03***   14.06***   13.89***
                                                                (0.76)     (0.75)     (0.80)
(WIA) x (Local Emp. Rate)                                                  -0.25*     -0.22
                                                                           (0.13)     (0.14)
(Super PUMA) x (Race) x
(Deployment Year) FE                                     X                               X
Mean Outcome                     9.95       9.95       9.95       9.95       9.95       9.95
Clusters                        1,048      1,048      1,048      1,048      1,048      1,048
Observations                   55,936     55,936     55,936     55,936     55,936     55,936

Notes: This table reports estimates of equations (7) and (8) for veterans in the National Guard sample. The dependent variable is a veteran's PTSD rating in January 2014. Veterans who do not receive compensation for PTSD have a rating of zero and veterans who receive an IU award for PTSD are given a PTSD rating of 100. The local employment rate is the employment-to-population ratio for non-veteran men born between 1974 and 1986, by race (white or non-white), within each Super PUMA of the 2005-2009 ACS. The results reported in columns 1, 2, 4, and 5 include a fixed effect for each combination of (state) x (deployment year) x (race) while the results reported in columns 3 and 6 include a fixed effect for each combination of (Super PUMA) x (deployment year) x (race). All regressions include a linear term for AFQT, year-of-birth indicators, military occupation indicators, military rank indicators, and education indicators. Standard errors are clustered on each combination of (Super PUMA) x (race). ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Table 1.10. Impact of Local Employment Rates on DoD Disability, Mortality, and Veteran Health

A. DoD Disability and Mortality Outcomes
                           100 x           100 x            100 x           100 x
                        1[Any DoD     1[DoD Disability  1[DoD Physical  1[Deceased by
                        Disability]      for PTSD]        Disability]      May 2012]
                           (1)              (2)              (3)              (4)
Local Employment Rate    -0.010           -0.001            0.004           0.019*
                         (0.018)          (0.011)          (0.016)         (0.011)
Combat Exposure (E_i)     0.013            0.018            0.011          -0.008
                         (0.018)          (0.013)          (0.016)         (0.009)
Wounded in Action        11.43***          4.43***         10.93***        -0.05
                         (0.87)           (0.55)           (0.86)          (0.26)
Mean Outcome              2.426            0.833            2.081           0.828
Clusters                  1,048            1,048            1,048           1,048
Observations             55,936           55,936           55,936          55,936

B. Veteran Health Index Outcomes
                       Veteran Self-      Veteran          Veteran
                         Reported         Physical        Cognitive
                     Disability Index   Health Index    Health Index
                           (1)              (2)              (3)
Local Employment Rate    -0.100*          -0.050           -0.055
                         (0.060)          (0.045)          (0.039)
Mean Outcome              7.210            4.979            2.939
Clusters                  1,047            1,047            1,047
Observations             55,928           55,928           55,928

Notes: This table reports estimates of equation (6) for veterans in the National Guard sample where the dependent variable is described by the header for each column. A veteran is considered deceased if he was identified as such in the SSA Death Master File prior to June 2012. The Veteran Self-Reported Disability Index is the percentage of male, post-9/11 veterans in each Super PUMA (by race) of the 2009-2011 ACS who were born in 1974 or later and who reported having any disability other than one involving hearing or eyesight. The Veteran Cognitive Health Index and the Veteran Physical Health Index are the percentage of male, post-9/11 veterans in each Super PUMA (by race) of the 2009-2011 ACS who reported having cognitive difficulties and physical disabilities, respectively. The local employment rate is the employment-to-population ratio for non-veteran men born between 1974 and 1986, by race (white or non-white), within each Super PUMA of the 2005-2009 ACS. All regressions include year-of-birth indicators, a linear term for AFQT, military occupation indicators, military rank indicators, education indicators, and a fixed effect for each combination of (state) x (deployment year) x (race). Standard errors are clustered on each combination of Super PUMA and race (white or non-white). ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Appendix Table 1.1. Covariate Balance on E_i

                                           Regression of LHS Variable on E_i
Baseline Characteristic           Mean
                                   (1)           (2)             (3)
AFQT                              59.02       -0.0488**        0.0218
                                 [19.50]       (0.0215)       (0.0207)
Age in 2012                       33.33       -0.0189***      -0.0028
                                  [4.07]       (0.0035)       (0.0036)
Local employment rate             80.73       -0.0006          0.0052
                                  [5.91]       (0.0093)       (0.0095)
GED or less than high school       0.17        0.0000         -0.0005
                                               (0.0004)       (0.0004)
Some college or more               0.12        0.0003          0.0008**
                                               (0.0003)       (0.0003)
No dependents                      0.70        0.0014***       0.0002
                                               (0.0004)       (0.0004)
(State) x (Race) x (Dep Year) FE                  X               X
Occupation and Rank FE                                            X
P-Value (Joint χ² Test)                         0.000           0.172
Clusters                                        1,048           1,048
Observations                      55,936       55,936          55,936

Notes: This table reports coefficients from regressions of the baseline characteristic listed on E_i, the combat exposure measure described in section 3.3. All regressions include a fixed effect for each combination of (state) x (deployment year) x (race). All baseline characteristics, including a veteran's state of residence, are determined at the start of a veteran's first deployment between 2003 and 2006. The construction of local employment rates is described in section 3.2. Joint χ² tests are for the null hypothesis that the coefficients on combat exposure in all regressions are equal to zero. Standard deviations are reported in brackets. Standard errors, clustered on each combination of (Super PUMA) x (race), are reported in parentheses. ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Appendix Table 1.2. Impact of Local Employment Rates on IU and 100 Ratings

A. PTSD Disability Rating Outcomes
                            100 x            100 x            100 x
                       1[PTSD Rating    1[PTSD Rating    1[PTSD Rating
                          =100/IU]          =IU]             =100]
                            (1)              (2)              (3)
Local Employment Rate    -0.098***        -0.024**         -0.074***
                         (0.024)          (0.012)          (0.019)
Combat Exposure (E_i)     0.071***         0.021*           0.050***
                         (0.021)          (0.012)          (0.017)
Wounded in Action         4.519***         2.451***         2.068***
                         (0.677)          (0.498)          (0.464)
Mean Outcome               2.82             1.24             1.58
Clusters                  1,048            1,048            1,048
Observations             55,936           55,936           55,936

B. Physical Disability Rating Outcomes
                            100 x            100 x            100 x
                       1[Phys. Rating   1[Phys. Rating   1[Phys. Rating
                          =100/IU]          =IU]             =100]
                            (1)              (2)              (3)
Local Employment Rate    -0.003           -0.001           -0.002
                         (0.008)          (0.005)          (0.006)
Combat Exposure (E_i)     0.002            0.004           -0.002
                         (0.007)          (0.005)          (0.005)
Wounded in Action         2.389***         0.743***         1.646***
                         (0.356)          (0.217)          (0.291)
Mean Outcome               0.46             0.18             0.28
Clusters                  1,048            1,048            1,048
Observations             55,936           55,936           55,936

Notes: This table reports estimates of the effects of local employment rates on PTSD disability awards and physical disability awards. The dependent variable in each regression equals 100 for veterans with an award specified in each column, and 0 otherwise. The notes in Table 1.7 describe the construction of PTSD ratings, physical disability ratings, and local employment rates. All regressions include a fixed effect for each combination of (state) x (deployment year) x (race), a linear term for AFQT, year-of-birth indicators, military occupation indicators, military rank indicators, and education indicators. Standard errors are clustered on each combination of (Super PUMA) x (race). ***, **, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Appendix Table 1.3. Impact of Local Employment Rates on Disability Compensation Outcomes - Extensive and Intensive Margins

                          100 x          100 x          100 x          100 x          100 x
                       1[Rating>0]   1[Rating>=30]  1[Rating>=50]  1[Rating>=70]  1[Rating=100/IU]
                           (1)            (2)            (3)            (4)            (5)

A. PTSD Disability Rating Outcomes
Local Employment Rate   -0.220***      -0.208***      -0.193***      -0.184***      -0.098***
                         (0.066)        (0.067)        (0.063)        (0.046)        (0.024)
Mean Outcome             18.56          17.57          12.15           6.53           2.82

B. Physical Disability Rating Outcomes
Local Employment Rate   -0.148**       -0.075          0.012          0.003         -0.003
                         (0.062)        (0.050)        (0.029)        (0.017)        (0.008)
Mean Outcome             22.58          10.96           5.03           2.11           0.46

C. Non-PTSD Rating Outcomes
Local Employment Rate   -0.141**       -0.075          0.009          0.009          0.003
                         (0.071)        (0.053)        (0.036)        (0.021)        (0.010)
Mean Outcome             28.43          14.79           7.25           3.22           0.74

D. Mental Disorder Rating (excluding PTSD) Outcomes
Local Employment Rate    0.041*         0.023          0.007          0.001          0.006
                         (0.022)        (0.019)        (0.012)        (0.008)        (0.006)
Mean Outcome              3.66           2.84           1.24           0.58           0.27

E. Tinnitus Outcome (all tinnitus diagnoses result in a rating of 10)
Local Employment Rate   -0.106**
                         (0.053)
Mean Outcome             16.69
Notes: This table reports estimates of the effects of the local employment rates on PTSD disability awards, physical awards, non-PTSD awards, mental
awards for conditions other than PTSD, and tinnitus. The dependent variable in each regression equals 100 for veterans with an award rating greater
than or equal to the cutoff specified in each column, and 0 otherwise. The notes in Table 7 describe the construction of PTSD ratings, physical disability
ratings, and local employment rates. Non-PTSD ratings represent a veteran's CDR from all conditions other than PTSD while mental disability ratings
represent a veteran's CDR based on all mental conditions other than PTSD. All regressions include a fixed effect for each combination of (state) x
(deployment year) x (race), a linear term for AFQT, year-of-birth indicators, military occupation indicators, military rank indicators, and education
indicators. The number of observations in each regression is 55,936 and the number of clusters is 1,048. Standard errors are clustered on each
combination of (Super PUMA) x (race). ***,**, and * denote significance at the 1%, 5%, and 10% levels, respectively.
Appendix Table 1.4. Impact of Local Employment Rates on Military Outcomes

                            100 x 1[Exit Army     100 x 1[More than
                             by June 2013]        One Deployment]
                                  (1)                   (2)
Local Employment Rate           -0.074                 0.053
                                (0.071)               (0.065)
Combat Exposure (E,)             0.187***             -0.306***
                                (0.046)               (0.059)
Wounded in Action                1.83                  3.55***
                                (1.12)                (1.22)
Mean Outcome                    65.130                31.742
Clusters                         1,048                 1,048
Observations                    55,936                55,936
Notes: This table reports estimates of equation (6) for veterans in the National
Guard sample where the dependent variable is described by the header for each
column. The local employment rate is the employment-to-population ratio for nonveteran men born between 1974 and 1986, by race (white or non-white), within
each Super PUMA of the 2005-2009 ACS. All regressions include year-of-birth
indicators, a linear term for AFQT, military occupation indicators, military rank
indicators, education indicators, and a fixed effect for each combination of
(state) x (deployment year) x (race). Standard errors are clustered on each
combination of Super PUMA and race (white or nonwhite). ***,**, and * denote
significance at the 1%, 5%, and 10% levels, respectively.
Chapter 2. The Impact of Disability Benefits on Labor Supply:
Evidence from the VA's Disability Compensation Program*
Co-Authored with David H. Autor (MIT and NBER), Mark Duggan (Stanford and NBER), and David S.
Lyle (United States Military Academy)
April 2015
Abstract
Combining administrative data from the U.S. Army, Department of Veterans Affairs (VA) and the U.S.
Social Security Administration, we analyze the effect of the VA's Disability Compensation (DC) program on
veterans' labor force participation and earnings. The largely unstudied Disability Compensation program
currently provides income and health insurance to almost four million veterans of military service who suffer
service-connected disabilities. We study a unique policy change, the 2001 Agent Orange decision, which
expanded DC eligibility for Vietnam veterans who had served in-theatre to a broader set of conditions such
as type 2 diabetes. Exploiting the fact that the Agent Orange policy excluded Vietnam era veterans who did
not serve in-theatre, we assess the causal effects of DC eligibility by contrasting the outcomes of these two
Vietnam-era veteran groups. The Agent Orange policy catalyzed a sharp increase in DC enrollment among
veterans who served in-theatre, raising the share receiving benefits by five percentage points over five years.
Disability ratings and payments rose rapidly among those newly enrolled, with average annual non-taxed
federal transfer payments increasing to $17K within five years. We estimate that benefits receipt reduced
labor force participation by 18 percentage points among veterans enrolled due to the policy, though measured
income net of transfer benefits rose on average. Consistent with the relatively advanced age and diminished
health of Vietnam era veterans in this period, we estimate labor force participation elasticities that are
somewhat higher than among the general population.
* This research was supported by the U.S. Social Security Administration through grant #10-P-98363-1-05 to the
National Bureau of Economic Research as part of the SSA Retirement Research Consortium. The views and findings
expressed herein are those of the authors and do not purport to reflect the position of the U.S. Military Academy, the
Department of the Army, the Department of Defense, the SSA, or the NBER. We are grateful to Josh Angrist, Orley
Ashenfelter, Mary Daly, and seminar participants at Arizona State University, the Federal Reserve Board, Princeton
University, and Stanford University for helpful comments. We are indebted to Luke Gallagher of the Army Office of
Economic Manpower Analysis for outstanding research assistance and to Mike Risha of the Social Security
Administration for assistance with all aspects of data development and interpretation.
2.1 Introduction
This chapter investigates the effect of the U.S. Department of Veterans Affairs' (VA's) Disability
Compensation (DC) program on the labor supply of military veterans. Since the ratification of the U.S.
Constitution in 1789, the federal government has provided cash benefits to disabled veterans. During the
Civil War, the benefit was revised from a flat payment scheme to a graduated schedule based on disability
severity. Multiple governmental agencies administered veterans' benefits until the summer of 1930, when
they were consolidated under a new federal agency called the Veterans' Administration.34
We focus on a major legislative change that took effect in 2001, which generated a plausibly exogenous
increase in the generosity of disability benefits for one group of Vietnam Era veterans but not another.35
Motivated by an Institute of Medicine study that linked exposure to Agent Orange and other herbicides used
by the U.S. military during the Vietnam War to the onset of type 2 diabetes, the VA in July of 2001
expanded the medical eligibility criteria for Vietnam veterans to include diabetes as a covered condition.
This coverage expansion applied to veterans who served "in-theatre" in Vietnam, Cambodia, or Laos during
the 1964 to 1975 period. It did not, however, apply to the approximately 55 percent of Vietnam era veterans
who did not serve in theatre during the war.
The 2001 policy change coincided with a sharp acceleration in the number of veterans receiving DC
benefits, documented in Figure 2.1. Some of this overall increase was attributable to a much higher rate of
DC enrollment among veterans serving in the 1990s and 2000s than among their counterparts from earlier
service eras.36 But much of it was driven by the rise in DC enrollment among Vietnam era veterans. As
shown in Figure 2.2, the fraction of Vietnam era veterans receiving DC benefits had been trending up
gradually prior to the 2001 policy change so that 9 percent of Vietnam-era veterans received DC benefits in
that year. But there was a significant break in that trend after 2001 so that by 2013 more than 17 percent of
34 This was changed to the Department of Veterans Affairs in 1989.
35 See Gruber (2000) for an analysis of a reform to the federal government's disability program in all parts of Canada except for Quebec that is estimated to have reduced labor supply.
36 Veterans from the Gulf War and Global War on Terror are 2-3 times more likely than veterans from WWII or the Korean War era to receive DC benefits. Shifts in the composition of veterans (as those from older eras die and the recent era join the ranks) have contributed to a substantial increase in total DC enrollment.
Vietnam era veterans were receiving DC benefits. No similar changes in rates of DC enrollment occurred for
veterans from earlier service eras. 37
The policy-induced increase in DC enrollment provides an unusual opportunity to study how disability
benefits affect the labor supply of near-elderly male veterans. Adopting the terminology used by the military,
we distinguish 'boots on the ground' (BOG) Vietnam era veterans, the veterans directly affected by
the Agent Orange policy, from 'not on ground' (NOG) veterans, whose DC benefit eligibility was not
expanded. We analyze unique administrative data for a sample of more than 4 million U.S. Army veterans to
compare the evolution of labor market outcomes among BOG veterans to other Vietnam era veterans who
did not serve in the Vietnam theatre during the conflict there. By using other Vietnam era veterans as our
comparison group, we account for the possibility that veterans would have retired sooner (or later) than non-veterans for reasons unrelated to the DC program. And given the large number of years of pre-2001 data
included in our research data, we can control for possible differential trends between BOG and NOG
veterans.
A large body of research investigates the effects of U.S. federal disability programs on health,
employment, poverty, consumption and welfare. 38 This research has focused almost exclusively on the Social
Security Disability Insurance (SSDI) program, with early studies considering the effects on labor force
participation (Parsons, 1980; Bound, 1989; Bound and Waidmann, 1992) and subsequent studies exploring
the sensitivity of the program to economic conditions (Black et al, 2002) and the labor market effects of
changes in the program's medical eligibility criteria and in effective replacement rates (Autor and Duggan,
2003).39 One challenge for this research is that, because SSDI is a federal program, there is no natural
comparison group against which to estimate the effects of the program. To address this issue, more recent
research has estimated the effect of SSDI on the labor supply of applicants and beneficiaries by using
variation in the propensity to award disability benefits across disability examiners (Maestas et al, 2013;
37 See Duggan, Rosenheck, and Singleton (2010) for a comparison to veterans from earlier service eras.
38 See for example the Handbook chapter by Bound and Burkhauser (1999).
39 The labor supply consequences of the federal Supplemental Security Income (SSI) program are rarely studied because SSI largely serves individuals with extremely limited prior work histories (those with significant work histories normally qualify for SSDI).
Autor et al., 2015b) and administrative law judges (French and Song, 2014). The findings from these studies,
which utilize large-scale administrative data sets on both SSDI enrollment and earnings, suggest that labor
force participation among marginal applicants-that is, those who would receive an SSDI award from a
lenient judge or examiner but not from a stricter one-declines by about thirty percentage points as a result
of receiving an SSDI award.40
There has been much less research on the VA's Disability Compensation program. Autor and Duggan
(2007), Autor et al (2011), and Coile et al (2015) use data from the Current Population Survey (CPS) to
explore how labor force participation changed for male Vietnam-era veterans relative to similarly aged nonveteran males after the 2001 policy change. All three studies demonstrate a significantly larger decline in
labor force participation among Vietnam-era veterans in the post-2001 period, though their confidence
intervals are compatible with a wide range of effect sizes. A concern with this body of work is that veterans
might retire sooner than non-veterans for reasons unrelated to the DC program.41 Using the Vietnam-era draft
lottery as an instrumental variable for Vietnam-era military service, Angrist et al (2010) estimate that
employment was lower and transfer income receipt was higher among low-skilled veterans than among low-skilled non-veterans. They hypothesize that the lower employment rate of Vietnam-era veterans is due to the
availability of Veterans Disability Compensation benefits. In related work on the labor supply of veterans,
Boyle and Lahey (2010) study the expansion of the Veterans Health Insurance program to non-disabled
veterans in the mid-1990s to analyze changes in labor force participation stemming from increased real
incomes and reductions in "job lock."
Using individual-level data from the U.S. Department of Veterans Affairs, we document that DC receipt
and enrollment growth were higher among BOG than NOG veterans prior to 2001, but these gaps were
40 Using an approach similar to Bound (1989), Chen and van der Klauuw (2008) estimate even smaller effects of SSDI enrollment, with an upper bound of a 20 percent reduction in labor force participation. Results from Von Wachter et al (2011) suggest labor supply effects may be larger for younger SSDI recipients. While SSDI reduces the incentive to work among recipients, the results of a recent policy change in Norway suggest that disability insurance recipients are responsive to changes in the magnitude of this incentive (Kostol and Mogstad, 2014).
41 Duggan et al (2010) use five years of data (odd-numbered years between 1997 and 2005) from the Veterans' Supplement to the CPS to compare changes in labor force participation among Vietnam veterans who served in theatre and other Vietnam era veterans. Given the small sample size, their estimates are very imprecise, as their confidence interval includes the full range of possible effect sizes (zero effect or a one-for-one reduction in labor force participation).
relatively stable. After 2001, however, the rate of DC enrollment grew much more rapidly among BOG
veterans, as shown in Figure 2.3. Between 2000 and 2006, the ratio of DC receipt among BOG relative to
NOG veterans rose from approximately two-to-one to three-to-one: almost one-in-four BOG veterans in our
analysis sample received DC benefits in 2006 versus one-in-twelve among veterans in our NOG sample. This
trend break was driven primarily by a sharp increase in the number of diabetes awards to BOG veterans as
shown in Figure 2.4. DC enrollment growth among BOG veterans shows essentially no break in trend when
one excludes DC recipients with a diabetes diagnosis (Figure 2.5). Using matched data from the Social
Security Administration, we also document a differential increase in SSDI enrollment after 2001 among
BOG veterans. A plausible channel for this effect, which is about one-tenth as large as the corresponding
change in DC enrollment, is that receipt of veterans disability benefits eases financial constraints associated
with exiting the labor force and applying for SSDI benefits. Moreover, since SSA regulations require
disability adjudicators to consider disability decisions of other federal agencies (U.S. General Accounting
Office, 2009), a veteran's enrollment in DC may also increase the likelihood that he or she applies for and
ultimately receives Social Security disability benefits.
By generating a sharp differential increase in DC enrollment among BOG veterans for reasons unrelated
to changing health, the Agent Orange policy change permits causal estimation of the effect of DC program
participation on the labor supply of near-elderly Vietnam veterans. Analyzing this policy-induced variation,
we find that labor force participation, defined as having strictly positive earnings for the year in our
administrative data, declined sharply among BOG relative to NOG veterans soon after the 2001 policy
change. The results are similar for younger (born 1949-51) and older (born 1946-48) Vietnam veterans in our
analysis sample, suggesting that this pattern reflects an effect of the policy rather than a tendency of BOG
veterans to retire sooner than their NOG counterparts for reasons unrelated to the DC program. For every 100
individuals who entered the DC program as a result of the policy change, we estimate that 18 drop out of the
labor force. The magnitude of this decline reflects in part the size of the cash transfers that DC beneficiaries
receive. Among veterans who entered the DC program after 2001, annual benefits averaged $10K in the first
year of enrollment and $17K in the fifth year of enrollment. Since DC benefits are not subject to state or
federal taxation, their after tax value is 30 to 40 percent greater than nominally equivalent labor income.
Indeed, the increase in disability benefit income among BOG veterans more than offset (on average) their
reduction in earnings, so that total incomes of BOG relative to NOG veterans rose steadily after 2001.
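The instrumental variables logic previewed here can be expressed as a simple Wald ratio. The 5 percentage point first stage comes from the abstract; the reduced-form value below is a hypothetical number chosen only so that the ratio reproduces the 18-per-100 estimate, not a figure reported in the text:

```python
# Wald/IV sketch: effect of DC enrollment on labor force participation.
first_stage = 0.05     # policy-induced rise in BOG DC enrollment (5 pp, from the abstract)
reduced_form = -0.009  # hypothetical differential fall in participation (0.9 pp)

# Effect per policy-induced enrollee = reduced form / first stage.
wald_estimate = reduced_form / first_stage
print(round(wald_estimate, 2))  # -0.18: 18 of every 100 induced enrollees exit
```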
Combining our labor force participation and benefit receipt estimates, we obtain a non-participation
elasticity of 0.50. This estimate is larger than the figure of 0.16 reported by Coile and Gruber (2007) for the
elasticity of retirement of near-elderly adults with respect to Social Security and other retirement wealth.42
Our estimates are highly consistent with the participation elasticities calculated by Boyle and Lahey (2010),
however, who studied labor supply of older non-disabled veterans ages 55 through 64 who were granted
access to VA health insurance in the mid-1990s.
Distinct from the SSDI program, which provides income replacement for beneficiaries who are unable to
work due to disability, DC benefits are awarded as compensation for service-related reductions in health and
thus, for the most part, are not contingent on veterans' past or present employment. This observation would
suggest that any DC-induced reduction in labor supply that we detect would be attributable to a non-incentive income effect. However, nearly 14 percent of BOG veterans who were receiving DC by the end of
our sample window were receiving maximum DC benefits of approximately $2,900 monthly because they
were deemed unable to work ('Individually Unemployable' or IU) due to their disability. For veterans
receiving the IU benefit and those seeking it, the labor supply effects that we estimate are likely to
encompass both income and incentive effects.
The labor supply effects that we document acquire added significance in light of the rapid growth in
Disability Compensation enrollment, with annual benefit payments of $49.2 billion in 2013 (U.S. Veterans
Benefits Administration, 2014). Today's veterans are substantially more likely than U.S. veterans of earlier
cohorts to obtain DC benefits; indeed, veterans who have exited military service since 2001 are more than
three times as likely as were Vietnam era veterans to receive disability compensation benefits soon after the
42 A venerable literature estimates labor supply elasticities, including Frisch (1959), Ashenfelter and Heckman (1974), Abbott and Ashenfelter (1976), and Chetty (2012) among many key contributions. Very few studies estimate income elasticities of participation, however. McClelland and Mok (2012) provide a recent review.
completion of service. This cross-cohort contrast suggests that Veterans Disability Compensation program
costs may rise substantially beyond what would be predicted based on earlier generations of veterans, a
concern highlighted by Bilmes and Stiglitz (2008).
The analysis proceeds as follows. Section 2.2 discusses the financial and labor force participation
incentives created by the Veterans Disability Compensation program. Section 2.3 describes the 2001 Agent
Orange decision that made type 2 diabetes a service-connected disability for veterans who served in the
Vietnam theatre. Section 2.4 details the construction of our data, which we use in Section 2.5 to analyze the
impact of the Agent Orange decision on veterans' enrollment in DC and their receipt of transfer income from
DC and two other federal disability programs, SSDI and Supplemental Security Income (SSI). Section 2.6
presents reduced form estimates of the impact of the Agent Orange policy on labor force participation and
total labor earnings. Section 2.7 combines these margins to provide instrumental variables estimates of the
impact of DC enrollment and total disability benefits payments on labor force participation. Section 2.8
documents the effect of the Agent Orange policy on total measured income inclusive of disability benefits
and net of any induced change in labor supply. Section 2.9 concludes.
2.2 The Veterans Disability Compensation Program: Eligibility, Benefits and Work Incentives
The DC program pays cash benefits and provides prioritized access to VA health facilities to military
veterans with service-connected medical conditions, meaning that they are caused or aggravated by their
military service. Unlike SSDI and SSI, federal programs that classify disability using a categorical (all-or-nothing) determination, the DC program rates disability on a discrete scale with eleven gradations ranging
from zero to 100 percent in ten percent increments. Ratings depend on the type and severity of the disability,
with more severe conditions receiving a higher rating.43 If the recipient receives ratings for multiple
disabilities, the recipient's Combined Disability Rating (CDR) is an increasing, concave function of the
43 The range of possible ratings differs among disabilities. For example, type 2 diabetes can have a rating of 10, 20, 40, 60, or 100 percent. Arthritis can be assigned a rating of 10 or 20 percent. For a list of conditions and ratings see http://www.warms.vba.va.gov/bookc.html. A disability with a 0 percent rating would not increase the monthly cash benefit but would entitle the veteran to priority for health care through the Veterans Health Administration.
individual ratings, where concavity prevents the combined rating from exceeding 100 percent.44
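The combination rule can be made concrete with a short calculation. This is an illustrative sketch of the residual-capacity arithmetic described in the accompanying footnote, not VA software, and `combined_disability_rating` is our own hypothetical function name:

```python
def combined_disability_rating(ratings):
    """Combine individual percentage ratings (e.g. [50, 50]) using the
    residual-capacity rule: each additional disability reduces only the
    capacity left unimpaired by the previously counted disabilities."""
    residual = 1.0  # fraction of earning capacity still unimpaired
    for r in ratings:
        residual *= 1.0 - r / 100.0
    combined = (1.0 - residual) * 100.0        # combined value before rounding
    rounded = int((combined + 5) // 10) * 10   # nearest 10; values ending in 5 round up
    return min(rounded, 100)

# Two disabilities rated 50%: 0.5 + (1 - 0.5) * 0.5 = 0.75, rounded to 80.
print(combined_disability_rating([50, 50]))  # 80
```

Because each new disability applies only to residual capacity, the combined value approaches but never exceeds 100 percent, which is the concavity noted in the text.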
A. Eligibility and benefits
Veterans seeking DC benefits apply to one of 56 regional offices of the Veterans Benefit Administration
(VBA), which collects necessary information and forwards the information to a Rating Board. For each
disability claimed, the Rating Board determines whether the disability is verified, whether it is service
connected, and if so, what rating it merits. During the 2000 fiscal year, more than 70 percent of those
applying for DC sought benefits for more than one medical condition (U.S. Veterans Benefits
Administration, 2001). Applicants face one of three possible outcomes: outright rejection, an award for some
but not all conditions, or an award for all conditions. In 2006, current DC beneficiaries averaged 2.97
disabilities per recipient, with the highest number of disabilities per capita among Gulf War and Vietnam Era
veterans, and the lowest number among WWII veterans (U.S. Veterans Benefits Administration, 2007).
Monthly benefits awarded by DC are an increasing and convex function of the veteran's CDR. In 2014, a
10 percent award provided a monthly payment of $131 whereas a 100 percent award provided a monthly
payment of $2,858.45 Veterans receiving a CDR of 30 percent or higher and who have spouses, dependent
children, or surviving parents also receive modest additional benefits. The VBA also considers employment
capability for veterans with severe disabilities. Veterans who have single disabilities rated at 60 percent or
above or a combined disability rating of at least 70 and one disability rated at least 40 can qualify for the
Individual Unemployability (IU) designation if VBA determines that they are unable "to secure and follow
a substantially gainful occupation by reason of service-connected disability." Veterans receiving the IU
designation are provided cash payments at the 100 percent CDR level even if their CDR is less than 100
44 If a claimant has multiple disabilities, only the claimant's 'residual ability' is considered when determining the effect of each additional disability on the CDR. For example, if a veteran has two disabilities rated at 50%, his CDR would be equal to the sum of 50% for the first disability and 50% of his residual capacity of 50% for the second disability, all rounded to the nearest increment of 10%. Thus, two disabilities rated at 50% result in a CDR of [0.5 + (1 - 0.5) * 0.5] = 0.75, which is then rounded up to 0.80.
45 The stated policy of the VBA is that the DC benefits schedule reflects the average reduction in earnings capacity for each value of the CDR. Since benefits determination depends only on CDR and family status, it is clear that the benefit payment will exceed the earnings loss for some veterans and fail to meet the earnings loss of others. In 2014, the monthly benefit schedule (by CDR) was: $131 (10%), $259 (20%), $401 (30%), $578 (40%), $822 (50%), $1,041 (60%), $1,312 (70%), $1,526 (80%), $1,714 (90%), and $2,858 (100%).
percent.
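The payment rule, including the IU provision just described, can be sketched with the 2014 monthly schedule quoted in the footnote above. This is an illustration only; `monthly_benefit` is a hypothetical helper name and dependent allowances are omitted:

```python
# 2014 monthly DC benefit by Combined Disability Rating (veteran alone),
# as quoted in the footnote above; dependent allowances are omitted.
SCHEDULE_2014 = {10: 131, 20: 259, 30: 401, 40: 578, 50: 822,
                 60: 1041, 70: 1312, 80: 1526, 90: 1714, 100: 2858}

def monthly_benefit(cdr, iu=False):
    """Monthly cash payment: IU recipients are paid at the 100 percent
    level regardless of their CDR; a 0 percent rating pays no cash."""
    if iu:
        return SCHEDULE_2014[100]
    return SCHEDULE_2014.get(cdr, 0)

print(monthly_benefit(60), monthly_benefit(60, iu=True))  # 1041 2858
```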
Veterans Disability Compensation benefits typically have longer award durations and fewer work
restrictions than other federal disability benefits. DC benefits are also not subject to federal income or payroll
tax; hence, a dollar in DC income is roughly equivalent to $1.30 to $1.50 in pre-tax earned income,
depending upon the recipient's marginal tax rate. DC benefits generally do not offset and are not offset by
other federal transfer benefits, and, once awarded, are rarely retracted. 46 Unlike federal SSDI benefits, DC
benefits do not terminate when a recipient reaches retirement age, even for recipients receiving the IU
benefit. Moreover, a veteran's ongoing receipt of DC benefits is neither work-contingent nor income-contingent, except for veterans who have received the IU rating.47
Thus, DC benefits are roughly akin to an
inflation-indexed annuity that provides monthly payments for as long as a veteran remains alive.
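The tax-equivalence arithmetic behind the $1.30 to $1.50 figure can be made explicit. This is a worked sketch; the 25 and 33 percent combined marginal rates are illustrative assumptions rather than figures from the text:

```python
def pretax_equivalent(dc_dollars, marginal_tax_rate):
    """Pre-tax earnings needed to match untaxed DC income:
    solve earnings * (1 - rate) = dc_dollars."""
    return dc_dollars / (1.0 - marginal_tax_rate)

# Illustrative combined marginal rates of 25% and 33% roughly bracket
# the $1.30-$1.50 range cited in the text.
print(round(pretax_equivalent(1.0, 0.25), 2))  # 1.33
print(round(pretax_equivalent(1.0, 1 / 3), 2))   # 1.5
```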
Appendix Table 2.1 summarizes DC cash benefits paid in fiscal year 2006, the final year for which we
have individual-level DC data in the analyses below. The first three columns enumerate the count of
recipients, the total dollars paid, and the average annual benefit in each CDR category at the end of fiscal
year 2006. The average annual payment to the 2.73 million DC recipients in this year was $10,862 per capita,
totaling approximately $29.6 billion for the year. Veterans with ratings between 0 and 20 percent accounted
for 44 percent of recipients but just 8 percent of dollars paid. Those with ratings at 70 percent or above
comprised 21 percent of the population and received 62 percent of the benefits payments. 48
Total DC benefits payments rose from $20.8 billion to $49.1 billion between 2001 and 2013 (in constant
2013 dollars). Simultaneously, the estimated veteran population declined from 26.1 million to 22.1 million
(VBA, 2002 and 2014). As a result of these changes, real annual DC expenditures per living veteran
46 A Veteran may receive both DC and SSDI payments without any reduction in benefits from either program, though SSI payments will generally be reduced or eliminated by DC payments.
47 A veteran can lose the IU rating if his annual labor market earnings (measured by SSA earnings data) exceeds a threshold amount, which was equal to $6,000 in 2004 and 2005. The General Accounting Office notes, however, that "this process relies on old data, outdated and time-consuming procedures, insufficient guidance, and weak eligibility criteria" (GAO 2006, p. 23).
48 The average monthly benefit amounts for those with ratings between 0 and 20 percent are very close to the baseline amounts because veterans with these ratings are not eligible for dependent benefits. The average amounts paid for those rated 60 percent and higher are substantially greater than the baseline amounts because many of these recipients are eligible for the 100 percent payment amount because they are receiving the IU benefit.
increased by 180 percent (from $798 to $2,234).
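The 180 percent figure follows directly from the totals just cited. The sketch below uses the rounded aggregates from the text, so the per-veteran dollar amounts come out slightly off the exact $798 and $2,234:

```python
# Aggregate DC payments (constant 2013 dollars) and veteran counts from the text.
spending = {2001: 20.8e9, 2013: 49.1e9}
veterans = {2001: 26.1e6, 2013: 22.1e6}

# Per-veteran spending and its growth over 2001-2013.
per_veteran = {year: spending[year] / veterans[year] for year in spending}
growth = per_veteran[2013] / per_veteran[2001] - 1  # roughly 1.8, i.e. ~180 percent
print({year: round(v) for year, v in per_veteran.items()}, round(100 * growth))
```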
B. Work Incentives under DC
The graduated scale of DC disability ratings creates a complex set of incentives. Though disability
ratings for DC recipients notionally depend exclusively on medical criteria rather than employment status,
veterans may nevertheless perceive that their disabilities will receive higher ratings if they are not employed
when applying to obtain or increase benefits. Veterans also face an incentive to repeatedly reapply to
increase their Combined Disability Ratings-and therefore their benefits-as their health conditions
evolve. 49 One consequence is that veterans' CDRs and benefit levels tend to rise steeply in the years
following enrollment, as shown in panels A and B of Table 2.1.50 This pattern of rapidly escalating benefits
following enrollment suggests that policies that induce veterans to obtain an initial DC award, even at a low
CDR, may lead to substantially larger claims over the longer term and discourage labor force participation.
The availability of the IU designation is likely to amplify these incentives. The IU benefit has
significant monetary value: a 2006 General Accounting Office report found that the average present
discounted incremental value of receiving an IU award in 2005 was approximately $300 to $460 thousand
for veterans age 20 (net of existing benefits), and was $89 to $142 thousand for veterans age 75 (U.S. GAO,
2006).51
The availability of this benefit appears likely to induce at least some subset of work-capable
veterans to curtail labor force participation to qualify. Once the IU designation is awarded, veterans face an
incentive to maintain low earnings since the benefit is technically only available for those with labor market
earnings at or below the poverty level for a single individual (U.S. GAO, 2006).52
49 In fact, we observe very few reductions in CDRs in our data, and it is possible that those few that exist reflect coding errors. Veterans face little risk of having their CDRs reduced after the initial award.
50 We describe the sample used to construct Table 2.1 in Section III below. Although our data codes DC receipt in each year from 1998 through 2006, we can only determine what year DC was awarded if a veteran is observed not receiving DC in a prior year. We can thus identify DC enrollment cohorts from 1999 forward, but not for 1998.
51 Among veterans in our sample who received benefits at the 100 percent disability level in 2006, about half were designated as IU. We henceforth do not distinguish between the IU benefit and 100 percent disability since many DC recipients with 100 percent disability may have previously qualified for the IU benefit with a lower CDR.
52 The fact that only veterans with severe disabilities (a CDR of 60 or higher) are eligible for the IU benefit might be expected to deter all but the most disabled veterans. But the data in Table 2.1 indicate that very high CDRs are not
The DC program may also alter work incentives through its interactions with other federal benefits
programs, SSDI in particular. Though the DC and SSDI programs have distinct disability screening criteria,
the medical information generated by the DC award may alert some veterans that they suffer from
impairments that could merit an SSDI award (and vice versa). Receipt of DC benefits may also render the
SSDI application process less financially onerous, since SSDI applicants must remain out of the labor force
for at least five months before receiving SSDI benefits. Because cash benefits from the two programs are
additive rather than offsetting, it is plausible that a veteran's receipt of either DC or SSDI benefits increases
his odds of applying for the other.53
2.3 The 2001 Agent Orange Decision, Type 2 Diabetes and 'Service-Connectedness'
For a disability to be classified as service-connected, it must be "a result of disease or injury incurred or
aggravated during active military service." This criterion makes it straightforward for a veteran to obtain
compensation for a tangible injury that occurs during military service but significantly more difficult to
obtain compensation for a disease that develops later in life, such as cancer or heart disease. Thus in 2006,
the five most prevalent service-connected disabilities were primarily battle traumas: hearing defects, tinnitus,
general musculoskeletal disorders, arthritis due to trauma, and scars (U.S. Veterans Benefits Administration,
2006). Nevertheless, disabilities that typically develop post-service are also prevalent: post-traumatic stress
disorder (PTSD) and hypertensive vascular disease (high blood pressure) were the sixth and ninth most
prevalent service-connected disabilities in 2006.
In November of 2000, type 2 diabetes was added to the list of compensable and presumptively serviceconnected impairments for Vietnam veterans who had served in theatre due to their potential exposure to the
herbicide Agent Orange. This policy change substantially weakened the link between service-connectedness
uncommon, even for veterans that initially enter with low or moderate CDRs. Among veterans awarded DC benefits in
1999, only 15 percent qualified for either the IUbenefit or 100 percent disability (panel C). Seven years later, in 2006,
three times that number (45 percent) of the 1999 DC enrollment cohort was either receiving the IU benefit or was 100
percent disabled.
1 The combination of VA health benefits and Medicare benefits from SSDI may also be more attractive than either
individually since VA and Medicare differ in ailments covered, rapidity of access to treatment, size of co-pays, and
coverage of prescription drugs.
74
and DC benefits for eligible veterans. The Agent Orange policy was years in the making. The U.S. military applied more than 19 million gallons of herbicides to defoliate Vietnamese jungle areas between 1962 and 1971, a quantity of defoliant sufficient to cover 8.5 percent of the country's land area (U.S. Department of Veterans Affairs, 2003).54 After the war ended, concern grew among veterans that their exposure to the dioxins in Agent Orange would have long-term adverse consequences. The VA responded by establishing the Agent Orange Registry in 1978. In 1991, Congress enacted the Agent Orange Act, which charged the National Academy of Sciences' Institute of Medicine (IOM) with reviewing the evidence for a link between Agent Orange exposure and the prevalence of certain medical conditions.
In a series of reports, the IOM found insufficient evidence to establish an association between dioxin exposure and diabetes. But the publication of two new studies in 1999 and 2000 that found an association between dioxin exposure and diabetes turned the tide against this longstanding consensus (Calvert et al., 1999; Air Force Health Study, 2000). In October of 2000, the IOM concluded that there was "limited/suggestive evidence of an association between exposure to the herbicides used in Vietnam or the contaminant dioxin and type 2 diabetes" (IOM, 2000).55 This in turn prompted the Secretary of Veterans Affairs' decision to classify type 2 diabetes as presumptively service-connected.
The VA's adoption of the Agent Orange policy offers an unusual opportunity: almost three decades after the end of the Vietnam War, veterans who served in theatre were unexpectedly granted presumptive eligibility for financially significant Disability Compensation benefits without a precipitating change in health. This chapter exploits the contrast in expanded benefits eligibility between in-theatre and not-in-theatre veterans, whom we refer to as 'boots on ground' (BOG) and 'not on ground' (NOG) veterans, to study the impact of benefits receipt on veterans' labor supply.
54 Agent Orange accounted for more than 80 percent of the total amount of herbicide dispensed.
55 The same 2000 report by the National Academy of Sciences' Institute of Medicine explained that any increased risk of type 2 diabetes due to Agent Orange appeared to be small and that family history, physical inactivity, and obesity were far greater predictors of diabetes (IOM, 2000).
2.4
Data and Analytic Sample
Our research draws on a unique set of linked administrative data sources. The first is a near census of
approximately four million veterans who left the Army between 1968 and 1985. The U.S. Army's Office of
Economic and Manpower Analysis (OEMA) constructed this database by combining two files from the
Defense Manpower Data Center (DMDC): the first DMDC file enumerates essentially every person who left
the Army between 1968 and 1985 (designated as the service member's "loss year"); OEMA then merged this
"loss year" file with DMDC's Vietnam file, which identifies the vast majority of veterans who served in the
Vietnam theatre.56 Approximately 36 percent of this sample served with boots on the ground in Vietnam, Cambodia, or Laos during the Vietnam War era and was therefore potentially directly affected by the 2001 expansion of the DC program's medical eligibility criteria. More than 35 percent of the NOG sample had a start year of 1976 or later and thus did not serve during the Vietnam War era.57 Appendix Table 2.2 and Table 2.2 show summary statistics on the distribution of loss years, years-of-birth, and start years for military service.
We use three additional data sources to measure mortality, employment, and disability outcomes for
Vietnam era veterans. To measure DC participation, OEMA obtained from VA detailed information about
veterans' enrollment and DC benefits received from VA programs in September of each year from 1998
through 2006. To account for mortality, OEMA merged their data to the Social Security Administration
Death Master File (DMF), which includes the year of death for any individual in the sample who died prior
to 2008. According to DMF data, approximately 13 percent of the 4.1 million individuals in the sample were
deceased by late 2007.
To collect employment, earnings, SSDI, and SSI information, OEMA contracted with the U.S. Social Security Administration (SSA) to match veterans in the OEMA data set to SSA records, enumerating wage earnings and Social Security benefits in each year from 1976 through 2007.58 This resulted in a successful match for 3.8 of the 4.1 million veterans in the full data set, with overall match rates exceeding 90 percent for both the BOG and NOG samples, as detailed in Appendix Table 2.3.
56 The file does not include the comparatively small number of Army service members who died during service. U.S. government archives record 38,224 Army service members who were killed in action in Vietnam.
57 Veterans who were in the "loss year" file and the Vietnam file are in the BOG sample, while veterans who were in the "loss year" file but not the Vietnam file are part of the NOG sample. The Vietnam file contained loss year information for approximately 25 percent of veterans, which explains why a small fraction of the BOG sample has a loss year before 1968 or after 1985. Our final analysis sample, described below, excludes all veterans with loss years before 1968 or after 1985, but does contain a few veterans who were in the Vietnam file but not in the "loss year" file. This group comprises only 1.25 percent of our final sample, and our results are nearly identical when we exclude them from our analysis.
Confidentiality rules prevent SSA from disclosing individual earnings or benefits data. SSA instead
provided statistics on earnings and benefits for cells containing five to nine veterans. These statistics include
the number of cell members with zero earnings, mean labor earnings, the number receiving SSDI and SSI,
and mean SSDI and SSI benefit amounts. In constructing cells, we grouped individuals with similar
background characteristics, including gender, race, BOG and NOG status, and year of birth. Our final
analysis sample consists of veterans who joined the military between 1966 and 1971, and were born between
1946 and 1951 (see Table 2.2). The appendix to this chapter contains more details on the cell construction
and sample selection.
Panel A of Table 2.3 compares the BOG and NOG samples. The fraction nonwhite is approximately
equal in the two samples (11.3 and 11.8 percent, respectively) as is the fraction with positive earnings in
1998 (84.4 and 85.1 percent). Among those with a non-missing Armed Forces Qualification Test score
(AFQT), the average scores are also relatively close (52.1 and 53.4). And by construction, the average year-of-birth and the average start year are comparable in the two groups. There are clear differences between the
BOG and NOG samples as well. BOG veterans are more than twice as likely as NOG veterans to be
receiving DC benefits in 2000 (14.3 versus 6.5 percent), just prior to the 2001 policy change described
above. This in part reflects the greater toll that military service took on those who served in the Vietnam
theatre.
An examination of trends in key outcome variables in the BOG and NOG samples prior to the 2001 policy change reveals many similarities: the fraction with zero earnings increased by similar, though not identical, amounts for both samples between 1998 and 2000 (by 1.5 and 1.0 points for the BOG and NOG samples, respectively), as did the fraction receiving SSDI benefits (1.2 and 0.9 points for the BOG and NOG samples). Our analysis will control for any differential trends in outcome variables that precede the policy change.
58 SSA required a match on social security number, last name, and date of birth for each individual.
59 Additionally, members of the NOG sample are more likely to have missing data on education and less likely to be missing AFQT score data.
A remaining concern with the primary analytic sample is that individuals who are matched in the SSA
data may systematically differ from those who are not. This is especially an issue for veterans who died prior
to 1997. As shown in columns 3 and 4 of panel B in Table 2.3, the fraction of veterans in the full sample who
were deceased as of 1997 was 6.2 and 6.1 percent in the full BOG and NOG samples but only 4.8 and 2.9 percent in the SSA-verified samples, though, notably, the fraction of veterans who died between 1997 and 2007 in the BOG and NOG samples is closely comparable in both the full sample and the SSA-verified
sample. 60 The lower SSA match rate for deceased veterans is a consequence of SSA's record matching
criteria, which require a match on subjects' full names as well as SSN and date of birth. Due to poor optical
character recognition, the NOG data contained a relatively high frequency of garbled names. We worked
with the credit information provider TransUnion to obtain names for those with incomplete information.61
Due to limited availability of archival credit information prior to 1997, TransUnion could not provide names
for most veterans who passed away before that same year, leading to a low overall match rate for NOG
soldiers who died prior to 1997. Differential mortality match rates for the BOG and NOG samples are not a major threat to the validity of our research design, however, since our primary focus is on outcomes from 1998 forward. Thus, soldiers who were deceased as of 1997 are excluded from the analysis.
To benchmark the representativeness of the sample, we compare the OEMA data with a similarly drawn
group of males from the 2000 IPUMS Census file. Using the 5 percent Census IPUMS extract, we draw a
group of all males born between 1946 and 1951, and further limit the sample to (self-reported) Vietnam-era
60 We constructed the sample summarized in panel B of Table 2.3 in the same manner as the sample summarized in panel A, except that we did not exclude veterans who did not match with SSA earnings information.
61 TransUnion performed this work pro bono. Our original sample had 1.7 million observations with a missing name. TransUnion was able to provide names for 1.5 million of these observations upon confirming a match with date of birth and social security number.
veterans.62 Appendix Table 2.4 provides a side-by-side comparison of age, race, schooling, annual earnings,
and share with non-zero earnings in the OEMA and Census samples. Race and labor force participation rates
are closely comparable: the fractions of the OEMA and Census samples with non-zero earnings are 84.1
percent and 82.2 percent, respectively, while the shares nonwhite are 11.3 and 13.3 percent. Reflecting the fact that
OEMA data code education at the time of military enlistment (average age of 20) while the Census data code
educational attainment in late adulthood, the OEMA sample reports considerably lower educational
attainment than the Census sample. Average earnings in the OEMA sample are also about 10 percent lower
than in the Census sample. This gap may reflect earnings differences between Army veterans and those of
other branches of the military. SSA data may also fail to capture some earnings sources, including self-employment and non-covered work. Overall, our comparison of OEMA and Census data provides some assurance that the OEMA sample is representative of the target population of Vietnam-era Army veterans, measured in terms of age, race, labor force participation, and earnings.
2.5
The Impact of the Agent Orange Policy on Receipt of Disability Benefits
The Agent Orange policy spurred a steep rise in Disability Compensation enrollment, and may
potentially have had spillover effects on enrollment in other federal disability programs as well. We begin by
estimating impacts on DC enrollment, followed by SSDI and SSI enrollment, and finally, total federal
disability benefits.
A.
Enrollment in Veterans Disability Compensation
Figure 2.3 plots the fraction of BOG and NOG veterans receiving DC benefits in September of each year
from 1998 through 2006. Prior to the Agent Orange change in 2001, DC enrollment was rising somewhat
more rapidly among BOG than NOG veterans. But DC enrollment among BOG veterans accelerated
substantially after 2001. Column 2 of panel A in Table 2.4 shows that BOG DC enrollment increased by 0.4
62 The Census data do not allow us to distinguish among veterans according to their branch of military service. To the extent that Army veterans are different from their counterparts serving in the Navy, Air Force, Marines, or Coast Guard, we would expect some differences between the Census and OEMA samples.
percentage points per year between 1998 and 2000 (13.5 to 14.3) and by 1.6 percentage points annually
between 2001 and 2006 (15.0 to 23.0). In contrast, DC enrollment growth rates among NOG veterans
remained small and relatively steady, increasing from 0.1 percentage points per year between 1998 and 2000
(6.3 to 6.5) to 0.2 percentage points per year between 2001 and 2006 (6.6 to 7.6).63 Data from the National Health Interview Survey (Schiller et al., 2010) indicate that the fraction of individuals with diabetes varies substantially by race, with rates among blacks substantially higher than among whites. Consistent with this
fact, an examination of the trends in DC enrollment in Table 2.4 reveals substantial differences by race in the
BOG sample, with DC enrollment increasing about 40 percent more among BOG nonwhites (19.3 to 30.2,
10.9 percentage points) than among BOG whites from 2001 to 2006 (14.4 to 22.1, 7.7 percentage points).
These raw differences in DC enrollment trends between BOG and NOG veterans may reflect differences
in veteran characteristics in addition to any impact of the Agent Orange policy. To account for these factors,
we estimate a set of OLS models that regress DC enrollment on a full set of controls for veterans' year-of-birth, race, and AFQT score quintile.64 For consistency with the subsequent analysis of labor market
outcomes, our DC variables are calculated as means over five to nine veterans grouped at the level of SSA
reporting cells. 65 We estimate the following equation for 1998 through 2006, weighting each cell-year by the
number of individuals in the cell:
(2.1)   Yjt = αt + γ0 × BOGj + Σ(s=1999 to 2006) γs × BOGj × 1[t = s] + Xj'βt + εjt
The outcome variable Yjt is the percentage of cell j enrolled in the DC program in September of year t, and
63 The policy change took effect in July of 2001 and we measure DC enrollment in September. Thus, our 2001 data are arguably more "pre" than "post," though it is likely that the policy change did contribute to DC enrollment growth between September 2000 and September 2001 in our data.
64 Veterans with low AFQT scores are more likely to enroll in DC (Autor et al., 2011), and average scores differ slightly between BOG and NOG veterans.
65 SSA outcomes are only available at the cell level due to confidentiality restrictions, as discussed above. Because we apply OLS models to cell means and weight by the number of individual observations in each cell, the cell-level estimates will in most cases be algebraically identical to those that we would obtain if cells were instead disaggregated to individual-level rows. The one exception to this dictum arises from the fact that ten percent of cells have more than one AFQT quintile represented within the cell. In these cases we assign the cell to the AFQT quintile nearest to the cell's mean AFQT quintile, meaning that the cell-level and corresponding individual-level regressions will differ slightly.
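The weighting equivalence described in footnote 65 can be verified numerically. The following is an illustrative sketch on synthetic data (none of it drawn from our sample): when regressors are constant within cells, OLS on cell means weighted by cell size reproduces individual-level OLS exactly.

```python
import numpy as np

# Numerical check of the footnote-65 claim: size-weighted OLS on cell means
# equals individual-level OLS when regressors are constant within cells.
# All data below are synthetic, for illustration only.
rng = np.random.default_rng(1)
n_cells = 50
X_cell = np.column_stack([np.ones(n_cells), rng.integers(0, 2, n_cells).astype(float)])
sizes = rng.integers(5, 10, n_cells)        # cells of 5 to 9 "veterans"

# Individual-level data: replicate each cell's regressors, add individual noise.
X_ind = np.repeat(X_cell, sizes, axis=0)
y_ind = X_ind @ np.array([2.0, 1.5]) + rng.normal(0, 1, sizes.sum())

# Individual-level OLS.
b_ind, *_ = np.linalg.lstsq(X_ind, y_ind, rcond=None)

# Cell-level WLS: collapse y to cell means, scale rows by sqrt(cell size).
y_cell = np.array([g.mean() for g in np.split(y_ind, np.cumsum(sizes)[:-1])])
sw = np.sqrt(sizes)
b_cell, *_ = np.linalg.lstsq(X_cell * sw[:, None], y_cell * sw, rcond=None)

assert np.allclose(b_ind, b_cell)           # identical up to floating-point error
print(np.round(b_ind, 3))
```

The equivalence holds because the weighted cell-level normal equations sum the same terms as the individual-level ones; it breaks only for the small share of cells whose members span more than one AFQT quintile.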
BOGj is an indicator variable that is set equal to one if veterans in cell j are in the BOG sample and is otherwise equal to zero (cells include either all BOG or all NOG veterans). The term αt is a vector of nine indicator variables, one for each year considered, and Xj is a vector of 14 indicator variables corresponding to the
possible values of year-of-birth, AFQT quintile, and race.66 We interact each of these 14 indicator variables
with nine year-specific indicator variables to account for differential levels and growth rates in DC enrollment by age, race, or AFQT level during the 1998 through 2006 period. The coefficient γ0 corresponds to the (conditional) baseline DC enrollment gap between BOG and NOG veterans in the base year of 1998, while the coefficient vector γt estimates the difference in this gap in each subsequent year, 1999 through 2006, relative to the enrollment gap in 1998.
The statistically significant estimate of 6.97 for γ0 in the first column of panel A, Table 2.5 implies a 7 percentage point gap in DC enrollment between the BOG and NOG samples in the baseline year (1998) after controlling for race, year-of-birth, and AFQT quintile, quite similar to the unconditional estimated difference of 7.2 percentage points in columns 1 and 2 of Table 2.4 (13.5 for BOG versus 6.3 for NOG). The next eight rows of the first column in Table 2.5 display the estimates for γt in each year from 1999 through 2006. The statistically significant estimates of 0.33 and 0.59 for γ1999 and γ2000 imply that DC enrollment was increasing more rapidly (by about 0.3 percentage points annually) for the BOG than the NOG sample prior to the 2001 policy change. Beginning in 2001, these coefficients increase much more rapidly, by about 1.4 percentage points per year, reaching a cumulative differential of 7.98 percentage points by September of 2006. As
shown in panels B and C of Table 2.5, the point estimates differ only modestly by birth cohort (1946-48 and
1949-51), with slightly larger effects for the older than the younger group (8.28 and 7.65 percentage points,
respectively).
The sharp break in trend for DC enrollment among BOG relative to NOG veterans motivates a
66 In our sample there are six possible values of year-of-birth (1946 through 1951), two possible values of race (white and nonwhite), and six possible values of AFQT quintile (we group those with a missing AFQT score into a sixth category). Given that there are nine years of data used in these estimates, we include 126 interactions. Veterans within each cell have the same year-of-birth and race (by construction), and the vast majority also have the same AFQT quintile. Data for each veteran are included in each of the nine years unless the veteran dies at some point between 1998 and 2006, in which case the veteran is dropped from the sample. Year-of-death is one of the variables used to construct the cells, so typically the entire cell is dropped.
parameterized version of equation (2.1), found in even-numbered columns of Table 2.5. For this specification, we replace the full set of year-by-BOG interactions with two linear time trends: a pre-2001 trend and a post-2001 trend change, estimated relative to the pre-2001 trend:
(2.2)   Yjt = αt + γ0 × BOGj + δ0 × BOGj × (t − 1998) + δ1 × BOGj × (t − 2001) × 1[t ≥ 2002] + Xj'βt + εjt
Here, δ0 captures the pre-existing trend in BOG relative to NOG DC participation just prior to the policy change, while δ1 estimates any additional change in the BOG relative to the NOG trend following the policy. We define 2002 to be the first post-policy year in this specification given that most of the time from September 2000 to September 2001 occurred before the policy took effect in July 2001. To interpret δ1 as the causal effect of the Agent Orange policy change on DC enrollment, we must assume that the pre-existing trend in the difference in DC enrollment between the BOG and NOG samples would have continued after 2001.
The estimates in the even-numbered columns of Table 2.5 highlight the significant trend increase in DC enrollment among BOG relative to NOG veterans following the 2001 policy change. Because we treat 2001 as a pure pre-policy year, the estimate of δ0 in each case is slightly larger and the estimate of δ1 slightly smaller than is suggested by the year-by-year estimates. The magnitude of the trend break is similar between the two birth cohort groups (1946-48 and 1949-51). In panel A of Table 2.6, we split the sample by race (white/nonwhite) and by AFQT score. The acceleration in DC enrollment is largest among BOG veterans who are nonwhite and have relatively low (below the 40th percentile in the sample) AFQT scores.67
Figures 2.4 and 2.5 relate these enrollment trends to the Agent Orange policy. Figure 2.4 documents a
steep increase in the likelihood that BOG (but not NOG) veterans receive compensation for diabetes, both in
the year of current DC receipt and in the year of initial DC enrollment. As documented in Panel C of Table
2.4, the fraction of DC beneficiaries receiving compensation for diabetes among BOG veterans rose from 2.4
to 28.1 percent (25.7 points) between 2001 and 2006 versus 1.6 to 6.4 percent among NOG veterans (4.8
67 We drop cells with missing AFQT scores for these specifications.
points). 68 Yet, as shown in Figure 2.5, there was only a modest increase in the fraction of veterans receiving
DC benefits absent a diabetes award. Thus, the rapid growth in DC enrollment appears substantially
accounted for by an influx of new diabetes awards. Figure 2.6 underscores that the increase in DC enrollment
was similarly rapid among older (YOB 1946-48) and younger (YOB 1949-51) Vietnam veterans in our
sample, as indicated by the regression estimates in Table 2.5.
The estimates so far capture the extensive margin impact of the Agent Orange policy on DC
participation. Panels B and C of Table 2.6 consider outcome measures that combine responses along the
extensive (enrollment) and intensive (rated severity) margins: Combined Disability Ratings (CDRs) and
annual benefits payments, both of which are continuous measures of DC participation. 69 Both CDRs and
annual benefit payments rose significantly more rapidly for BOG than NOG veterans after 2001. Panel B of
Table 2.6 shows that prior to the Agent Orange policy, average CDRs of BOG relative to NOG veterans were
rising at 0.34 percentage points annually; after 2001, this differential trend tripled to 1.00 percentage points
annually (0.34 + 0.66). Similarly, panel C indicates that the year-over-year increase in annual mean DC
benefits paid to BOG relative to NOG veterans rose from $113 prior to the policy change to $305 per year
after 2001 ($113 + $192). Consistent with the estimates for DC enrollment by race and AFQT score, the
remaining columns of panels B and C demonstrate that CDRs and benefits payments increased more rapidly
among nonwhite than white BOG veterans, and more rapidly among low-AFQT than high-AFQT BOG
veterans.
B. Enrollment in Other Federal Disability Programs
A significant constraint that some workers face when applying for SSDI benefits is that they are unlikely
68 There are three likely reasons that DC receipt for diabetes increased in the NOG sample. One is that our data only imperfectly classify BOG and NOG veterans; some veterans categorized as NOG may in fact have served in theatre and hence presumptively qualified for service-connectedness for their type 2 diabetes. A second is that administration of DC benefits appears to be somewhat discretionary: even prior to the Agent Orange decision in 2000, 1.4 percent of BOG and 0.7 percent of NOG DC recipients were receiving compensation for service-connected diabetes (panel C of Table 2.4). Finally, some NOG veterans may have served in Korea in 1968 or 1969. This source of slippage does not invalidate our identification strategy provided that the post-2001 trend break in DC receipt among BOG versus NOG veterans is induced by the Agent Orange policy change, which seems quite plausible.
69 Veterans receiving the IU rating are coded with a CDR of 100 since they receive benefits at the 100 percent disability level.
to qualify for an award if they participate gainfully in the labor force while their case is being decided (Autor
et al., 2015b). Knowledge of this fact may deter workers with residual work capacity from applying for
benefits since they would have to forfeit potential earnings in pursuit of an uncertain benefit. Since, as we
document below, many veterans did leave the labor force after becoming eligible for DC benefits, it is
plausible that a subset that would otherwise have been deterred from seeking SSDI benefits would choose to
pursue these benefits as an indirect consequence of the Agent Orange policy.70
To examine these potential spillovers, we estimate models analogous to equation (2.2) where the
outcome variable is the probability of SSDI receipt among BOG and NOG veterans (again measured at the
level of SSA reporting cells). Reflecting the longer sample window available for the SSA outcomes, these
models include two additional "pre" years, 1996 and 1997, and one additional "post" year, 2007. Panel A of
Table 2.7 shows that, like DC enrollment, SSDI receipt was trending upward faster among BOG than NOG
veterans by 0.13 percentage points per year prior to the 2001 policy change. But this trend accelerated after
2001, increasing to 0.25 percentage points (0.13 + 0.12) annually. Notably, the magnitude of the estimated
effect on SSDI is only about 10 percent as large as the corresponding estimate for DC enrollment (compare
column 1, panel A of Tables 2.5 and 2.7).71 These results suggest that the Agent Orange policy likely spurred
additional SSDI enrollment, which may in turn have magnified any reduction in labor force participation
among eligible veterans.
In panel B of Table 2.7, we estimate models for receipt of SSI. Here, the estimates point in the opposite
direction: SSI enrollment was growing more rapidly among NOG than among BOG veterans prior to 2001,
and this trend accelerated after 2001, in all likelihood because the additional DC transfer income made some
veterans ineligible for means-tested SSI benefits. The magnitude of the estimated break in trend for SSI is
only one-tenth as large as the corresponding estimate for SSDI, however.
The final panel of Table 2.7 estimates equation (2.2) for disability income combining benefits payments
70 Borghans et al. (2014) find related evidence of program spillovers in disability receipt. On average, Dutch DI recipients offset a €1.00 loss in DI benefits with a €0.30 increase in benefits from other social support programs. Duggan et al. (2007) find evidence that reductions in Social Security retirement benefits stemming from the rising U.S. Full Retirement Age have led to an increase in SSDI enrollment among cohorts facing a higher full retirement age.
71 In estimates not displayed, we find impacts on SSDI receipt that are similar for the 1946-48 and 1949-51 cohorts.
from all three federal disability programs: DC, SSDI, and SSI. These estimates only include data from 1998
through 2006 since we do not have DC data for 1996, 1997, and 2007. Inclusion of SSDI and SSI benefits
with DC benefits raises the differential trend increase in benefits payments for BOG relative to NOG
veterans by approximately an additional 15 to 20 percent (compare to panel C of Table 2.6). Across all
veterans in our sample, combined disability benefits were rising by $128 annually among BOG relative to
NOG veterans prior to the 2002 Agent Orange policy change. Commencing in 2002, this differential trend
rose to $354 ($128 + $226). This average masks considerable heterogeneity. For veterans with high CDRs,
DC benefits payments substantially exceed SSDI benefits for all but the highest (pre-disability) earners;
moreover, DC benefits are not subject to federal taxation. Thus, the bulk of the impact of the Agent Orange
policy on veterans' transfer income accrues through the DC program.
2.6
Consequences for Labor Force Participation
We now consider impacts of the Agent Orange policy on veterans' employment and earnings. Using models analogous to (2.1) and (2.2), we test whether the fraction of BOG veterans in the labor force declined
differentially relative to NOG veterans following the 2001 policy change. We again include data from 1996,
1997, and 2007 because of the longer sample window available for SSA outcomes.
A.
Impacts on labor force participation
Table 2.8 considers the impact of the Agent Orange policy on labor force participation, measured as the
percentage of living veterans in an SSA reporting cell who have positive labor earnings. The first column shows that labor force participation in the baseline year of 1996 was slightly lower (by 0.32 points) among BOG than among NOG veterans. This participation gap was modestly expanding (i.e., becoming
more negative) in the early part of our sample period, falling by an additional 0.81 points between 1996 and
2001. This expansion accelerated after 2000, with the BOG relative to NOG labor force participation rate
falling by an additional 2.02 percentage points between 2001 and 2007. Column 2, which fits the
parameterized model in equation (2.2), shows the rate of divergence between BOG and NOG veterans
roughly doubled after the Agent Orange decision, from 0.15 percentage points per year to 0.33 percentage
points (0.15 + 0.18) annually after 2001.
Coupled with the evidence in Table 2.5 on DC enrollments, the results in Table 2.8 suggest that the
differential increase in DC enrollment among BOG veterans spurred a reduction in their labor supply. A
natural alternative interpretation, however, is that BOG veterans were in worse health than NOG veterans, perhaps due to the rigors of in-theatre military service, leading to comparatively earlier retirements in the 2000s for reasons unrelated to the 2001 policy change. If so, the accelerating decline in LFP among BOG veterans after 2001 should primarily affect the oldest Vietnam veterans, with younger veteran cohorts exhibiting a similar LFP falloff as they approached these early retirement ages in subsequent years. Figure
2.7 and the remaining two panels of Table 2.8 explore these competing hypotheses by again subdividing the
sample into two birth cohorts of average ages 54 and 51 years at the time of the policy change. Both the
figure and the table reveal that labor force participation of BOG relative to NOG veterans for both sets of
birth cohorts dropped sharply soon after the policy change. The year-over-year decline in BOG LFP relative
to NOG LFP doubled after 2001 for both older (born 1946-48) and younger (born 1949-51) cohorts, rising
from 0.17 to 0.36 percentage points (0.17 + 0.19) among the former group, and from 0.14 to 0.32 percentage
points (0.14 + 0.18) among the latter group. This trend break in BOG relative to NOG labor force
participation occurring simultaneously across birth cohorts after 2001 suggests that benefits expansion rather
than early retirements explains the sharp drop in BOG labor force participation following the Agent Orange
decision.
The upper panel of Table 2.9 explores these employment patterns by race and AFQT subgroup. Declines
in LFP generally mirror uptake of DC across demographic groups. Consistent with the Table 2.6 findings for
DC enrollment, the fall in labor force participation among nonwhite and among low-AFQT BOG relative to
NOG veterans following the 2001 policy change is more pronounced than among white and higher-AFQT
veterans. These differences across groups are not statistically significant, however.
B. Impacts on labor market earnings
Alongside its extensive margin impact, receipt of DC benefits may spur veterans to reduce their work
hours or switch from full-time to part-time employment, thus reducing earnings conditional on ongoing
employment. We explore the effect of the Agent Orange policy on total labor force earnings by estimating
equation (2.2) for the log of mean cell earnings, excluding (of necessity) cells where all veterans have exited
the labor force.72 Panel B of Table 2.9 shows that earnings averaged about 2.6 log points lower among BOG
than NOG veterans in our baseline year, and this gap was expanding by about 0.2 log points per year through
2001. The earnings gap grew significantly more rapidly following the 2001 policy change, by about 0.7 log points annually (0.23 + 0.45). Once again, the effects are larger for nonwhite and low-AFQT veterans.
Since these total earnings impacts combine extensive margin (participation) and intensive margin (hours
and wage) responses, we cannot directly infer whether they reflect changes in earnings among incumbents or
merely a reduction in the number of veterans working. The evidence in panel A of Table 2.9 suggests,
however, that minority and low-AFQT veterans were particularly likely to exit the labor force upon receiving
DC benefits. If we hypothesize more generally that veterans with lower potential earnings are most likely to
exit the labor force in response to increased transfer income, we can compare extensive margin and total
earnings responses to assess evidence for intensive margin responses. Concretely, suppose that earnings
among those dropping out of the labor force are below the average of those still working within the cell; that is, income effects are larger for low-earnings workers. In this case, the decline in log cell earnings will be
smaller than the decline in log cell labor force participation. Conversely, if the decline in log cell earnings
exceeds the decline in log cell participation, this indicates that either earnings are also falling on the intensive
margin or, contrary to our assumption, higher-earning workers are exiting employment disproportionately.
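This composition logic can be verified with a small numeric example (earnings figures are hypothetical; cell means are computed as in footnote 72, dividing total earnings by all cell members, including non-workers):

```python
import math

# Hypothetical five-person cell (earnings in $K). Compare the change in log
# mean cell earnings with the change in log labor force participation when
# one worker exits; mean earnings divides total earnings by all members.
def log_changes(earnings, dropout_idx):
    n = len(earnings)
    remaining = [e for i, e in enumerate(earnings) if i != dropout_idx]
    d_log_earnings = math.log(sum(remaining) / n) - math.log(sum(earnings) / n)
    d_log_lfp = math.log(len(remaining) / n)  # log change from full participation
    return d_log_earnings, d_log_lfp

cell = [10, 20, 25, 30, 40]
low_exit = log_changes(cell, 0)    # below-average earner exits
high_exit = log_changes(cell, 4)   # above-average earner exits
# Low earner exiting: earnings fall less (in logs) than participation.
# High earner exiting: earnings fall more (in logs) than participation.
print(low_exit[0] > low_exit[1], high_exit[0] < high_exit[1])  # True True
```

When the $10K earner exits, log mean earnings falls by about 0.08 while log participation falls by about 0.22; when the $40K earner exits, log earnings falls by about 0.39, exceeding the participation decline.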
To facilitate earnings and employment comparisons, panel C of Table 2.9 reports estimates where the
dependent variable is the log of the fraction of veterans in a cell working (rather than simply the percentage
working, as in panel A). Comparing across panels B and C reveals that the decline in relative earnings of
72 We use the log of the mean rather than the mean of the log because of the cell-level aggregation of our earnings data. We do not, however, adjust cell means for non-participation; if a cell contains five veterans each earning $25K annually and one leaves the labor force, mean cell earnings falls to $20K. Thus, this measure incorporates intensive and extensive margin adjustments.
BOG versus NOG veterans after 2001 is, in most cases, slightly larger than the relative decline in labor force
participation. In column 1, for example, the relative decline in log earnings of BOG relative to NOG veterans
expands by 0.45 log points per year after 2001 whereas the relative decline in log LFP increases by only 0.36
log points per year. This pattern also holds for black and white veterans considered separately, though it does
not hold for lower and higher-AFQT veterans. We infer that the Agent Orange policy likely induced a
mixture of labor force exit and earnings reductions among beneficiaries, though the evidence on the latter
margin is far from clear cut.
2.7 Comparing Labor Supply and Enrollment Impacts: Instrumental Variables Estimates
Motivated by the evidence that the Agent Orange policy both raised DC enrollment and lowered earnings
among eligible veterans, we now use the Agent Orange policy as an instrumental variable for receipt of DC
benefits. This instrumental variables approach provides a valid estimate of the causal effect of DC benefits
on labor supply under the assumption that the Agent Orange policy only affects employment and earnings
through its impact on benefits enrollment and transfer payments. This exclusion restriction is untestable of
course, but the panoply of findings above confers credibility: we find a sharp rise in DC receipt, DC
payments, and total disability income among BOG relative to NOG veterans following adoption of the Agent
Orange policy; the rise in take-up was equally steep and pronounced among younger and older veterans, and
was greatest among nonwhite and low-AFQT veterans; and in all cases, take-up responses are paralleled by
differential employment and earnings drops among BOG relative to NOG veterans that occur among both
older and younger veterans, and which are slightly more pronounced among nonwhite and low-AFQT vets.
Table 2.10 presents instrumental variables estimates of equation (2.2) for the impact of DC participation on labor force participation, where the endogenous variable in panel A is the percentage of a cell enrolled in DC.73
The 2SLS estimate in column 1 finds that each percentage point increase in DC enrollment reduces the fraction of veterans working by 0.18 points; that is, approximately one veteran exits the labor force for
73 We limit the sample to the years 1998 through 2006 for which DC measures are available, rather than the longer sample window of 1996 through 2007 for which SSA earnings data are available. The instrumental variable for DC enrollment is BOGi x (t - 2001) x 1[t > 2002].
every five veterans newly awarded DC benefits. Comparing across rows of this table, we find slightly larger
employment effects for nonwhite and low-AFQT veterans, consistent with our maintained hypothesis that
participation among these groups is more elastic. Notably, the (scaled) labor force participation impact is
estimated to be slightly (though not significantly) larger for younger than older veterans, again underscoring
that the results are unlikely to be simply driven by differential trends in early retirement among BOG relative
to NOG veterans.
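In this just-identified grouped setting, the 2SLS coefficient is the ratio of the instrument's reduced-form effect on the outcome to its first-stage effect on DC enrollment. The following sketch is entirely synthetic; only the -0.18 effect size is taken from Table 2.10, and the variable names are our own.

```python
import numpy as np

# Just-identified 2SLS as a ratio of covariances: instrument z (standing in
# for the BOG post-2001 trend interaction), treatment d (DC enrollment), and
# outcome y (labor force participation). All data are simulated.
rng = np.random.default_rng(0)
z = rng.normal(size=50_000)
d = 5.0 + 1.0 * z + rng.normal(size=z.size)          # first stage
y = 90.0 - 0.18 * d + 0.1 * rng.normal(size=z.size)  # structural equation
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]    # Wald / 2SLS estimate
print(round(beta_iv, 2))  # approximately -0.18
```

Because the instrument is excluded from the structural equation, the covariance ratio recovers the planted -0.18 effect up to sampling error.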
By specifying DC participation as the endogenous variable, the panel A estimate does not account for policy-induced increases in DC benefits along the intensive margin: higher benefits among incumbents, and greater potential benefits among new enrollees. We relax this restriction in panel B by making our
endogenous variable the sum of disability income (DC, SSDI and SSI benefits). This variable captures
changes in transfer payments among both new enrollees and incumbents.74 The panel B estimates find that
each thousand dollars in disability benefits payment reduces the probability of a veteran working by
approximately 0.80 percentage points, with slightly larger impacts among nonwhite veterans (0.97 points)
and slightly smaller impacts among older veterans (0.70 points). Scaling these estimates by the mean rate of
labor force participation (84.7 points) and mean year 2000 earnings (in constant $2013 dollars) of $52.5K
(Appendix Table 2.4) implies an extensive margin labor force participation elasticity of -0.50. This is similar
in magnitude to the estimates reported by Boyle and Lahey (2010), who study the labor supply of a relatively
comparable set of veterans who were nearing retirement in the mid-1990s. We also cannot rule out that the
participation elasticity we obtain incorporates both pure (non-incentive) income effects from increased
transfer income and additional incentive effects stemming from the potential availability of IU and SSDI
benefits. Fully distinguishing these incentive and non-incentive effects remains a ripe topic for future work.
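The elasticity figure can be reproduced from the quantities cited in this paragraph:

```python
# Participation response: 0.80 pp decline per $1,000 of disability benefits.
# Scale by the mean LFP rate (84.7 pp) and mean year-2000 earnings ($52.5K,
# constant $2013) to obtain the extensive margin participation elasticity.
dlfp_per_1000 = 0.80          # pp decline in LFP per $1,000 of benefits
lfp_mean = 84.7               # mean labor force participation rate (pp)
earnings_mean = 52_500        # mean year 2000 earnings, $2013

pct_change_lfp = dlfp_per_1000 / lfp_mean      # proportional LFP change
pct_change_income = 1_000 / earnings_mean      # $1,000 as share of earnings
elasticity = -pct_change_lfp / pct_change_income
print(round(elasticity, 2))  # -0.5
```

A $1,000 transfer is about 1.9 percent of mean earnings and induces about a 0.94 percent proportional fall in participation, giving the reported elasticity of roughly -0.50.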
To what degree is this causal effect driven by changes in labor force participation among incumbent DC recipients (many of whom received higher combined disability ratings and corresponding benefit increases as a consequence of the Agent Orange policy) rather than by changes in participation among new enrollees?
Appendix Table 2.5 probes this question by comparing the evolution of CDRs among veterans according to
74 Because some cells receive no disability payments in a given year, we specify the disability payment measure in real dollars rather than taking the logarithm.
DC enrollment status as of 1998 (the earliest year for which we have enrollment data). Column 1 reports estimates of equation (2.2) for all veterans in our sample where the left-hand side variable is a veteran's combined disability rating. Column 2 includes only veterans enrolled in the DC program as of 1998, while column 3 includes only veterans not enrolled in DC as of 1998. Comparing the column 2 and 3 estimates reveals
that CDRs were increasing more rapidly among 1998 BOG incumbent DC recipients than among their NOG
counterparts prior to the policy change, but that there is only a modest acceleration in this trend after 2001.
Specifically, CDRs were increasing by about 0.8 percentage points more annually among the BOG
incumbents than the NOG incumbents between 1998 and 2001, and this differential increased only modestly
to 1.1 percentage points annually from 2002 forward (0.84 + 0.26). In contrast, the acceleration among new BOG entrants after 2001 (column 3) is much larger, increasing from about 0.2 to about 1.0 percentage points annually after 2001 (0.22 + 0.73). Thus the Agent Orange policy change generated a much larger impact on
the DC status of BOG veterans who had not already enrolled in DC prior to the policy change than it did on
incumbent recipients, suggesting that labor supply impacts are likely driven primarily by new DC entrants.
Panel B of Appendix Table 2.5 complements this evidence by comparing labor force participation trends
between cells with a high density of DC incumbents and those with a low density of DC incumbents.75
Column 1 reports estimates of equation (2.2) for all veterans in our sample, while column 2 restricts our
sample to cells where at least one third of veterans ("high density") were enrolled in DC as of 1998 and
column 3 restricts our sample to cells where less than one third of veterans ("low density") were enrolled in
DC as of 1998.76 Following the 2001 policy change, the rate of divergence among BOG relative to NOG
veterans actually attenuated slightly for cells with a high density of incumbents (from 0.35 to 0.29 points per
year), but nearly tripled for cells with a low density of incumbents (from 0.11 to 0.32 points per year). This
pattern again suggests that the differential decline in labor force participation among BOG relative to NOG
75 While the SSA outcome cells never combine BOG and NOG veterans, they often combine a mixture of DC and non-DC enrollees, and this fraction changes over time as enrollment evolves. It would have been invalid to construct cells based on eventual DC enrollment, of course, since DC enrollment is an outcome of the analysis.
76 We cannot fully distinguish labor force participation effects for incumbents and potential entrants because labor force participation rates are determined at the cell level and cells are not stratified on DC enrollment. For example, consider a cell with two veterans who were enrolled in DC as of 1998 and three veterans who were not enrolled as of 1998. If all five veterans had positive earnings in 1998, but only four had positive earnings as of 2006, we are unable to determine whether the veteran who dropped out of the labor force was enrolled in DC as of 1998 or not.
veterans after 2001 is driven primarily by veterans induced to enroll in DC by the Agent Orange policy rather
than by policy-induced changes in behavior among incumbent recipients.
2.8 Impacts on Total Earnings
Since the Agent Orange policy precipitated a rise in transfer payments and fall in employment and
earnings among eligible veterans, its net impact on incomes of BOG relative to NOG veterans is ambiguous.
We compare trends in total measured income (earnings plus the sum of DC, SSDI and SSI benefits) between
BOG and NOG veterans in Table 2.11, again limiting the sample to the years 1998 through 2006, for which we have all
earnings and benefits variables available. Since we have already established that employment and earnings
respond endogenously to policy-induced shifts in disability transfer income (as theory would predict), the
current exercise should be viewed as descriptive rather than inferential; the net impact of the Agent Orange
policy on veterans' income combines exogenous policy impacts and endogenous behavioral responses.
The first column of Table 2.11 (panel A) shows that, at the start of the sample window in 1998, measured
incomes of BOG veterans were slightly lower than those of NOG veterans, with gaps of 0.75 log points
overall, 0.80 log points among nonwhites, and 0.70 log points among whites.77 Notably, incomes were
significantly higher (by 3.7 log points) among low-AFQT BOG veterans and significantly lower (by 2.5 log
points) among high-AFQT BOG veterans (relative to their NOG counterparts). Recall, we saw earlier in
panel B of Table 2.9 that labor earnings of BOG veterans averaged up to 5 log points below that of NOG
veterans. Thus, comparing Tables 2.9 and 2.11 reveals that disability transfer income largely made up the
difference between BOG and NOG veterans, and resulted in substantially higher income for low-AFQT BOG
veterans relative to low-AFQT NOG veterans, consistent with the findings of Angrist et al. (2010) for low-education veterans.
The second and third rows of Table 2.11 summarize the evolution of the BOG/NOG income differential
before and after the Agent Orange policy. Prior to the policy change, BOG veterans were on a slightly more
positive total income trend than NOG veterans, with average annual incomes in the full sample rising by 0.20
77 These reported earnings are covariate adjusted since they derive from the Table 2.11 regression estimates. Raw differences are similar, however, as may be inferred from Table 2.3.
log points annually in relative terms. This differential trend steepened significantly following the policy
change, jumping from 0.20 to 0.71 log points (0.20 + 0.51) annually. For nonwhites and low-AFQT veterans,
the increase in the trajectory was steeper, rising from 0.27 to 1.31 log points annually for the former group,
and from 0.33 to 1.19 log points for the latter. While the Agent Orange policy sharply reduced employment
and earnings among eligible veterans, the Table 2.11 estimates make clear that the policy-induced rise in
transfer income more than offset these earnings reductions.
As a final characterization of these net policy and behavioral effects, the lower panel of Table 2.11
summarizes the net impact of DC enrollment on total earnings, where DC enrollment is instrumented with
the Agent Orange policy. The outcome variable for these estimates is cell total income (earnings plus disability benefits) and the endogenous variable is again DC enrollment. The point estimate of 0.52 for the full sample implies that if a cell increased from 0 to 20 percent enrolled in DC, average total veteran
income in the cell would increase by about 10.4 percent. In interpreting this sizable impact, note that we
found above that only one in five (18 percent of) newly enrolled veterans exited the labor force as a
consequence of the Agent Orange policy. For the remaining four-fifths, enrollment in DC raised transfer
income while generating only minimal offsetting reductions in labor income, thus unambiguously raising
total income. Of course, average incomes of Vietnam era veterans were modest in the early 2000s and
remained modest net of transfer benefits following the Agent Orange policy change. Nevertheless, this policy
had the unusual effect of increasing average veteran incomes, inclusive of the foregone earnings of the
approximately one in five newly enrolled DC recipients induced to exit the labor force.
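The arithmetic behind these magnitudes, as a quick check:

```python
# IV point estimates from the text: each pp of DC enrollment raises cell
# total income by 0.52 log points, while each pp of enrollment reduces the
# fraction working by 0.18 pp (roughly one exit per five new enrollees).
income_per_pp = 0.52   # log points of total income per pp of DC enrollment
exit_per_pp = 0.18     # pp decline in LFP per pp of DC enrollment

enrollment_rise = 20   # a cell going from 0 to 20 percent enrolled
income_gain = income_per_pp * enrollment_rise   # ~10.4 log points (~10.4%)
exits = exit_per_pp * enrollment_rise           # 3.6 pp fewer participating
print(round(income_gain, 1), round(exits, 1))  # 10.4 3.6
```

A 20 point enrollment rise thus raises average cell income by about 10.4 percent while inducing roughly 3.6 points of the cell to exit the labor force.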
2.9 Conclusion
This chapter provides evidence that the policy-induced increase in enrollment in the VA's Disability
Compensation program had a significant effect on the labor supply of Vietnam-era veterans. We estimate
that 18 percent of individuals who became eligible for the DC program as a result of the policy change
dropped out of the labor force. Analyses that explore the effect on average earnings support the conclusion
that DC enrollment substantially lowered labor supply among Vietnam-era veterans who served in the
Vietnam theatre during the conflict there. These effects appear to be driven by those made newly eligible for
DC benefits rather than by increases in benefits among those already receiving DC prior to the 2001 policy
change. The increase in transfer income more than offsets the reduction in earnings, so that average income
among BOG veterans increased relative to their counterparts who did not serve in theatre. Combining policy-induced increases in transfer income and reductions in labor force participation, we estimate an extensive margin labor supply elasticity of -0.50, which is somewhat larger than consensus estimates, though not
implausible for this near-elderly population of Vietnam era veterans in diminished health.
It is also likely that the income effect of DC income on employment and earnings is augmented by the
interaction between DC enrollment and veterans' potential eligibility for Individual Unemployability and
SSDI benefits, both of which provide strong incentives against gainful labor force participation. Consistent
with this hypothesis, we find that the Agent Orange decision affected enrollment in two other federal
disability programs, raising enrollment in SSDI by approximately one recipient for every ten veterans newly
enrolled in DC, and slightly reducing enrollment in the means-tested SSI program (though the estimated
effects are an order of magnitude smaller than for SSDI). In addition, 14 percent of Vietnam era veterans
within our sample received IU status by 2006. Among veterans who enrolled in DC the year after the Agent
Orange policy change, fully one-third received IU status or a CDR of 100 by 2006. Since IU benefits are
only available to veterans with earnings at or below the poverty threshold for a single individual, the rapid
growth of IU designations among newly enrolled DC beneficiaries underscores the potential for the
graduated DC benefit program to generate unintended labor supply impacts. Analyzing the dynamic
interaction between these incentive and non-incentive effects of disability benefit programs is a worthy goal
for future research.
2.10 Chapter 2 Appendix
A. Construction of Cells in the BOG and NOG Samples
To construct cells for matching individuals in the BOG and NOG samples with SSA data, we group
individuals based on their values of certain background characteristics, including gender, race, year of birth,
and so forth. The list of variables used to form cells, ranked in the order in which the grouping occurred, is:
1. Gender (male or female)
2. Race (white or nonwhite)
3. Death year (e.g. 1985)
4. Year-of-birth (e.g. 1946)
5. Start year (e.g. 1966)
6. Education at entry (e.g. 0, 1, 2, 3, 4 for hsd, hsg, smc, clg, clg-plus)
7. AFQT score quintile (e.g. 2)
8. Loss year (e.g. 1972)
9. Region of residence (e.g. Northeast)
Before forming cells, we determined which individuals could be verified upon matching to the SSA data.
A match would be verified if the social security number, date of birth, and at least six letters of the last name
could be matched in the two data sets. We construct cells from the verified BOG and NOG samples so that
there are between five and nine individuals in each cell. Each cell consists only of individuals in the BOG or
in the NOG. The number of variables used in the grouping varies across cells. This occurs because in some
cases, a cell reaches a size of five to nine after grouping on a relatively small number of variables. If a cell contains between 10 and 29 individuals, we do not group on additional variables, but instead split the cell into the maximum number of cells of size five or more after sorting on the next matching variable, to maximize similarity within the cells. If a cell has fewer than five individuals then we re-merge it with an adjacent cell. If a cell with fewer
than five individuals is merged with an adjacent cell to form a new cell with more than nine individuals, the
merged cell is split in two.
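The grouping and splitting rules above can be sketched in code. This is a stylized version with hypothetical names and synthetic data; the handling of undersized and oversized groups is simplified relative to the exact procedure.

```python
import itertools

MIN_SIZE, MAX_SIZE = 5, 9

def chunk(members):
    """Split a sorted group of >= MIN_SIZE people into the maximum number
    of cells of size >= MIN_SIZE (near-equal sizes, all within 5..9)."""
    k = max(1, len(members) // MIN_SIZE)
    cells, start = [], 0
    for i in range(k):
        size = (len(members) - start) // (k - i)
        cells.append(members[start:start + size])
        start += size
    return cells

def form_cells(people, keys):
    """Recursively refine groups on successive matching variables, stopping
    once a group has at most MAX_SIZE members; undersized subgroups are
    re-merged with an adjacent subgroup, echoing the appendix rules."""
    def refine(group, depth):
        if len(group) <= MAX_SIZE:      # assumes top-level group >= MIN_SIZE
            return [group]
        if depth == len(keys):          # out of variables: just chunk
            return chunk(group)
        group = sorted(group, key=lambda p: p[keys[depth]])
        subgroups = [list(g) for _, g in
                     itertools.groupby(group, key=lambda p: p[keys[depth]])]
        merged = [subgroups[0]]
        for sub in subgroups[1:]:       # re-merge undersized neighbours
            if len(merged[-1]) < MIN_SIZE:
                merged[-1] += sub
            else:
                merged.append(sub)
        if len(merged[-1]) < MIN_SIZE and len(merged) > 1:
            merged[-2] += merged.pop()
        return [cell for m in merged for cell in refine(m, depth + 1)]
    return refine(list(people), 0)

# Synthetic population: the real grouping variables include gender, race,
# death year, year of birth, etc.; here we use three stand-in attributes.
people = [{"gender": i % 2, "race": (i // 2) % 2, "yob": 1946 + i % 6, "id": i}
          for i in range(1000)]
cells = form_cells(people, ["gender", "race", "yob"])
sizes = [len(c) for c in cells]
print(min(sizes), max(sizes), sum(sizes))  # 5 6 1000
```

Splitting a sorted group of n into n // 5 near-equal chunks always yields cells of five or six members once n reaches ten, which is why the observed cell-size distributions below are concentrated at sizes five and six.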
The distribution of the cell size for the verified NOG sample is:

In Cell of Size    Frequency    Percentage
5                  1,388,440         58.86
6                    657,696         27.88
7                    171,269          7.26
8                     70,048          2.97
9                     71,271          3.02
Total              2,358,724        100.00
The corresponding distribution for the verified BOG sample is:

In Cell of Size    Frequency    Percentage
5                    942,415         65.19
6                    328,338         22.71
7                     92,785          6.42
8                     41,120          2.84
9                     40,878          2.83
Total              1,445,536        100.00
The frequency distribution for the number of variables used in the BOG and NOG cell formation is as follows (thus 206,780 in the BOG matched on variables 1 through and including 8):

Number of Vars     # in BOG     # in NOG
1                         0            0
2                        61           89
3                     4,149        3,391
4                    30,716       38,087
5                    62,807       86,671
6                    65,140       80,909
7                   162,909      174,281
8                   206,780      397,667
9                   912,974    1,577,629
Total             1,445,536    2,358,724
B. Construction of the Analysis Sample
Appendix Figure 2.1A diagrams the construction of the sample used in our analysis. The "loss year" file and
the Vietnam file combine to form our baseline OEMA sample of 4.1 million veterans. After merging the
OEMA sample to VA disability data, the SSA Death Master File, and SSA earnings information, we restrict
our sample to 3.8 million veterans who had valid SSA earnings data. As seen in Table 2.2, the number of
BOG veterans in our database is largest in the 1966 through 1971 start years; veterans who began their
service after 1971 tended not to serve in the Vietnam theatre while those entering before 1966 are much less
likely to be included in our sample since many in the latter group left the Army prior to 1968, the first year of
our "loss year" file. We therefore restrict both the BOG and NOG samples to veterans who have a start year
between 1966 and 1971 inclusive, which reduces our sample to 1.9 million veterans.78 We further restrict
attention to individuals born between 1946 and 1951 inclusive, with 1.5 million veterans remaining. We next
exclude approximately 150,000 veterans who have missing loss years, loss years before 1968, or loss years
after 1985. We finally drop an additional 8,000 veterans who are in SSA cells where not all veterans have the
same birth year, at least one veteran in the cell has a start year before 1966 or after 1971, or at least one
veteran in the cell has a loss year before 1968 or after 1985. Our final sample includes 1.351 million veterans
of the U.S. Army who began their service between 1966 and 1971 and were born between 1946 and 1951.
For the analysis, we converted this final sample into a 1996-2007 panel of 15.2 million (year) x (individual)
observations, or 2.9 million (year) x (cell) observations, as depicted in Appendix Figure 2.1B.79
78 For the nearly 200,000 veterans with missing start years, we imputed their start year based on the median start year for other veterans in the sample with the same date of birth. This has little bearing on our final sample as nearly all veterans with missing start years have missing loss years and are therefore excluded from our analysis.
79 A veteran who dies in year t is dropped from the analysis sample in year t and in all subsequent years. Because year-of-death is one of the variables used when constructing cells, typically all members of a cell die in the same year (if any die). The number of surviving cells falls by approximately 4.8 percent from 1996 to 2007. We restrict our panel to 1998 through 2006 for all specifications where the outcome is based on VA disability data.
Figure 2.1. Number and Percentage of Veterans Enrolled in DC
[Figure: line chart over 1988-2014 plotting two series: Number Enrolled (left axis, millions) and Percent Enrolled (All Vets) (right axis).]
Notes: This figure reports the number of veterans enrolled in the Veterans' Disability Compensation (DC) program (left axis) and the percentage of living veterans enrolled in DC (right axis). Prior to 1999, all data come from Annual Statistical Abstracts of the United States. From 1999 through 2014, annual DC enrollment numbers are from Veterans' Benefits Administration Annual Reports and the number of veterans in the U.S. comes from VetPop2007, VetPop2010, and VetPop2014.
Figure 2.2. DC Enrollment as a Percentage of Vietnam Era Veterans
[Figure: line chart over 1988-2012 plotting the percentage of Vietnam era veterans receiving any DC benefits.]
Notes: This figure reports the annual percentage of Vietnam era veterans who received any DC benefits. See sources from Figure 2.1.
Figure 2.3. Percentage of BOG and NOG Veterans Enrolled in DC
[Figure: line chart over 1998-2006 plotting two series: BOG Veterans and NOG Veterans.]
Notes: This figure reports the annual percentage of BOG and NOG veterans who received any DC benefits between 1998 and 2006.
Figure 2.4. Prevalence of DC Benefits for Diabetes in BOG and NOG Samples
[Figure: line chart over 1999-2006 plotting four series: BOG, Diabetes Benefits; NOG, Diabetes Benefits; BOG, Diabetes Benefits During Year of DC Enrollment; NOG, Diabetes Benefits During Year of DC Enrollment.]
Notes: This figure reports the annual percentage of BOG and NOG veterans who were enrolled in DC and who received DC benefits for diabetes. The series "BOG, Diabetes Benefits During Year of DC Enrollment" refers to the percentage of veterans in the BOG sample who were enrolled in DC, received benefits for diabetes, and began receiving diabetes benefits the same year they enrolled in DC.
Figure 2.5. Percentage of BOG and NOG Veterans Enrolled in DC without Diabetes
[Figure: line chart over 1999-2006 plotting two series: BOG Veterans and NOG Veterans.]
Notes: This figure reports the annual percentage of BOG and NOG veterans who were enrolled in DC but did not receive compensation for diabetes.
Figure 2.6. BOG DC Receipt Relative to NOG by Year of Birth Cohort
[Figure: line chart over 1998-2006 plotting two series: YOB 1946-1948 and YOB 1949-1951.]
Notes: This figure reports the absolute percentage point difference between BOG DC receipt and NOG DC receipt for two veteran birth cohorts: 1946-1948 and 1949-1951. The vertical line indicates the year of the Agent Orange policy change. Each series is normalized to the BOG-NOG difference in DC receipt as of year 1998.
Figure 2.7. BOG LFP Relative to NOG by Year of Birth Cohort
[Figure: line chart over 1996-2007 plotting two series: YOB 1946-1948 and YOB 1949-1951.]
Notes: This figure reports the absolute percentage point difference between BOG labor force participation (LFP) and NOG LFP for two veteran birth cohorts: 1946-1948 and 1949-1951. The vertical line indicates the year of the Agent Orange policy change. Each series is normalized to the BOG-NOG difference in LFP as of year 1996. Labor force participation is defined as positive earnings.
Table 2.1. Evolution of Disability Compensation (DC) Benefits by Year of DC Enrollment
DC Enrollment Cohorts 1999-2006
[Table: rows are DC enrollment cohorts (1999 through 2006); columns (1)-(8) are outcome years 1999 through 2006, so entries form a lower triangle. Panel A reports mean combined disability ratings, panel B mean annual DC payments ($2013), panel C the percentage receiving an IU award or 100 CDR, and panel D the percentage receiving compensation for diabetes.]
Notes: This table reports the progression of Disability Compensation (DC) benefits for veterans in the sample described in the data appendix. Each row corresponds to a DC enrollment cohort. Each column corresponds to a particular year. In panel A, veterans with an IU award are given a CDR of 100. In panel B, mean annual DC payments are in $2013.
Table 2.2. Distribution of Year of Birth and Start Year in OEMA Sample
[Table: panel A reports the year-of-birth distribution (birth years <1935 through >1956) and panel B the start-year distribution (start years <1955 through >1976, plus missing), each giving the number of observations and the percent of sample separately for the BOG and NOG subsamples of the original OEMA sample (4.1 million veterans in total).]
Notes: Panel A reports the distribution of birth years in the original OEMA sample for the BOG and NOG subsamples. Panel B reports the distribution of start years. A start year is the year a veteran entered the Army.
Table 2.3. Characteristics of the BOG and NOG Samples

                                    A. SSA Verified Sample      B. Full Sample
                                        BOG        NOG          BOG        NOG
                                        (1)        (2)          (1)        (2)
% Verified in SSA Data                100.0      100.0         95.7       89.0
% with Start Yrs 1966-71              100.0      100.0        100.0      100.0
% with YOB 1946-1951                  100.0      100.0        100.0      100.0
% Deceased by 1997                      4.8        2.9          6.2        6.1
% Deceased by 2006                      9.9        7.9         11.5       11.0
% Nonwhite                             11.3       11.8         11.8       12.7
% Missing Education                     1.1       15.3          1.2       14.3
% HS Dropout                           31.4       23.3         31.4       24.3
% HS Grad                              48.1       37.1         48.3       38.0
% Some College                         14.5       14.4         14.3       14.2
% College Grad                          4.8        9.2          4.7        8.6
% More than College                     0.2        0.8          0.2        0.7
% Missing AFQT Score                   26.4       19.1         25.8       17.7
Average AFQT Score                     52.1       53.4         51.7       52.5
Average Year of Birth                1948.4     1948.6       1948.4     1948.7
Average Start Year                   1968.4     1969.0       1968.4     1969.0
Average Loss Year                    1971.1     1971.4       1971.1     1971.4
% on DC in 1998                        13.5        6.3         13.5        6.2
% on DC in 2000                        14.3        6.5         14.2        6.4
% on DC in 2006                        23.0        7.6         22.9        7.5
Mean Annual DC Payment in 1998       10,529      9,993       10,600     10,137
Mean Annual DC Payment in 2000       12,078     10,750       12,140     10,888
Mean Annual DC Payment in 2006       15,935     12,543       15,982     12,658
Mean Annual DC Pay in 1998 if >0     10,530      9,993       10,600     10,137
Mean Annual DC Pay in 2000 if >0     12,079     10,751       12,140     10,889
Mean Annual DC Pay in 2006 if >0     15,935     12,543       15,982     12,658
% Positive Earnings in 1998            84.3       85.1           --         --
% Positive Earnings in 2000            82.8       84.1           --         --
% Positive Earnings in 2006            71.3       74.4           --         --
Ln(Cell Mean Earnings) in 1998        10.67      10.70           --         --
Ln(Cell Mean Earnings) in 2000        10.66      10.69           --         --
Ln(Cell Mean Earnings) in 2006        10.37      10.44           --         --
% on SSDI in 1998                       5.6        4.6           --         --
% on SSDI in 2000                       6.8        5.5           --         --
% on SSDI in 2006                      12.6        9.7           --         --
% on SSI in 1998                        0.7        1.0           --         --
% on SSI in 2000                        0.7        1.0           --         --
% on SSI in 2006                        0.7        1.3           --         --
Observations                        765,852    585,017      806,698    659,283

Notes: Panel A reports summary statistics for veterans in the BOG and NOG samples described in Section III and the data appendix. Panel B includes veterans who did not verify with the Social Security Administration, but makes all other sample restrictions described in the data appendix. All earnings and DC payment values are in $2013. Disability, employment, and earnings information only reflect living veterans.
Table 2.4. DC Receipt and Diabetes Compensation among BOG and NOG Veterans, 1998-2006

          Full Sample           Whites               Nonwhites
Year      BOG      NOG          BOG      NOG         BOG      NOG
          (1)      (2)          (3)      (4)         (5)      (6)

A. Percentage Enrolled in DC
1998      13.5     6.3          13.0     6.0         17.1     9.0
1999      13.9     6.4          13.4     6.1         17.7     9.2
2000      14.3     6.5          13.8     6.2         18.2     9.3
2001      15.0     6.6          14.4     6.3         19.3     9.4
2002      16.4     6.8          15.8     6.4         21.4     9.7
2003      18.3     7.0          17.6     6.6         24.0     10.0
2004      19.9     7.2          19.1     6.8         26.0     10.3
2005      21.4     7.4          20.6     7.0         28.2     10.6
2006      23.0     7.6          22.1     7.2         30.2     10.9

B. Mean Annual DC Payment, Conditional on DC Enrollment
1998      10,529   9,993        10,476   9,989       10,859   10,011
1999      11,284   10,356       11,206   10,324      11,765   10,515
2000      12,078   10,750       11,987   10,690      12,635   11,055
2001      12,763   11,089       12,654   11,022      13,422   11,427
2002      13,109   11,225       12,976   11,135      13,908   11,675
2003      13,992   11,665       13,835   11,537      14,923   12,310
2004      14,773   11,991       14,577   11,850      15,933   12,702
2005      15,420   12,237       15,207   12,082      16,688   13,018
2006      15,935   12,543       15,696   12,381      17,358   13,363

C. Percentage Receiving Compensation for Diabetes, Conditional on DC Enrollment
1999      0.7      1.4          0.6      1.2         1.3      2.2
2000      0.7      1.4          0.6      1.2         1.3      2.2
2001      2.4      1.6          2.2      1.4         3.9      2.5
2002      13.3     3.3          12.4     3.0         18.3     4.8
2003      19.6     4.4          18.4     4.0         26.2     6.3
2004      22.9     5.2          21.8     4.8         30.0     7.2
2005      25.6     5.8          24.4     5.4         32.8     7.8
2006      28.1     6.4          26.9     6.0         35.2     8.4

Notes: Panel A of this table reports the percentage of veterans in the BOG and NOG samples who were
enrolled in the DC program between 1998 and 2006. Panel B reports the mean annual disability payments (in
$2013) for veterans who were enrolled in the DC program between 1998 and 2006. Panel C reports the
percentage of DC-enrolled veterans who received compensation for diabetes. 1998 is not included in panel C
because our data include specific conditions for 1999 through 2006 only.
107
Table 2.5. DC Receipt in the BOG versus NOG samples from 1998-2006
Dependent Variable: Percentage of Cell Enrolled in DC

                              A. All               B. YOB: 1946-1948    C. YOB: 1949-1951
                              (1)       (2)        (1)       (2)        (1)       (2)
BOG                           6.97***   6.91***    7.12***   7.06***    6.83***   6.77***
                              (0.06)    (0.05)     (0.09)    (0.07)     (0.09)    (0.07)
BOG * (YR-1998)                         0.40***              0.40***              0.40***
                                        (0.02)               (0.03)               (0.03)
BOG * (YR-2001) * 1(YR≥2002)            0.99***              1.04***              0.93***
                                        (0.03)               (0.05)               (0.05)
BOG * (YR99)                  0.33***              0.34***              0.32**
                              (0.09)               (0.13)               (0.12)
BOG * (YR00)                  0.59***              0.58***              0.61***
                              (0.09)               (0.13)               (0.12)
BOG * (YR01)                  1.18***              1.18***              1.18***
                              (0.09)               (0.13)               (0.12)
BOG * (YR02)                  2.46***              2.55***              2.37***
                              (0.09)               (0.13)               (0.12)
BOG * (YR03)                  4.08***              4.22***              3.92***
                              (0.09)               (0.13)               (0.12)
BOG * (YR04)                  5.39***              5.58***              5.18***
                              (0.09)               (0.13)               (0.12)
BOG * (YR05)                  6.71***              6.93***              6.45***
                              (0.09)               (0.13)               (0.12)
BOG * (YR06)                  7.98***              8.28***              7.65***
                              (0.09)               (0.13)               (0.13)
Outcome Mean (1998)           10.36                10.75                9.94
# OBS (Cell)x(Year)           2,142,029            1,104,897            1,037,132

Notes: This table reports estimates of equations (1) and (2) where the dependent variable is the percentage of each cell
enrolled in the Disability Compensation program each year. Panel A includes all veterans in our sample and other
panels restrict the sample to veterans in the specified group. Each cell has one observation for each year between 1998
and 2006. All regressions include year fixed effects and fixed effects for each combination of (AFQT-quintile)x(year), (year of birth)x(year), and (race)x(year), where race is defined as white or nonwhite and veterans with
a missing AFQT score are grouped into a sixth AFQT category. All regressions weight each observation according to
the number of veterans in the cell. Standard errors are reported in parentheses. ***, **, and * denote significance at the
1%, 5%, and 10% level, respectively.
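The trend-break specification in equation (2), as described in the notes above, can be sketched on simulated data. The variable names, magnitudes, and use of Python/statsmodels here are illustrative assumptions only, not the dissertation's actual data or code; the trend break (YR-2001)*1(YR≥2002) is implemented as max(year - 2001, 0).

```python
# Illustrative sketch of a weighted cell-level regression with a BOG main
# effect, a BOG linear trend, a BOG post-2001 trend break, and year fixed
# effects, weighting each cell-year by cell size. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_cells = 200
cells = pd.DataFrame({"cell": np.arange(n_cells),
                      "bog": rng.integers(0, 2, n_cells),
                      "size": rng.integers(20, 500, n_cells)})
df = cells.merge(pd.DataFrame({"year": np.arange(1998, 2007)}), how="cross")
# Simulated outcome: a BOG level gap, a BOG trend, and a post-2001 break.
df["pct_dc"] = (10 + 7 * df.bog
                + 0.4 * df.bog * (df.year - 1998)
                + 1.0 * df.bog * np.maximum(df.year - 2001, 0)
                + rng.normal(0, 1, len(df)))
df["trend"] = df.bog * (df.year - 1998)
df["kink"] = df.bog * np.maximum(df.year - 2001, 0)
fit = smf.wls("pct_dc ~ bog + trend + kink + C(year)",
              data=df, weights=df["size"]).fit()
print(fit.params[["bog", "trend", "kink"]].round(2))
```

The fitted coefficients recover the simulated level gap, trend, and trend break; in the dissertation the analogous coefficients are the BOG, BOG*(YR-1998), and BOG*(YR-2001)*1(YR≥2002) rows of the tables.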
108
Table 2.6. Disability Compensation Outcomes in the BOG versus NOG samples from 1998-2006

                              All        Nonwhites  Whites     AFQT<45    AFQT≥45
                              (1)        (2)        (3)        (4)        (5)

A. Dependent Variable: Percentage of Cell Enrolled in DC
BOG                           6.91***    8.33***    6.80***    7.72***    5.93***
                              (0.05)     (0.17)     (0.05)     (0.10)     (0.07)
BOG * (YR-1998)               0.40***    0.61***    0.37***    0.52***    0.31***
                              (0.02)     (0.08)     (0.03)     (0.05)     (0.03)
BOG * (YR-2001) * 1(YR≥2002)  0.99***    1.27***    0.95***    1.19***    0.81***
                              (0.03)     (0.11)     (0.04)     (0.06)     (0.05)
Outcome Mean (1998)           10.36      13.44      9.96       11.55      8.98
# OBS (Cell)x(Year)           2,142,029  232,540    1,909,489  660,866    988,801

B. Dependent Variable: Cell Mean Combined Disability Rating (CDR)
BOG                           2.88***    3.83***    2.81***    3.39***    2.48***
                              (0.03)     (0.10)     (0.03)     (0.06)     (0.04)
BOG * (YR-1998)               0.34***    0.50***    0.32***    0.47***    0.26***
                              (0.01)     (0.05)     (0.01)     (0.03)     (0.02)
BOG * (YR-2001) * 1(YR≥2002)  0.66***    0.96***    0.63***    0.83***    0.51***
                              (0.02)     (0.07)     (0.02)     (0.04)     (0.03)
Outcome Mean (1998)           3.96       5.25       3.79       4.75       3.19
# OBS (Cell)x(Year)           2,142,029  232,540    1,909,489  660,866    988,801

C. Dependent Variable: Cell Mean Annual DC Payment
BOG                           766***     1,072***   744***     930***     673***
                              (10)       (35)       (11)       (20)       (13)
BOG * (YR-1998)               113***     159***     107***     158***     85***
                              (5)        (17)       (5)        (10)       (6)
BOG * (YR-2001) * 1(YR≥2002)  192***     290***     179***     239***     145***
                              (7)        (24)       (7)        (14)       (9)
Outcome Mean (1998)           1,076      1,425      1,031      1,323      851
# OBS (Cell)x(Year)           2,142,029  232,540    1,909,489  660,866    988,801

Notes: This table reports estimates of equation (2) where the dependent variable is the outcome indicated
in the panel heading. Each cell has one observation for each year between 1998 and 2006. All regressions
include year fixed effects and fixed effects for each combination of (AFQT-quintile)x(year), (year of
birth)x(year), and (race)x(year), where race is defined as white or nonwhite. Column 1 includes all
veterans in our sample; all other columns restrict the sample to veterans in the specified group. Columns
1-3 group veterans with a missing AFQT score into a sixth AFQT category while columns 4-5 exclude
veterans with missing AFQT scores. In panel B, veterans with an IU rating are coded as having a
Combined Disability Rating (CDR) of 100. In panel C, mean annual DC payments are in $2013. All
regressions weight each observation according to the number of veterans in the cell. Standard errors are
reported in parentheses. ***, **, and * denote significance at the 1%, 5%, and 10% level, respectively.
109
Table 2.7. SSDI, SSI, and Total Disability Income in the BOG versus NOG samples

                              All        Nonwhites  Whites     AFQT<45    AFQT≥45
                              (1)        (2)        (3)        (4)        (5)

A. Dependent Variable: Percentage of Cell Receiving SSDI
BOG                           0.74***    1.14***    0.69***    1.12***    0.81***
                              (0.04)     (0.12)     (0.04)     (0.07)     (0.04)
BOG * (YR-1996)               0.13***    0.19***    0.13***    0.14***    0.11***
                              (0.01)     (0.04)     (0.01)     (0.02)     (0.01)
BOG * (YR-2001) * 1(YR≥2002)  0.12***    0.06       0.12***    0.12***    0.10***
                              (0.02)     (0.06)     (0.02)     (0.04)     (0.02)
Outcome Mean (1996)           4.51       6.69       4.23       6.77       3.08
# OBS (Cell)x(Year)           2,862,513  310,987    2,551,526  883,832    1,321,309

B. Dependent Variable: Percentage of Cell Receiving SSI
BOG                           -0.238***  -0.353***  -0.222***  -0.328***  -0.063***
                              (0.012)    (0.055)    (0.012)    (0.028)    (0.013)
BOG * (YR-1996)               -0.025***  -0.024     -0.025***  -0.038***  -0.015***
                              (0.004)    (0.016)    (0.003)    (0.008)    (0.004)
BOG * (YR-2001) * 1(YR≥2002)  -0.013**   -0.052**   -0.008     -0.036***  -0.004
                              (0.006)    (0.026)    (0.006)    (0.013)    (0.006)
Outcome Mean (1996)           0.90       2.26       0.73       1.54       0.50
# OBS (Cell)x(Year)           2,862,513  310,987    2,551,526  883,832    1,321,309

C. Dependent Variable: Annual Disability Income (DC Ben + SSDI Ben + SSI Ben), 1998-2006 only
BOG                           861***     1,061***   835***     1,197***   772***
                              (13)       (44)       (14)       (26)       (17)
BOG * (YR-1998)               128***     178***     121***     172***     98***
                              (6)        (21)       (6)        (12)       (8)
BOG * (YR-2001) * 1(YR≥2002)  226***     325***     214***     277***     173***
                              (9)        (30)       (9)        (18)       (11)
Outcome Mean (1998)           1,608      2,181      1,535      2,093      1,225
Total Income Mean (1998)      53,639     38,276     55,601     39,549     59,663
# OBS (Cell)x(Year)           2,142,029  232,540    1,909,489  660,866    988,801

Notes: This table reports estimates of equation (2) where the dependent variable is the outcome indicated in each
panel heading. In panels A and B, each cell has one observation for each year between 1996 and 2007. Panel C only
includes observations from 1998 through 2006 since DC outcomes are not available in other years. Annual
disability benefits in panel C are in $2013. All regressions include year fixed effects and fixed effects for each
combination of (AFQT-quintile)x(year), (year of birth)x(year), and (race)x(year), where race is defined as white
or nonwhite. Column 1 includes all veterans in our sample and columns 2-5 restrict the sample to veterans in the
specified group. Columns 1-3 group veterans with a missing AFQT score into a sixth AFQT category while
columns 4-5 exclude veterans with missing AFQT scores. All regressions weight each observation according to the
number of veterans in the cell. Standard errors are reported in parentheses. ***, **, and * denote significance at the
1%, 5%, and 10% level, respectively.
110
Table 2.8. Labor Force Participation in the BOG versus NOG samples from 1996-2007
Dependent Variable: Percentage of Cell with Positive Annual Earnings

                              A. All               B. YOB: 1946-1948    C. YOB: 1949-1951
                              (1)       (2)        (1)       (2)        (1)       (2)
BOG                           -0.32***  -0.33***   0.00      0.02       -0.82***  -0.87***
                              (0.08)    (0.05)     (0.10)    (0.07)     (0.11)    (0.08)
BOG * (YR-1996)                         -0.15***             -0.17***             -0.14***
                                        (0.02)               (0.02)               (0.02)
BOG * (YR-2001) * 1(YR≥2002)            -0.18***             -0.18***             -0.19***
                                        (0.03)               (0.04)               (0.04)
BOG * (YR97)                  -0.16                -0.15                -0.16
                              (0.11)               (0.15)               (0.16)
BOG * (YR98)                  -0.31***             -0.29**              -0.33**
                              (0.11)               (0.15)               (0.16)
BOG * (YR99)                  -0.45***             -0.38**              -0.52***
                              (0.11)               (0.15)               (0.16)
BOG * (YR00)                  -0.77***             -0.79***             -0.75***
                              (0.11)               (0.15)               (0.16)
BOG * (YR01)                  -0.81***             -0.87***             -0.76***
                              (0.11)               (0.15)               (0.16)
BOG * (YR02)                  -1.01***             -1.06***             -0.96***
                              (0.11)               (0.15)               (0.16)
BOG * (YR03)                  -1.34***             -1.45***             -1.23***
                              (0.11)               (0.15)               (0.16)
BOG * (YR04)                  -1.74***             -1.92***             -1.55***
                              (0.11)               (0.15)               (0.16)
BOG * (YR05)                  -2.15***             -2.27***             -2.03***
                              (0.11)               (0.15)               (0.16)
BOG * (YR06)                  -2.43***             -2.56***             -2.29***
                              (0.11)               (0.15)               (0.16)
BOG * (YR07)                  -2.83***             -2.93***             -2.73***
                              (0.11)               (0.15)               (0.16)
Outcome Mean (1996)           85.70                86.56                84.80
# OBS (Cell)x(Year)           2,862,513            1,476,423            1,386,090

Notes: This table reports estimates of equations (1) and (2) where the dependent variable is the percentage of each cell
with positive annual earnings. Panel A includes all veterans in our sample and panels B and C restrict the sample to
veterans in the specified group. All regressions include year fixed effects and fixed effects for each combination of
(AFQT-quintile)x(year), (year of birth)x(year), and (race)x(year), where race is defined as white or nonwhite and
veterans with a missing AFQT score are grouped into a sixth AFQT category. All regressions weight each observation
according to the number of veterans in the cell. Standard errors are reported in parentheses. ***, **, and * denote
significance at the 1%, 5%, and 10% level, respectively.
111
Table 2.9. LFP and Earnings in the BOG versus NOG samples from 1996-2007

                              All        Nonwhites  Whites     AFQT<45    AFQT≥45
                              (1)        (2)        (3)        (4)        (5)

A. Dependent Variable: Percentage of Cell with Earnings>0
BOG                           -0.33***   -2.50***   -0.05      -0.88***   -0.92***
                              (0.05)     (0.18)     (0.06)     (0.10)     (0.07)
BOG * (YR-1996)               -0.15***   -0.15*     -0.16***   -0.19***   -0.11**
                              (0.02)     (0.05)     (0.02)     (0.03)     (0.02)
BOG * (YR-2001) * 1(YR≥2002)  -0.18***   -0.22**    -0.18***   -0.20***   -0.15***
                              (0.03)     (0.09)     (0.03)     (0.05)     (0.03)
Outcome Mean (1996)           85.70      78.18      86.67      81.70      89.08
# OBS (Cell)x(Year)           2,862,513  310,987    2,551,526  883,832    1,321,309

B. Dependent Variable: 100 x Ln(Cell Mean Earnings)
BOG                           -2.59***   -4.68***   -2.30***   0.02       -4.10***
                              (0.18)     (0.62)     (0.18)     (0.32)     (0.23)
BOG * (YR-1996)               -0.23**    -0.33*     -0.21***   -0.27***   -0.13
                              (0.05)     (0.18)     (0.05)     (0.10)     (0.07)
BOG * (YR-2001) * 1(YR≥2002)  -0.45***   -0.77***   -0.41***   -0.40**    -0.27**
                              (0.08)     (0.30)     (0.09)     (0.16)     (0.11)
Outcome Mean (1996)           1,065.83   1,032.25   1,070.12   1,038.31   1,080.45
Earnings Mean (1996)          49,848     35,449     51,691     36,656     55,709
# OBS (Cell)x(Year)           2,855,305  308,923    2,546,382  879,962    1,320,155

C. Dependent Variable: 100 x Ln(Percentage of Cell with Earnings>0)
BOG                           -0.29***   -3.57***   0.13       -1.20***   -1.14***
                              (0.08)     (0.31)     (0.09)     (0.17)     (0.10)
BOG * (YR-1996)               -0.21**    -0.31***   -0.20***   -0.24***   -0.16***
                              (0.02)     (0.09)     (0.03)     (0.05)     (0.03)
BOG * (YR-2001) * 1(YR≥2002)  -0.36***   -0.52***   -0.35***   -0.44***   -0.29**
                              (0.04)     (0.15)     (0.04)     (0.08)     (0.05)
Outcome Mean (1996)           -17.85     -28.26     -16.51     -23.28     -13.25
# OBS (Cell)x(Year)           2,855,305  308,923    2,546,382  879,962    1,320,155

Notes: This table reports estimates of equation (2) where the dependent variable is the outcome indicated in
each panel heading. Each cell has one observation for each year between 1996 and 2007. Panels B and C
exclude cells with 0 mean annual earnings. All earnings outcomes are in $2013. All regressions include year
fixed effects and fixed effects for each combination of (AFQT-quintile)x(year), (year of birth)x(year), and
(race)x(year), where race is defined as white or nonwhite. Column 1 includes all veterans in our sample and
columns 2-5 restrict the sample to veterans in the specified group. Columns 1-3 group veterans with a missing
AFQT score into a sixth AFQT category while columns 4-5 exclude veterans with missing AFQT scores. All
regressions weight each observation according to the number of veterans in the cell. Standard errors are
reported in parentheses. ***, **, and * denote significance at the 1%, 5%, and 10% level, respectively.
112
Table 2.10. 2SLS Estimates of DC Receipt and Annual Disability Benefits on Labor Force Participation
Dependent Variable: Percentage of Cell with Positive Earnings, 1998-2006

                            All        YOB:       YOB:       Nonwhites  Whites     AFQT<45   AFQT≥45
                                       1946-1948  1949-1951
                            (1)        (2)        (3)        (4)        (5)        (6)       (7)

A. Endogenous Variable: Percentage of Cell Enrolled in DC
DC Enrollment               -0.17***   -0.25**    -0.21***   -0.16***   -0.18***   -0.20***  -0.17**
                            (0.05)     (0.11)     (0.06)     (0.07)     (0.04)     (0.07)    (0.07)
Outcome Mean (1998)         84.7       85.5       83.9       77.1       85.7       80.5      88.3
# OBS (Cell)x(Year)         2,142,029  1,104,897  1,037,132  232,540    1,909,489  660,866   988,801

B. Endogenous Variable: (DC Benefits + SSDI Benefits + SSI Benefits)/1000
(Disability Benefits)/1000  -0.77***   -0.97**    -0.94***   -0.70***   -0.80***   -0.84***  -0.82***
                            (0.19)     (0.42)     (0.28)     (0.23)     (0.18)     (0.28)    (0.31)
Outcome Mean (1998)         84.7       85.5       83.9       77.1       85.7       80.5      88.3
# OBS (Cell)x(Year)         2,142,029  1,104,897  1,037,132  232,540    1,909,489  660,866   988,801

Notes: This table reports 2SLS estimates of the effects of DC receipt on labor force participation (panel A) and annual disability benefits on
labor force participation (panel B). The instrument for all estimates is (BOG)x(year - 2001)x1(year ≥ 2002). All regressions include a main
effect for (BOG), a (BOG)x(year - 1998) time trend, year fixed effects, and fixed effects for each combination of (AFQT-quintile)x(year),
(year of birth)x(year), and (race)x(year). Annual disability benefits are in $2013. Each cell has one observation for each year between 1998
and 2006 and all regressions weight each observation according to the number of veterans in the cell. Standard errors are reported in
parentheses. ***, **, and * denote significance at the 1%, 5%, and 10% level, respectively.
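The 2SLS design described in the notes, with the post-2001 trend break as the excluded instrument, can be sketched on simulated data. This is an illustrative two-step implementation with made-up names and magnitudes, not the dissertation's code; note that standard errors from a manual second stage like this are not corrected for the generated regressor.

```python
# Illustrative 2SLS sketch: the trend break (BOG)x(year-2001)x1(year>=2002)
# instruments for DC enrollment in a cell-level LFP regression that also
# controls for a BOG main effect, a BOG trend, and year fixed effects.
# All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
cells = pd.DataFrame({"bog": rng.integers(0, 2, 300)})
df = cells.merge(pd.DataFrame({"year": np.arange(1998, 2007)}), how="cross")
df["kink"] = df.bog * np.maximum(df.year - 2001, 0)   # excluded instrument
df["trend"] = df.bog * (df.year - 1998)
# DC enrollment responds to the instrument; LFP falls with DC enrollment.
df["pct_dc"] = (10 + 7 * df.bog + 0.4 * df.trend + 1.0 * df.kink
                + rng.normal(0, 1, len(df)))
df["lfp"] = 90 - 0.2 * df.pct_dc - 1.0 * df.bog + rng.normal(0, 0.5, len(df))
first = smf.ols("pct_dc ~ kink + bog + trend + C(year)", data=df).fit()
df["dc_hat"] = first.fittedvalues
second = smf.ols("lfp ~ dc_hat + bog + trend + C(year)", data=df).fit()
print(round(second.params["dc_hat"], 2))
```

The second-stage coefficient on predicted DC enrollment recovers the simulated effect of DC receipt on participation, analogous to the "DC Enrollment" row in panel A.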
113
Table 2.11. Earnings plus Disability Income in the BOG versus NOG samples
Dependent Variable: 100 x Ln(Earnings + DC Ben + SSDI Ben + SSI Ben), 1998-2006 only

                              All        Nonwhites  Whites     AFQT<45    AFQT≥45
                              (1)        (2)        (3)        (4)        (5)

A. Reduced Form Effect of Agent Orange Policy Change
BOG                           -0.75***   -0.80      -0.70***   3.66***    2.53***
                              (0.17)     (0.53)     (0.18)     (0.28)     (0.23)
BOG * (YR-1998)               0.20**     0.27       0.19**     0.33**     0.23**
                              (0.08)     (0.25)     (0.08)     (0.13)     (0.11)
BOG * (YR-2001) * 1(YR≥2002)  0.51***    1.04***    0.45***    0.86***    0.34**
                              (0.11)     (0.36)     (0.12)     (0.19)     (0.16)
Outcome Mean (1998)           1,074.05   1,043.42   1,077.96   1,048.71   1,087.39
Total Income Mean (1998)      53,639     38,276     55,601     39,549     59,663
# OBS (Cell)x(Year)           2,137,665  231,235    1,906,430  658,480    988,152

B. 2SLS Estimates of DC Receipt on 100 x Ln(Earnings + DC Ben + SSDI Ben + SSI Ben)
Endogenous Variable: Percentage of Cell Enrolled in DC
DC Enrollment                 0.52***    0.83***    0.47***    0.73***    0.42**
                              (0.12)     (0.29)     (0.13)     (0.16)     (0.20)
Outcome Mean (1998)           1,074.05   1,043.42   1,077.96   1,048.71   1,087.39
Total Income Mean (1998)      53,639     38,276     55,601     39,549     59,663
# OBS (Cell)x(Year)           2,137,665  231,235    1,906,430  658,480    988,152

Notes: Panel A of this table reports estimates of equation (2) where the dependent variable is 100 times the log of
each cell's mean annual earnings and total disability income. Panel B reports 2SLS estimates of the effects of DC
receipt on the same outcome. The instrument in panel B is (BOG) x (year - 2001) x 1(year > 2002) and the
endogenous variable is the percentage of the cell enrolled in DC. Each cell has one observation for each year
between 1998 and 2006, which are the years where DC information is available. Cells with 0 mean annual earnings
are excluded. Earnings and disability income values are in $2013. All regressions include year fixed effects and
fixed effects for each combination of (AFQT-quintile)x(year), (year of birth)x(year), and (race)x(year), where
race is defined as white or nonwhite. Column 1 includes all veterans in our sample and columns 2-5 restrict the
sample to veterans in the specified group. Columns 1-3 group veterans with a missing AFQT score into a sixth
AFQT category while columns 4-5 exclude veterans with missing AFQT scores. All regressions weight each
observation according to the number of veterans in the cell. Standard errors are reported in parentheses. ***, **,
and * denote significance at the 1%, 5%, and 10% level, respectively.
114
Appendix Figure 2.1A. Data Diagram

Army Loss Files 1968-1985 + Department of Defense Vietnam File
    -> Original OEMA Sample: N = 4.086 million
SSA Earnings Data + VA DC Data + SSA Mortality Data
    -> SSA Verified Sample: N = 3.804 million, Cells = 694,589
YOB: 1946-1951, Start Year: 1966-1971
    -> N = 1.508 million, Cells = 283,278
Exclude Veterans with Missing Loss Years, Loss Years before 1968, or Loss Years after 1985
    -> N = 1.359 million, Cells = 255,222
Cell Restrictions*
    -> N = 1.351 million, Cells = 253,122

*Cell restrictions limit the sample to cells where all veterans have the same birth year
(466 observations dropped), all veterans have start years between 1966 and 1971
(2,677 observations dropped), all veterans have loss years between 1968 and 1985
(4,681 observations dropped), and no veterans have a missing loss year (80
observations dropped).
Appendix Figure 2.1 B. Annual Observations Breakdown
**A deceased veteran is removed from the panel in the year of his death and all years
following. If only a fraction of the cell is deceased in a particular year, all cell level
averages are recomputed to reflect averages among living veterans only.
Appendix Table 2.1. Percentage of DC Recipients and Average Annual Benefit by CDR and Service Era in 2006

CDR       Recipients   Payments ($2013    Mean Annual
                       Millions)          Benefit
0%            14,291        $15             $1,030
10%          775,346     $1,212             $1,563
20%          417,721     $1,275             $3,052
30%          334,931     $1,742             $5,202
40%          259,834     $1,956             $7,530
50%          161,568     $1,709            $10,577
60%          184,264     $3,236            $17,564
70%          165,257     $4,263            $25,799
80%          113,404     $3,252            $28,678
90%           60,546     $1,896            $31,310
100%         238,662     $9,051            $37,926
Total      2,725,824    $29,608            $10,862

Share with Each Rating by Service Era (WW II, Korea, Vietnam, Gulf, Peacetime):
[columns not reproduced; the five era recipient totals of 159,804; 328,044; 595,565;
694,813; and 947,598 sum to the 2,725,824 total above]

Source: U.S. Department of Veterans Affairs, 2006 Annual Benefits Report. Payments are in millions of $2013.
117
Appendix Table 2.2. Distribution of Loss Year in OEMA Sample

             A. NOG Veterans              B. BOG Veterans
Loss Year    Observations  Percent of     Observations  Percent of
                           NOG Sample                   BOG Sample
1960                    0        0.00                1        0.00
1961                    0        0.00                2        0.00
1962                    0        0.00                2        0.00
1963                    0        0.00                2        0.00
1964                    0        0.00                8        0.00
1965                    0        0.00               25        0.00
1966                    0        0.00              140        0.01
1967                    0        0.00              250        0.02
1968               17,787        0.69            7,128        0.48
1969               83,222        3.21          165,959       11.14
1970              265,968       10.25          258,909       17.37
1971              241,772        9.32          273,725       18.37
1972              122,400        4.72          219,993       14.76
1973              164,026        6.32           77,013        5.17
1974              175,414        6.76           51,296        3.44
1975              162,983        6.28           32,696        2.19
1976              160,387        6.18           26,037        1.75
1977              146,516        5.65           23,410        1.57
1978              129,296        4.98           21,334        1.43
1979              141,220        5.44           19,612        1.32
1980              137,754        5.31           15,946        1.07
1981              119,916        4.62           12,921        0.87
1982              128,458        4.95           12,153        0.82
1983              141,806        5.46           11,393        0.76
1984              131,090        5.05           11,597        0.78
1985              125,179        4.82           10,857        0.73
1986                    0        0.00            6,052        0.41
1987                    0        0.00            5,089        0.34
1988                    0        0.00            5,684        0.38
1989                    0        0.00            5,212        0.35
1990                    0        0.00            4,392        0.29
1991                    0        0.00            3,501        0.23
1992                    0        0.00            3,671        0.25
1993                    0        0.00            2,563        0.17
Missing                 0        0.00          201,786       13.54
Total           2,595,194      100.00        1,490,359      100.00

Notes: This table reports the distribution of loss years within the NOG and BOG groups of the
original OEMA sample. A loss year is the year a veteran exited the Army.
118
Appendix Table 2.3. SSA Verification Rate by Start Year

             A. NOG Veterans            B. BOG Veterans
Start Year   Obs         SSA Match      Obs         SSA Match
                         Rate                       Rate
≤1944            6,970       86.5           8,209       95.8
1945             1,321       85.3           2,808       96.4
1946             2,057       88.2           3,943       97.2
1947             2,027       87.3           3,754       96.4
1948             3,752       87.8           5,988       96.3
1949             3,846       88.6           4,794       95.8
1950             5,400       82.2           8,286       95.0
1951             4,275       82.4          13,569       95.3
1952             1,925       82.8          13,125       98.0
1953             1,983       81.5          15,593       98.6
1954             1,522       82.4          13,370       98.9
1955             1,807       84.3          12,950       99.0
1956             1,776       84.8          12,132       98.8
1957             1,779       88.1          12,107       99.0
1958             2,485       88.4          15,181       98.9
1959             2,254       91.0          14,769       98.9
1960             2,339       93.5          14,192       99.0
1961             4,018       93.6          16,002       98.8
1962             4,359       91.4          15,689       98.5
1963             4,778       91.4          15,895       98.4
1964             6,362       91.5          18,357       98.5
1965             9,675       91.6          26,859       98.2
1966            38,014       94.4          88,681       98.8
1967            96,939       93.3         174,641       97.2
1968           196,281       92.3         263,378       95.5
1969           199,973       87.0         263,700       93.1
1970           163,525       87.1         149,738       98.8
1971           191,736       86.6          60,564       99.4
1972           215,043       86.2          15,087       99.5
1973           144,792       85.6           3,053       98.8
1974           187,413       85.8           1,753       99.1
1975           162,643       88.2           1,802       98.6
>1976          920,849       96.3           3,624       99.1
Missing          1,276       81.7         196,766       99.4
Total        2,595,194       90.9       1,490,359       97.0

Notes: This table reports SSA verification rates for veterans by start year and BOG or
NOG status.
119
Appendix Table 2.4. Comparison of Census and Army/SSA Demographics and Earnings Data for Year 1999

                        A. Army/SSA Verified Sample (1999)   B. Vietnam Veterans born 1946-1951: 2000 Census
                        All        Whites     Nonwhites      All        Whites     Nonwhites
                        (1)        (2)        (3)            (1)        (2)        (3)
Age in 1999             50.5       50.5       50.3           50.7       50.7       50.5
                        (1.5)      (1.5)      (1.5)          (1.6)      (1.6)      (1.6)
Race
  White                 88.7       100.0      0.0            86.7       100.0      0.0
  Nonwhite              11.3       0.0        100.0          13.3       0.0        100.0
Education
  HS Dropout            27.3       26.9       30.7           5.2        4.7        8.3
  HS Grad               43.6       42.9       49.2           39.3       39.1       41.0
  Some College          14.7       15.3       10.4           29.7       29.1       33.4
  College Grad          6.9        7.5        2.4            15.9       16.6       11.3
  More than College     0.4        0.5        0.1            9.9        10.5       6.0
Positive Earnings       84.1       85.0       76.5           82.2       83.1       76.3
Annual Earnings ($2013)
  Mean                  52,512     54,570     36,376         58,488     60,909     42,703
                        (76,853)   (80,899)   (24,652)       (65,224)   (67,114)   (48,337)
  Mean (if>0)           61,053     62,898     46,566         71,158     73,303     55,935
                        (80,366)   (84,600)   (27,209)       (65,378)   (67,174)   (48,170)
  Median                45,530     47,123     33,879         48,941     49,220     36,356
Observations            1,286,921  1,141,340  145,581        203,781    178,601    25,180

Notes: This table reports summary statistics for veterans in the SSA verified sample and veterans in the 2000 census. Panel A maintains the same sample
restrictions described in the data appendix and further restricts to veterans who were still alive as of 1999. All statistics in panel A are from 1999. Panel B
reports summary statistics for veterans in the 5 percent 2000 Census IPUMS extract who were born between 1946 and 1951. All earnings are in $2013.
120
Appendix Table 2.5. DC Progression and LFP for Pre-Existing Beneficiaries and New Beneficiaries

                              A. CDR (Individual Level)                 B. LFP (Cell Level)
                              All         Enrolled in  Not Enrolled     All        High DC_98  Low DC_98
                              Veterans    DC by 1998   in DC by 1998    Veterans   Cell        Cell
                              (1)         (2)          (3)              (1)        (2)         (3)
BOG                           2.87***     2.55***      -0.01            -0.66***   -0.71***    0.21***
                              (0.03)      (0.18)       (0.02)           (0.06)     (0.27)      (0.07)
BOG * (YR-1998)               0.34***     0.84***      0.22***          -0.15***   -0.35***    -0.11***
                              (0.01)      (0.08)       (0.01)           (0.03)     (0.13)      (0.03)
BOG * (YR-2001) * 1(YR≥2002)  0.66***     0.26**       0.73***          -0.18***   0.06        -0.21***
                              (0.02)      (0.12)       (0.01)           (0.04)     (0.17)      (0.04)
Outcome Mean (1998)           3.96        38.21        0.00             84.68      77.23       85.58
# OBS                         11,376,787  1,165,699    10,211,088       2,372,076  244,709     2,127,367
Years                         1998-2006   1998-2006    1998-2006        1998-2007  1998-2007   1998-2007

Notes: This table reports estimates of equation (2) where the dependent variable is a veteran's Combined Disability
Rating (panel A), or the percentage of the veteran's cell with positive annual earnings (panel B). In panel A, column 1
includes all veterans in the sample, column 2 only includes veterans who initially enrolled in DC in 1998 or earlier,
and column 3 only includes veterans who were not enrolled in DC as of 1998. Veterans with an IU award are given a
CDR of 100. In panel B, column 1 includes all cells, column 2 only includes cells where at least one third of veterans
in the cell were enrolled in DC as of 1998, and column 3 includes cells where fewer than one third of veterans in the
cell were enrolled in DC as of 1998. All regressions in panel A are at the individual level while regressions in panel
B are at the cell level and are weighted by the number of veterans in a cell in a particular year. All regressions
include year fixed effects and fixed effects for each combination of (AFQT-quintile)x(year), (year of
birth)x(year), and (race)x(year), where race is defined as white or nonwhite and veterans with a missing AFQT score
are grouped into a sixth AFQT category. Standard errors are reported in parentheses. ***, **, and * denote
significance at the 1%, 5%, and 10% level, respectively.
121
Chapter 3. The Impact of Voluntary Military Service on Mortality:
Evidence from Active Duty U.S. Army Applicants
Kyle Greenberg
July 2015
Abstract
Since the advent of the all-volunteer force in 1973, military operations in Afghanistan and Iraq have
produced more U.S. casualties than all other conflicts combined. This chapter exploits enlistment test score
cutoffs in a fuzzy regression-discontinuity (RD) design to evaluate causal effects of military service for
individuals who applied to the active duty U.S. Army from 1996 through 2007. Fuzzy RD results suggest
that military service does not increase mortality, but I cannot rule out positive effects. Ordinary Least
Squares (OLS) estimates that compare applicants who did not enlist to soldiers with similar characteristics
provide additional evidence that military service does not increase mortality. OLS estimates also indicate that
a soldier's first year in the Army is associated with substantial reductions in mortality relative to nonveteran
applicants.
* The views and findings expressed herein are those of the author and do not purport to reflect the position of the U.S.
Military Academy, the Department of the Army, or the Department of Defense. I am grateful to Josh Angrist, David
Autor, Alex Bartik, Brendan Price, Manasi Deshpandi, and seminar participants at M.I.T. and the United States Military
Academy. I would also like to thank Luke Gallagher of the Army Office of Economic Manpower Analysis and Marisa
Michaels of the Defense Manpower Data Center for outstanding research assistance and data collection.
122
3.1
Introduction
With over 5,300 soldiers killed as a result of hostile action in Afghanistan and Iraq, military
operations from September 2001 to date have marked the most lethal period of sustained conflict since the
beginning of the all-volunteer force (U.S. Department of Defense, 2015). In addition to the immediate
dangers of combat, political and military leaders widely interpret soldier suicides, which doubled between
2004 and 2009, as a symptom of the high operational tempo associated with recent conflicts (U.S. Army,
2010). Fear of respiratory complications from exposure to burn pits and concerns over the impact of
improvised explosive devices on traumatic brain injury add to the public debate surrounding the long-term
health consequences of military service (Institute of Medicine, 2011, and CDC, NIH, DoD, and VA
Leadership Panel, 2013).
Attempts to quantify the effect of recent military service on mortality typically rely on comparisons of
veterans to nonveterans. Using mortality data through 2005, Kang and Bullman (2009) find that veterans
of Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF) are at significantly lower risk
for all-cause mortality, but at slightly higher risk for suicide, than nonveterans after adjusting for age,
race, and gender. A more recent study using mortality data through 2011 finds that OEF/OIF veterans are
at greater risk of all-cause mortality than nonveterans, even after excluding combat deaths (Bollinger,
Schmidt, Pugh, Parsons, Copeland, and Pugh, 2015). These studies suffer from two major drawbacks.
First, they do not capture the entire association between military service and mortality because they
ignore deaths that occur during combat. Second, their findings are difficult to interpret because
comparisons of veterans to nonveterans suffer from selection bias, the direction of which is not
immediately clear. On one hand, many potential recruits do not meet the military's strict medical
standards, a fact that potentially explains why WWII veterans typically live longer than nonveterans of
the same age (Seltzer and Jablon, 1974). On the other hand, veterans who volunteered for military service
might be less risk-averse than nonveterans.
This chapter uses plausibly exogenous variation in applicant test scores to avoid the selection bias
inherent in comparisons of veterans to nonveterans. The U.S. Army requires most applicants to earn a 31
123
on their Armed Forces Qualification Test (AFQT) before enlisting. Moreover, many enlistment bonuses
are only available to applicants with an AFQT of 50 or higher. Using these cutoffs as instruments for
military service in a novel fuzzy regression discontinuity (RD) design, I find that military service
potentially reduces all-cause mortality for U.S. Army soldiers who applied between 1996 and 2007.80
However, I cannot rule out positive effects. To increase precision, I construct Ordinary Least Squares
(OLS) estimates of the effects of military service while controlling for applicant characteristics that
predict enlistment (age, race, test scores, education, state of birth), similar to the identification strategy
employed by Angrist (1998). OLS estimates indicate that veterans are 0.24 percentage points less likely to
die (as of February 2010) than similar nonveteran applicants, a difference equal to 18 percent of the
incidence of mortality among nonveteran applicants. Importantly, OLS estimates are similar to fuzzy RD
estimates and hardly change with the inclusion of controls that indicate if an applicant tested positive for
drugs or had at least one medically disqualifying condition.
I also find that soldiers are substantially less likely to die, relative to similar nonveteran applicants,
within a year of applying to join the military. Among potential recruits who applied prior to September
2001, the initial mortality gap between soldiers and nonveteran applicants widened (i.e. grew more
negative) before leveling off around 4 years after applying. However, among potential recruits who
applied between October 2001 and September 2004, the mortality gap narrowed before leveling off 4 to 5
years after applying. For soldiers in this latter group, the relative period of danger during their second,
third, and fourth years of service coincided with the most lethal years of operations in Iraq. Still, the net
effect of military service on mortality for the latter group of applicants is near zero, suggesting that
military service does not substantially increase mortality even during times of relative conflict.
I also estimate the effect of military service on receipt of Disability Compensation (DC) from the
Department of Veterans Affairs (VA). Since the purpose of DC is to compensate veterans for injuries
sustained or aggravated during military service, DC receipt should be strongly correlated with veteran
health. DC also helps verify the stability and precision of two stage least squares (2SLS) estimates for binary outcomes that have more variation than mortality. Fuzzy RD estimates imply that military service increases DC receipt by 24 to 28 percentage points. This is probably a close approximation of the effect of military service on receipt of any disability benefits, including DC, Social Security Disability Insurance (SSDI), and Supplemental Security Income (SSI), because only 3 percent of adults under the age of 40 receive SSDI or SSI.81 One potential explanation for the large effect of military service on DC receipt is that military service increases morbidity but not mortality. At the same time, factors other than health, such as the relative attractiveness of DC when job opportunities are scarce, could also contribute to high DC enrollment rates, a possibility discussed in Chapter 1 as well as in Angrist et al. (2010) and McNally and Frueh (2013).

80 As far as I know, this is the first attempt to use such an RD design to identify causal effects of military service. Unfortunately, the Social Security Administration's Public Death Master File (DMF), my source for mortality data, does not identify specific causes of death.
My results are broadly consistent with several studies that exploit Vietnam conscription lotteries to
identify causal effects of military service prior to the advent of the all volunteer force. Although Hearst,
Newman, and Hulley (1986) find an increase in deaths from suicides and motor vehicle accidents among
draft-eligible men, more recent research suggests that any elevated death risks among draft eligible men,
to include cause-specific death risks, had largely dissipated by 2002 (Conley and Heerwig, 2012).
Angrist, Chen, and Frandsen (2010) find a causal effect of Vietnam-era military service on receipt of any
Federal transfer income, particularly DC. Evidence from Australia's Vietnam draft lottery also indicates
that military service substantially increases disability receipt, but not post-service mortality, among
Australian conscripts (Siminski and Ville, 2011, and Siminski, 2013).
The analysis proceeds as follows. Section 1 describes how potential recruits apply to enlist in the U.S.
Army. Section 2 describes my data and discusses descriptive statistics. Section 3 explains my empirical
strategy, a fuzzy RD design that exploits U.S. Army testing thresholds, and presents first stage results.
Section 4 discusses 2SLS estimates and compares them to OLS estimates. Section 5 investigates how the
effect of military service on mortality changes over time and Section 6 concludes.
81 I am currently in the process of acquiring SSDI, SSI, earnings, and employment outcomes from the SSA. In 2013, only 3.3 percent of nonveteran men between the ages of 25 and 39 received SSDI or SSI, compared to 3.1 percent for veteran men of the same age (American Community Survey (ACS), 2013).
3.2 Institutional Background
All persons interested in enlisting in the U.S. Army must first visit their local U.S. Army recruiting
office. After determining that a potential recruit meets basic age, citizenship, and background
requirements, the recruiter will schedule a two day appointment for the applicant at one of 65 Military
Entrance and Processing Stations (MEPS). All applicants take the Armed Services Vocational Aptitude
Battery (ASVAB) during their first day at the MEPS while the second day consists predominately of
physical and medical examinations.
Four of the 11 tests within the ASVAB contribute to an applicant's raw Armed Forces Qualification
Test (AFQT) score, which is then converted to a scaled AFQT score that represents the percentile-rank
(1-99) of an applicant's arithmetic and verbal reasoning skills relative to a nationally representative
sample of 18-23 year olds (U.S. Department of Defense, 2004).82 The AFQT has an important role in
ensuring the quality of recruits for the All Volunteer Force. Law prohibits applicants with AFQT scores
below 10 and non-high school graduates with AFQT scores below 31 from enlisting in any branch of the
military. The Department of Defense further requires that 90 percent of non-prior service (NPS) recruits
have high school diplomas, that at least 60 percent of recruits have AFQT scores of 50 or higher, and that
no more than 4 percent of recruits have AFQT scores below 31 (U.S. Department of Defense 2004, p. 23).

To meet DoD requirements, the U.S. Army rarely accepts applicants with AFQT scores below 31 and
generally only offers enlistment bonuses to applicants with AFQT scores of 50 or higher (U.S. Army
Recruiting Command, 2012). Applicants with low AFQT scores may retake the ASVAB one month after
their initial examination and can retest again one month after the initial retest. Applicants who wish to
retest a third time must wait an additional six months before doing so.
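The retest waiting periods just described can be sketched as a small scheduling helper. This is an illustrative sketch, not USMEPCOM logic; the month arithmetic (with day-of-month clamped to 28 to avoid invalid dates) is this sketch's assumption.

```python
from datetime import date

def add_months(d, months):
    """Return the date `months` later, clamping the day to 28 for safety."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

def earliest_retest_date(test_dates):
    """Earliest date an applicant may test again, given prior ASVAB dates.

    Rules from the text: one month after the initial exam, one month
    after the first retest, then an additional six months before any
    further retest.
    """
    last = max(test_dates)
    wait_months = 1 if len(test_dates) <= 2 else 6
    return add_months(last, wait_months)

# A first retest is allowed one month after the initial exam; a third
# retest requires a six-month wait after the second retest.
```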
Prior to July 2004, the DoD scaled AFQT scores against young adults selected to participate in the 1980 Profile of American Youth (PAY). After July 1st, 2004, the DoD scaled AFQT scores according to
82 Unless otherwise indicated, all subsequent references to AFQT scores refer to scaled AFQT scores.
the 1997 PAY.83 The 1997 norms produce lower AFQT scaled scores than the 1980 norms did. For
example, a scaled AFQT score of 31 under the 1980 PAY scale is equivalent to a scaled AFQT score of
26 under the 1997 PAY scale.84 Even though fewer applicants met enlistment test score qualifications
after the July 2004 re-norm, the U.S. Army did not adjust AFQT qualification requirements.
3.3 Data and Descriptive Statistics
My data combine administrative information for military enlistees with mortality information from
the Social Security Administration (SSA) and DC information from the VA. Military enlistment
information comes from an extract of the U.S.
Military Entrance and Processing Command
(USMEPCOM) Examination and Accessions File, which is warehoused at the Defense Manpower Data
Center (DMDC). The extract contains information on all persons who applied to join the Active Duty
Army between fiscal years 1996 and 2007. I further limit this sample to applicants with no prior military
service and who were between the ages of 17 and 34 at the time of their application.85 The extract
includes demographic information, each applicant's three most recent AFQT scores, each veteran's
MEPS site, and each veteran's state of residence at the time of his application.86 The extract also indicates
if an applicant tested positive for drugs and, for applicants who took their medical exam after August
2001, whether the individual had any medically disqualifying conditions. For applicants who enlisted, the
extract includes which service the applicant entered (e.g. Active Duty Army, Army National Guard, Army
Reserves, Active Duty Marine Corps, etc.) and the applicant's date of enlistment.
The United States Military Academy's Office of Economic and Manpower Analysis (OEMA) used social security numbers to merge military applicant data to VA disability records, which consist of a single cross-section of veterans enrolled in the DC program as of January 2014. VA records include each beneficiary's Combined Disability Rating (CDR) and monthly disability payment. Unfortunately, VA records do not indicate whether a veteran applied for an award but was ultimately denied. They also do not indicate when a veteran first started to receive compensation.

83 The 1980 PAY participants were members of the 1979 National Longitudinal Survey of Youth (NLSY) and 1997 PAY participants were members of the 1997 NLSY.

84 See Segall (2004) for a table that translates scaled AFQT scores under the 1997 PAY to scaled AFQT scores under the 1980 PAY (Appendix E).

85 I also exclude approximately 6 percent of applicants who took their ASVAB in high school as part of the ASVAB Career Exploration Program because these applicants observe their AFQT scores before applying. This generates a first order selection problem as most of these applicants only apply if their AFQT exceeds 31.

86 Less than 10 percent of applicants have no reported MEPS site. These individuals most likely took the ASVAB at a Mobile Examination Test (MET) site, a testing option for applicants who do not reside near a MEPS.
OEMA also used social security numbers to merge military applicant data to mortality information
found in the SSA's publicly available Death Master File (DMF). The DMF indicates the month of death
for individuals reported to the SSA as deceased through February 2015. In November 2011, the SSA
decided that it could no longer include protected state death records in the public DMF. As a result, the
organization removed 4.2 million death records from the public file and reduced the number of annual
additions to the DMF by 1 million, or approximately 40 percent of annual deaths in the U.S. (National
Technical Information Service, 2011). Appendix Table 3.1, which reports the total number of deaths
recorded in the DMF for each month between January 2009 and December 2011, reveals a substantial
decline in death records between January and April 2010.87 The table also indicates that reported death
rates in 2011 were just over half of the death rates reported in 2009. I therefore predominately rely on
DMF records from February 2010 or earlier to measure mortality in my analysis below.
Another potential drawback of the DMF is that it tends to under-report mortality for younger
individuals. Using the National Death Index (NDI) as the standard for mortality reporting, Hill and
Rosenwaike (2001) estimate that the DMF correctly reports 73 percent of deaths among persons aged 25
to 54 who died between 1994 and 1997.88 To investigate the accuracy of DMF data within my sample, I
used 2010 life tables from the Centers for Disease Control and Prevention to predict the percentage of applicants expected
to be deceased by 2010 based on gender, year of birth, and race. Appendix Table 3.2 compares these
predicted mortality percentages, reported in column 1, to actual mortality percentages for all applicants, applicants who enlisted in the military (veterans), and applicants who did not enlist in the military (nonveterans).89 Among white men and white women, both veterans and nonveterans have a slightly higher incidence of reported mortality than predicted mortality, potentially indicating that Army applicants are more prone to risk than the general population. However, among male and female black applicants, predicted mortality percentages are nearly twice as large as actual mortality percentages for both veterans and nonveterans. The DMF might under-report mortality for low-income households; I discuss how this could bias my results in more detail below. Appendix Table 3.2 also reveals a very low incidence of mortality among female applicants. Considering this, I restrict my sample to male applicants in the analysis that follows.

87 I observe a similar decline in death records within my sample of U.S. Army applicants.

88 I am currently in the process of merging my data with the National Death Index (NDI), which contains more accurate records on all-cause mortality and includes information on cause of death (Hill and Rosenwaike, 2001).
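The life-table prediction described above can be sketched by chaining one-year death probabilities. The q values below are hypothetical placeholders; the text draws the actual probabilities from 2010 CDC life tables by gender, year of birth, and race.

```python
def expected_mortality_share(q, age_at_application, years_elapsed):
    """Share of a cohort expected to die within `years_elapsed` years.

    Minimal sketch of a life-table calculation: multiply one-year
    survival probabilities (1 - q[age]) over the follow-up window and
    subtract from one. The q values supplied are hypothetical, not CDC
    figures.
    """
    surviving = 1.0
    for k in range(years_elapsed):
        surviving *= 1.0 - q[age_at_application + k]
    return 1.0 - surviving

# Example with a flat (hypothetical) 0.1 percent annual death probability:
q = {age: 0.001 for age in range(17, 40)}
share_dead_by_followup = expected_mortality_share(q, 18, 10)
```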
Panel A of Table 3.1 reports summary statistics for male applicants in my sample. Column 1 reports
statistics for all applicants while column 2 and column 3 report statistics on nonveterans (applicants with
no military service) and veterans (applicants with any military service), respectively. Veterans are less
likely to be minorities and are more likely to have a high school diploma than nonveterans. On average,
veterans also have substantially higher AFQT scores than nonveterans (59 relative to 47), reflecting the
Army's goal to recruit high quality soldiers. Notably, 26 percent of applicants were still enrolled in high
school at the time of their application.90
Nearly half of all applicants ultimately enlisted in the military. Of those who enlisted, 80 percent
joined the Active Duty Army, 9 percent joined the Reserve Component Army (the Army Reserve or the
Army National Guard), and another
11 percent enlisted in a separate military service. Slightly more than 1
percent of applicants were deceased by February 2010 and the share deceased is slightly higher among
veterans relative to nonveterans (1.25 percent relative to 1.10 percent). Column 3 also indicates that 23
percent of veterans received DC in January 2014. Of note, active duty service members may not receive
DC until they leave the service. Thus, the percentage of DC beneficiaries will likely grow over time as more soldiers within my sample leave the service and as veteran health diminishes with age.

89 My definition of "veteran" includes applicants currently serving in the military. This differs from the traditional definition, which indicates someone who previously served in the military but is not currently serving.

90 These applicants took their ASVAB at a MEPS site, not as part of the ASVAB Career Exploration Program discussed above. The vast majority of soldiers who apply while they are students complete high school before enlisting.
3.4 Empirical Strategy
The plots in Figure 3.1 depict the first stage that underlies my fuzzy RD identification strategy. The
left plot shows enlistment rates as a function of AFQT scores for applicants who applied prior to the July
2004 re-norm (pre-July 2004 applicants) and the right plot shows the same relationship for post-July 2004
applicants. As discussed above, July 2004 was the first month that USMEPCOM scaled AFQT scores
according to a new PAY sample. Both plots reveal a discontinuous jump in the probability of enlistment
when applicants achieve an AFQT score of 31 and again when applicants achieve an AFQT score of 50.
The jump at 50 is considerably smaller for post-July 2004 applicants than the same jump for pre-July
2004 applicants, but this is probably due to the high rate of enlistment bonuses offered to applicants with
AFQT scores below 50 as the Army grew to respond to operations in Iraq.
For the purpose of clarity, I henceforth refer to an initial AFQT score of at least 31 as an "enlistment
offer" and an AFQT score of at least 50 as a "bonus offer." In reality, the Army extends enlistment offers
to many applicants with AFQT scores below 31. Some of these offers occur when applicants earn higher
AFQT scores after retesting, but others are made to applicants with final AFQT scores below 31 but who
have otherwise strong applications. Similarly, the Army can offer enlistment bonuses to some applicants
with AFQT scores below 50, particularly if demand for new recruits is high.91 Nevertheless, the
discontinuities in enlistment rates around both thresholds (Figure 3.1) suggest that enlistment offers and
bonus offers, as just defined, can serve as instruments for military service.
A key threat to the validity of any RD strategy is the possibility of precise manipulation of the running variable around the threshold, as discussed in McCrary (2008) and Frandsen (2014). While direct manipulation of exam scores seems unlikely (most exams are computerized adaptive tests), the ability to retest until an applicant receives an enlistment offer or a bonus offer is potentially problematic. Ideally I would use an applicant's initial AFQT score to avoid this problem, but using the first on file (my data only contains each applicant's three most recent AFQT scores) is probably sufficient for two reasons. First, among applicants whose first AFQT score on file is below 31, 27 percent retested at least once and only 5 percent retested at least twice, suggesting that few retested more than twice. Second, precise manipulation of AFQT scores around either threshold can only occur if an applicant stops retaking the exam after he achieves his desired score. For example, consider an applicant who tested a total of four times. In all likelihood, this applicant did not achieve his desired score on his first exam or his second exam. If this is the case, then using either exam score as his running variable in an RD design would not show evidence of manipulation around a threshold.

91 Twenty percent of active duty Army enlistees with final AFQT scores below 50 received some sort of enlistment bonus. The Army awarded many of these bonuses in fiscal years 2005 and 2006 during the buildup to the surge in Iraq. By contrast, 65 percent of active duty Army enlistees with a final AFQT score above 50 received a bonus.
To visually inspect for manipulation of the running variable around either threshold, Figure 3.2
displays histograms of AFQT scores for pre-July 2004 applicants (left plot) and post-July 2004 applicants
(right plot). Both histograms reveal extreme bunching at certain AFQT scores. A comparison of the pre-July 2004 histogram to Table A-3 of Mayberry and Hiatt (1992), which displays the conversion of AFQT
raw scores to AFQT scale scores under the 1980 PAY scale, indicates that all of the bunching occurs
where two raw AFQT scores equate to a single scaled AFQT score. Likewise, since no raw scores
equated to a scaled score of 60 prior to the re-norm, no pre-July 2004 applicants have a scaled AFQT
score of 60. The histogram for post-July 2004 applicants also displays extreme bunching at some AFQT
scores, but not at the same scores where bunching occurs for pre-July 2004 applicants. This is because the
PAY97 scaling adjusted the AFQT scale scores that equate to multiple raw scores (Segall, 2004; Table
2.5). Importantly, there does not appear to be any bunching at AFQT scores adjacent to the thresholds of
31 and 50 in either histogram, arguing against the possibility that applicants can precisely manipulate AFQT scores.92
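The mechanical source of the histogram spikes, a many-to-one raw-to-scaled conversion, is easy to check given any conversion table. The table passed in below is a toy example, not the Mayberry and Hiatt (1992) table.

```python
from collections import Counter

def bunched_scaled_scores(raw_to_scaled):
    """Scaled scores that more than one raw score maps into.

    Wherever the conversion is many-to-one, a histogram of scaled scores
    will spike; this returns exactly those scaled values for a given
    conversion table.
    """
    counts = Counter(raw_to_scaled.values())
    return sorted(score for score, n in counts.items() if n > 1)

# Two raw scores (11 and 12) both convert to a scaled score of 31 in this
# toy table, so a histogram of scaled scores would spike at 31.
spikes = bunched_scaled_scores({10: 30, 11: 31, 12: 31, 13: 33, 14: 34})
```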
I use the following reduced form parametric estimating equation to obtain 2SLS estimates of the
effects of military service on desired outcomes:
(3.1)   Yi = α + ρZi,31 + δZi,50 + f0(Ai) + Zi,31 × f1(Ai) + Zi,50 × f2(Ai) + Xi′β + ηi

Yi is the outcome variable for applicant i, Ai is i's first AFQT score on file, Zi,31 indicates an enlistment offer and Zi,50 indicates a bonus offer (that is, Zi,31 = 1(Ai ≥ 31) and Zi,50 = 1(Ai ≥ 50)), and Xi is a
vector of 12 dummy variables indicating the fiscal year that i applied to join the U.S. Army and
potentially other individual covariates. I group applicants who took their first AFQT on file between July
and September 2004 into fiscal year 2005 to ensure that equation (3.1) only compares the effects of an
enlistment offer or a bonus offer for applicants who took the exam under the same scaled norms. I control
for the effects of the running variable, Ai, with three second-order polynomials: one for applicants with
AFQT scores below 31, another for applicants with AFQT scores in the interval [31, 49], and a third for
applicants with AFQT scores above 49. Specifically,
(3.2)   fj(Ai) = φ1j Ai + φ2j Ai² ;   j = 0, 1, 2
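The specification in equations (3.1) and (3.2) can be sketched as a design-row builder: a baseline quadratic plus threshold dummies and their interactions, so the polynomial is free to differ below 31, on [31, 49], and above 49. The column ordering and the `design_row` helper are this sketch's choices, not the author's code.

```python
def design_row(a, fy_dummies):
    """One row of the reduced-form design matrix for equation (3.1).

    Columns: intercept, the two offer indicators Z31 and Z50, the
    baseline quadratic f0(Ai), the interactions Z31*f1(Ai) and
    Z50*f2(Ai), then fiscal-year dummies (passed in by the caller).
    """
    z31 = 1.0 if a >= 31 else 0.0
    z50 = 1.0 if a >= 50 else 0.0
    return ([1.0, z31, z50,
             a, a ** 2,               # f0(Ai)
             z31 * a, z31 * a ** 2,   # Z31 x f1(Ai)
             z50 * a, z50 * a ** 2]   # Z50 x f2(Ai)
            + list(fy_dummies))
```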
The results that follow also include estimates of equation (3.1) using third and fourth order polynomial controls of the running variable. I also report nonparametric estimates of equation (3.1),
which only include linear controls for the running variable but weight observations according to the
following variant of a tent-shaped edge kernel:
Unfortunately, my data does not contain raw AFQT scores, which would provide a better running variable
than
scaled AFQT scores. The discrete nature of AFQT scores is also not amenable to the empirical test for manipulation
of the running variable suggested by McCrary (2008), which assumes a continuous running variable. Frandsen
(2014) devises a test that is similar to McCrary's, but allows for a discrete running variable. However, this test
assumes that the running variable is a discretized version of a continuous variable with equally spaced support bins.
This is clearly not the case for scaled AFQT scores.
92
132
Kh(Ai) = (1 − |Ai − 30.5| / h) × 1(|Ai − 30.5| ≤ h)   if Ai ≤ 40
Kh(Ai) = (1 − |Ai − 49.5| / h) × 1(|Ai − 49.5| ≤ h)   if Ai > 40
where h is the bandwidth. I begin with a default bandwidth of 15 before investigating alternative
bandwidths for nonparametric estimates. A bandwidth of 15 implies that the sample only contains
applicants with AFQT scores in the interval [16, 64]. Panel B of Table 3.1 reports summary statistics for applicants within this discontinuity window.93
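The kernel weighting can be sketched as follows. This is a reconstruction under an assumption: the garbled formula in the source is read as a tent centered between 30 and 31 for scores of 40 and below, and between 49 and 50 above 40, which reproduces the stated [16, 64] window exactly when h = 15.

```python
def edge_kernel_weight(a, h=15.0):
    """Tent-shaped edge-kernel weight for an AFQT score, bandwidth h.

    Assumption: tent centers at 30.5 (for a <= 40) and 49.5 (for a > 40);
    weight falls linearly to zero at distance h from the relevant center.
    """
    center = 30.5 if a <= 40 else 49.5
    return max(0.0, 1.0 - abs(a - center) / h)
```

With the default h = 15, only integer scores in [16, 64] receive positive weight, matching the discontinuity sample described in the text.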
Appendix Figures 3.1a - 3.1h suggest that applicant characteristics do not vary discontinuously
around the score thresholds of 31 and 50, providing further evidence that applicants cannot precisely
manipulate their AFQT scores.94 To complement these figures, Table 3.2 reports estimates of equation
(3.1) where the dependent variables are the baseline covariates. The results reveal little evidence of
systematic sorting around either threshold. Of note, nonparametric estimates, reported in panel C, suggest
statistically significant discontinuities in the probability of high school graduates around both thresholds.
However, Appendix Figures 3.1g and 3.1h both reveal a non-linear relationship between AFQT scores and high school graduation within the [16, 64] interval. Thus, the significant estimates reported in panel C are likely due to misspecification rather than systematic sorting around either threshold.95
Panel A of Table 3.3 reports first stage estimates of the effect of enlistment offers and bonus offers on
military service using AFQT score as the running variable. Columns 1 and 2 report parametric estimates of equation (3.1) with second and third order controls of the running variable, respectively. Column 3 reports nonparametric estimates of the same equation. The point estimate of 0.1015 in column 1 of panel A indicates that an enlistment offer increases the probability of enlistment by 10.15 percentage points.
93 When I examine each cutoff separately, I obtain optimal bandwidths of 1 or smaller for both mortality and DC outcomes using the technique described by Imbens and Kalyanaraman (2012) (IK). These bandwidths are too small for precise inference and are probably the result of extreme bunching at AFQT scaled scores that equate to multiple raw scores.

94 This figure does not include plots of all covariates for post-July 2004 applicants. The excluded plots do not reveal evidence of sorting around either threshold and are available from the author upon request.

95 It is also worth noting that an applicant's education status is determined at the time of his most recent ASVAB, not at the time of his first ASVAB on file. Applicants who retake the ASVAB are therefore mechanically more likely to be high school graduates.
Results from the same column also indicate that a bonus offer increases the probability of enlistment by
an additional 5.67 percentage points. Most of the estimates in Panel A indicate that enlistment offers
increase enlistment rates by 10 to 11 percentage points while bonus offers increase enlistment rates by an
additional 5 to 6 percentage points.
Using any military service as the only endogenous variable in a 2SLS model is potentially
problematic because 1) the AFQT threshold of 31 applies to the active Army and the reserve Army, but
not to other services, and 2) only the active duty Army extends bonus offers to applicants with AFQT
scores of 50 or higher. Subsequent panels of Table 3.3 therefore investigate whether enlistment offers and
bonus offers affect enlistment in the active duty Army (panel B), enlistment in a reserve component of the
Army (panel C), and enlistment in the Navy, Marines, Air Force, or Coast Guard (panel D). The results
indicate that enlistment offers predominately induce applicants to enlist in the active Army, but a few join
the reserve Army and some appear less likely to enlist in another military service (enlistment offers
decrease the probability of joining a non-Army service by 0.78 percentage points). Likewise, bonus offers
slightly decrease the probability of enlistment in the reserve Army or a non-Army service. While these
discontinuities potentially muddle the interpretation of 2SLS estimates, their consequences are probably
not substantial. The decrease in the probability of reserve service associated with the AFQT threshold of
50, the largest discontinuity associated with a service other than the active Army, is only about one-fifth
of the magnitude of the increase in the probability of active Army service at the same threshold. As an
additional check, the appendix to this chapter reports estimates from second stage models with two
endogenous variables: a dummy for any active duty service and a dummy for any reserve duty service.
3.5
Results
A. Effects of Military Service on Mortality and Veterans Disability Compensation
Figure 3.3 plots the reduced form effect of enlistment offers and bonus offers on mortality for pre-July 2004 applicants (left plot) and post-July 2004 applicants (right plot). The post-July 2004 plot
suggests that enlistment offers are associated with a small, but imprecise, decrease in mortality rates,
although there are no substantial jumps in mortality rates around other cutoffs.
Panel A of Table 3.4 reports the corresponding 2SLS estimates of the effects of military service on
mortality. The 2SLS specifications parallel the reduced-form estimates described by equation (3.1),
except that now enlistment offers and bonus offers act as instruments for any military service. Consistent
with the plots in Figure 3.3, the estimates in panel A reveal a negative but statistically insignificant effect
of military service on mortality. The point estimate of -0.0068 in column 1 suggests that military service
decreases mortality by 0.68 percentage points, which is substantial relative to mean mortality rates of 1.25
percentage points. However, the standard error of 0.0080 indicates that I cannot rule out positive effects
that are 75 percent larger than mean mortality rates. Subsequent columns in panel A of Table 3.4, which
report parametric 2SLS estimates with higher order polynomial controls for the running variable (columns 2 and 3) and nonparametric 2SLS estimates with varying bandwidths (columns 4-6), all indicate a
negative but insignificant effect of military service on mortality.
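The 2SLS logic behind these estimates can be stripped down to its just-identified core: with a single binary instrument, the estimate is the reduced-form jump divided by the first-stage jump. This sketch omits the running-variable controls and the second instrument in equation (3.1); `wald_estimate` is an illustrative helper, not the author's estimator.

```python
def wald_estimate(y, d, z):
    """Just-identified IV estimate from one binary instrument.

    y: outcome (e.g. an indicator for death), d: treatment (enlistment),
    z: instrument (e.g. an enlistment offer). The estimate is the
    difference in mean outcomes across instrument groups divided by the
    difference in enlistment rates across the same groups.
    """
    def group_mean(values, flags, target):
        picked = [v for v, f in zip(values, flags) if f == target]
        return sum(picked) / len(picked)

    reduced_form = group_mean(y, z, 1) - group_mean(y, z, 0)
    first_stage = group_mean(d, z, 1) - group_mean(d, z, 0)
    return reduced_form / first_stage
```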
Panel A of Table 3.4 and the corresponding reduced form plots in Figure 3.4 indicate that military
service increases DC receipt by 24 to 28 percentage points, and all of these estimates are significant. It is
not surprising that military service increases DC receipt because only veterans qualify for DC. Still, the
magnitudes of these estimates are substantial when compared to the relatively low incidence of federal
disability among similarly-aged men. According to the 2013 ACS, 1.5 percent of nonveteran men
between 25 and 39 years of age receive SSDI, relative to 2.1 percent of veteran men of the same age. For
SSI, these percentages are 1.2 for veterans and 2.2 for nonveterans. A roughly 25 percentage point
increase in DC receipt is also large relative to earlier wartime cohorts. By 2000, the causal effect of Vietnam-era military service on DC receipt was only 4 percentage points (Angrist et al., 2010; Table 4). It
is also worth noting that estimates for DC receipt are probably a lower bound because some soldiers in
my sample were still in the military as of January 2014, and thus not eligible for DC.
In an effort to increase precision, panel B of Table 3.4 reports 2SLS estimates using only a single
control for the running variable. The reduced form equation is therefore:
135
(3.3)   Yi = α + ρZi,31 + δZi,50 + f0(Ai) + Xi′β + ηi
Estimates of equation (3.3) using a second order polynomial control for AFQT, reported in panel B
(column 1), indicate a negative and marginally significant effect of military service on mortality.
However, this significance disappears with third and fourth order polynomial controls for the running variable,
as seen in columns 2 and 3, respectively. Notably, the upper bound on the 95 percent confidence interval
for the estimates reported in column 3 allows me to rule out an effect of enlistment on mortality that is 45
percent larger than mean mortality rates within my sample.96
A closer inspection of both plots in Figure 3.1 reveals a substantial decrease in the first stage slope
immediately to the right of the enlistment offer threshold and another, less substantial, slope change to the
right of the bonus offer threshold. These slope changes offer the possibility of adding Regression Kink
(RK) instruments to the RD identification strategy to increase power even further. Specifically, the
interaction of Zi,31 and a linear term for AFQT forms one instrument while the interaction of Zi,50 and a linear term for AFQT forms the other instrument. Nonparametric first stage results from a model that
includes two RD instruments and two RK instruments, reported in column 3 of Appendix Table 3.3,
reveal statistically significant slope changes around both cutoffs. First stage RK estimates under
parametric specifications are substantially less precise than corresponding nonparametric estimates.
However, even parametric estimates incorporating second order polynomial controls for AFQT indicate a
significant slope change at the 31 threshold and a marginally significant slope change at the 50 threshold.
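The instrument vector for the RD-RK model can be sketched as below. Centering each linear term at its cutoff (so the kink term is zero at the threshold) is this sketch's assumption; the text only says each dummy is interacted with a linear term for AFQT.

```python
def rd_rk_instruments(a):
    """Instrument vector for the 2SLS RD-RK model (illustrative).

    Two RD instruments (the threshold dummies at 31 and 50) plus two RK
    instruments (each dummy interacted with the running variable,
    centered at its cutoff).
    """
    z31 = 1.0 if a >= 31 else 0.0
    z50 = 1.0 if a >= 50 else 0.0
    return [z31, z50, z31 * (a - 31), z50 * (a - 50)]
```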
Panel C of Table 3.4 displays 2SLS estimates of the effects of military service using RD and RK
instruments (henceforth referred to as 2SLS RD-RK estimates). The nonparametric estimates reported in
column 4 are substantially more precise than 2SLS estimates using only RD instruments (2SLS RD
estimates): the standard errors for both mortality and disability outcomes decrease by nearly 60 percent
with the inclusion of RK instruments. However, mortality estimates derived from a nonparametric 2SLS
96 Although reported in panel B of Table 3.4, nonparametric estimates of (3.3) are likely to be biased because the underlying relationship between AFQT scores and military enlistment is clearly not linear, as seen in Figure 3.1.
RD-RK model with a bandwidth of 15 are in the opposite direction of the corresponding 2SLS RD estimates. This result, reported in column 4 of panel C, indicates that military service increases the
probability of mortality by 0.11 percentage points (approximately one-tenth the size of mean mortality
rates), although this estimate is still indistinguishable from 0.
Figure 3.3 suggests that the positive mortality estimate from the nonparametric 2SLS RD-RK model
is predominately driven by low mortality rates among applicants with AFQT scores below 23. On one
hand, this could reflect a positive effect of military service on mortality, a result that 2SLS RD estimates
cannot rule out. On the other hand, this could also be the result of poor mortality data among low-scoring
applicants. Several studies have documented a negative relationship between cognitive ability and
mortality.97 Yet within my data, both veterans and nonveterans with relatively low AFQT scores have
substantially lower mortality rates than applicants with higher AFQT scores. This can be seen in
Appendix Figure 3.2, which plots mortality rates against AFQT scores separately for veterans and
nonveterans (to facilitate interpretation, I group AFQT score into bins of 3). The figure also indicates that
veterans have lower mortality rates than nonveterans over nearly all AFQT scores. If the causal effect of military service on mortality is positive, then two conditions must hold: 1) soldiers must be more risk-averse than applicants who do not enlist, and 2) applicants with particularly low AFQT scores must be more risk-averse than applicants with higher AFQT scores.98
I cannot rule out either of these possibilities, but there are other reasons to believe the positive estimate
of military service on mortality is biased upwards. First, this estimate is not robust to different
bandwidths. Nonparametric 2SLS RD-RK estimates derived from a bandwidth of 8 suggest that military
service decreases mortality by 0.51 percentage points. Second, parametric 2SLS RD-RK estimates using
second order polynomial controls for AFQT suggest that military service decreases mortality by 0.54
Jokela, Elovainio, Singh-Manoux, and Kivimaki (2009) document a negative association between AFQT score
and early mortality among 1979 NLSY participants. See Pearce, Deary, Young, and Parker (2006), Batty, Deary,
and Gottfredson (2007), and Batty, Shipley, Mortensen, Boyle et. al. (2008) for systematic reviews on the link
between IQ and mortality.
98 By "risk-averse," I mean an applicant with a low probability of death absent
military service.
97
137
percentage points. Thus, nonparametric 2SLS RD-RK estimates derived from a bandwidth of 15 probably
reflect a non-linear relationship between AFQT and mortality rather than a discrete change in the slope of
the mortality-AFQT relationship at the enlistment offer threshold.
It is also worth mentioning that 2SLS RD-RK models estimating DC receipt produce very similar
point estimates, but substantially smaller standard errors, than the corresponding 2SLS RD models. In
particular, nonparametric 2SLS RD-RK estimates (column 4, panel C) indicate that military service
increases DC receipt by 25.5 percentage points, which is near the 2SLS RD point estimate of 26.6
percentage points (column 4, panel A). Furthermore, the standard error for the 2SLS RD-RK estimate is
small enough to detect an effect of 1.4 percentage points. This offers reassurances that the 2SLS RD-RK
identification strategy has enough power to investigate the effect of military service on earnings,
employment, education, and other outcomes.
B. OLS and Logit Estimates
To complement 2SLS estimates of the effects of military service on mortality and DC receipt, Table
3.5 reports OLS estimates of the following equation:
(3.4)	Yi = α + γDi + Xi′β + ei,

where Yi is the outcome variable for applicant i, Di is an indicator for enlistment in the military, and Xi
is a vector of controls that includes dummy variables for each application fiscal year, each AFQT score,
year of birth, race (white or nonwhite), 4 education levels (still in high school, GED or high school
dropout, high school diploma, some college or more), and state of birth. Similar to the identification
strategy used in Angrist (1998), OLS estimates of (3.4) are unbiased if factors that influence outcomes are
uncorrelated with military service after controlling for the individual characteristics captured in Xi.99
99 Angrist (1998) includes a saturated set of controls for each combination of application year, AFQT category,
education level, and year of birth to construct a covariate-specific comparison of veterans to nonveteran applicants. I
do not estimate a saturated model because this leads to many combinations of the covariates where no applicants are
deceased as of February 2010. These individuals would then be excluded from the logit estimates that follow.
Column 1 of Table 3.5 reports estimates of (3.4), but where Xi only includes indicator variables for
fiscal year of application. The statistically significant point estimate of -0.0020 implies that mortality rates
among veterans are 0.20 percentage points lower than mortality rates among nonveteran applicants after
accounting only for an applicant's cohort. Estimates that include the full set of controls for Xi, reported in
column 4, indicate that veterans are 0.24 percentage points less likely to be deceased as of February 2010
than similar nonveteran applicants, a point estimate that is 18 percent as large as the incidence of
mortality among nonveterans. Notably, average marginal effects from logit estimates of equation (3.4),
reported in column 5, are nearly identical to OLS estimates for both mortality and DC outcomes.
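The near-equivalence of linear probability model coefficients and logit average marginal effects for a rare binary outcome can be illustrated with simulated data. All magnitudes below are invented, and the logit is fit by a hand-coded Newton-Raphson rather than a packaged routine; this is a sketch of the comparison, not the chapter's estimation code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Simulated applicants: a service dummy and one control (age), with a
# rare mortality-like outcome.
d = rng.binomial(1, 0.5, size=n).astype(float)
age = rng.normal(20, 3, size=n)
X = np.column_stack([np.ones(n), d, age - 20])
latent = -4.2 - 0.25 * d + 0.05 * (age - 20)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-latent))).astype(float)

# Linear probability model: OLS coefficient on the service dummy.
ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Logit fit by Newton-Raphson (iteratively reweighted least squares).
b = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    w = p * (1 - p)
    b += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))

# Average marginal effect of the dummy: mean difference in predicted
# probabilities with d switched from 0 to 1 for every applicant.
X1, X0 = X.copy(), X.copy()
X1[:, 1], X0[:, 1] = 1.0, 0.0
ame = np.mean(1 / (1 + np.exp(-X1 @ b)) - 1 / (1 + np.exp(-X0 @ b)))

print(ols[1], ame)  # the two estimates should be nearly identical
```

With outcome rates this low, both estimators are effectively recovering the same covariate-adjusted difference in means, which is why the OLS and logit columns of Table 3.5 track each other so closely.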
A key advantage of OLS estimates is that they are substantially more precise than 2SLS RD and
2SLS RD-RK estimates. This is particularly important for outcomes that have little variation, such as
mortality among young adults. To facilitate comparison, columns 6 and 7 of Table 3.5 reproduce
parametric and nonparametric 2SLS RD-RK estimates. Importantly, OLS estimates are near the midpoint
of parametric and nonparametric 2SLS RD-RK estimates, offering reassurances that any unobservable
characteristics that influence selection into the military are probably not correlated with mortality.100 A key concern with this interpretation, however, is that the military selects only applicants who meet rigorous physical and health standards, which would cause OLS estimates of (3.4) to be biased downwards (i.e., more negative than the true effect of military service on mortality).101
To investigate whether military health standards explain the negative OLS estimates of military
service on mortality, Table 3.6 reports estimates of equation (3.4) with additional controls for drug test
and medical exam results. Testing positive for drugs or having a medically disqualifying condition
substantially reduces an applicant's chances of enlisting in the military. Among applicants in my sample who tested positive for drugs (mostly marijuana), fewer than one-third enlisted, relative to nearly 79
100 Note that even if this is true, OLS estimates might differ from 2SLS RD-RK estimates if applicants induced to enter the military as a result of an enlistment offer or a bonus offer are not representative of typical volunteers, as discussed in Angrist and Pischke (2009).

101 The means reported in Table 3.5 indicate that a few nonveterans (0.28 percent) receive DC. Most of these applicants are probably veterans who were incorrectly coded as not enlisting in the military.
percent of applicants who took a medical exam and did not test positive for drugs. Similarly, only 59 percent of applicants with at least one medically disqualifying condition enlisted.102
The military only administers drug tests and medical exams to applicants who attend the second day of the military application process.103 Column 1 of Table 3.6 therefore reports estimates of (3.4) using the
full RD sample while column 2 reports estimates using only applicants with drug test information on file.
The estimates reported in column 3 include an additional control that indicates whether an applicant
tested positive for drugs. Notably, testing positive for drugs is a strong predictor of early-life mortality:
applicants who tested positive for drugs are 0.35 percentage points more likely to be deceased than
applicants who tested negative for drugs. Importantly, however, the estimated effect of military service on
mortality remains negative and significant with the inclusion of a positive drug-test indicator (column 3),
suggesting that drug-screening does not fully explain the negative association between military service
and mortality.
As mentioned above, a further limitation of my data is that it only contains information on medically
disqualifying conditions, other than positive drug tests, for individuals who applied in September 2001 or
later. Subsequent columns of Table 3.6 therefore report estimates where the sample is restricted to
individuals who applied after August 2001. The negative relationship between military service and
mortality is substantially smaller in magnitude for individuals who applied after August 2001, a result I
discuss in greater detail below. Still, the estimates reported in columns 7 and 8 suggest that the
association between military service and mortality hardly changes with the inclusion of a medical failure
indicator. Furthermore, the estimated effect of any medical failure on mortality is not statistically different from zero.
102 The military may grant enlistment waivers to applicants with medically disqualifying conditions. Thus, the presence of a disqualifying condition does not necessarily preclude military service.

103 A disproportionate share of applicants who do not show for the medical exam have AFQT scores that qualify for neither an enlistment offer nor a bonus offer, as seen in Appendix Figure 3.3. As a result, restricting the sample to applicants with medical exam information would invalidate RD estimates because medical examinees appear to sort around both AFQT thresholds. It is also worth noting that about 10 percent of applicants who enlisted do not have a medical exam on file, thus indicating that medical exam data is not complete.
The results reported in Table 3.6 weigh against the hypothesis that health screening standards fully
explain the negative relationship between military service and mortality, at least within samples restricted
to applicants with valid medical exam information. Of course, even if positive selection on health factors
is not a major contributor to the negative association between military service and mortality, OLS
estimates of equation (3.4) provide less than conclusive evidence of a causal effect because veterans may
differ from nonveterans on unobservable dimensions.
C. Estimates by Subgroup
Table 3.7 explores OLS estimates of the effects of military service by race (white and non-white),
application timeframe (pre-September 2001 applicants and post-September 2001 applicants), and AFQT
score ranges. Since mortality outcomes to this point have defined mortality as being deceased as of
February 2010, Table 3.7 includes additional estimates where mortality is defined as being deceased as of
February 2015. A comparison of results for both of these outcomes generally indicates that the negative
relationship between military service and mortality persists through 2015. However, results for mortality
as of February 2015 should be interpreted with caution given the substantial reduction in Death Master
File Records after 2010, as discussed above.
The effects of military service on mortality and DC receipt appear similar for whites and nonwhites,
as suggested by the estimates reported in columns 2 and 3. On the other hand, the negative relationship
between military service and mortality is much stronger for pre-September 2001 applicants than the same
relationship for post-September 2001 applicants. While differential selection into the military could
partially explain these results, a more plausible explanation is that post-September 2001 applicants were
more likely to deploy to combat zones in Iraq and Afghanistan than pre-September 2001 applicants.
Consistent with mortality results by application timeframe, Table 3.7 also indicates that military service
has a larger effect on DC receipt for later applicants than for earlier applicants.
Columns 6 and 7 of Table 3.7 report estimates for two AFQT score ranges within the RD sample: applicants with AFQT scores in the interval [16, 42] and applicants with AFQT scores in the interval [43, 64]. To further investigate the possibility of heterogeneous treatment effects by AFQT score ranges, column 8 reports results for applicants with AFQT scores of 65 and higher. Notably, the effect of military
service on mortality appears to be much closer to zero for applicants within the highest AFQT category.
Thus, even if military service is protective for some individuals, it is not necessarily protective for
applicants with relatively high AFQT scores. On the other hand, the negative point estimates in column 8
still argue against the hypothesis that military service increases mortality.
Parametric and nonparametric 2SLS RD-RK estimates by race and application timeframe, as reported
in Appendix Table 3.4, are broadly consistent with OLS estimates reported in Table 3.7. One exception to
this is 2SLS estimates of the effect of military service on DC receipt for post-September 2001 applicants,
which are noticeably larger than OLS estimates. 2SLS estimates indicate that military service increases
DC receipt for post-September 2001 applicants by 31 to 35 percentage points, while OLS estimates only
suggest a 26 percentage point increase. Some of the difference between 2SLS and OLS estimates for later
applicants occurs because 2SLS "compliers"-applicants induced to enlist as a result of an enlistment
offer or a bonus offer-exit the military faster than typical enlistees. Although not reported, OLS
estimates of equation (3.4) where the outcome is a dummy variable that indicates if an applicant is still
serving in the active duty Army (as of September 2013) produce a point estimate of 0.175, implying that
17.5 percent of post-September 2001 veterans are still in the active duty Army and therefore ineligible for
DC benefits. This is significantly larger than the corresponding 2SLS estimates, which indicate that 12 to
13 percent of veteran 2SLS compliers are still in the active duty Army.
3.6 Mortality Trends over Time by Application Cohorts
The results up to this point suggest that military service does not increase mortality, but reveal little
about whether a veteran's time in the military or his post-service years are potentially protective or
dangerous in nature. To offer some descriptive evidence on this matter, Table 3.8 reports OLS estimates
of (3.4) where the outcomes indicate whether an applicant is deceased within a specified number of years
since applying for the military. To account for differences in service experience and data availability over
time, the table reports results by four groups of application cohorts: FY1996-FY1998 applicants, FY1999-FY2001 applicants, FY2002-FY2004 applicants, and FY2005-FY2007 applicants.104
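A minimal sketch of how such window-specific mortality indicators might be constructed follows. The dates and cutoff-handling here are hypothetical illustrations of the censoring logic described in footnote 104, not the actual data pipeline.

```python
from datetime import date

DATA_CUTOFF = date(2010, 2, 28)  # last month with valid Death Master File data

def deceased_within(applied, died, years):
    """1 if death occurred within `years` of applying, 0 if the applicant is
    known to survive the window, None if the window ends after the mortality
    data cutoff and the outcome is therefore unobservable."""
    # Clamp the day to 28 so adding years never produces an invalid date.
    window_end = date(applied.year + years, applied.month, min(applied.day, 28))
    if window_end > DATA_CUTOFF:
        return None  # e.g., a 10-year window for an FY2006 applicant
    if died is not None and died <= window_end:
        return 1
    return 0

# Hypothetical records: (application date, death date or None).
applicants = [
    (date(1997, 5, 10), None),               # survives the window
    (date(1997, 5, 10), date(1999, 2, 1)),   # dies within 3 years
    (date(2003, 8, 20), date(2009, 6, 15)),  # dies, but after the window
]
print([deceased_within(a, d, 3) for a, d in applicants])  # → [0, 1, 0]
```

Returning None rather than 0 for windows that extend past the data cutoff keeps censored observations out of the estimation sample instead of miscoding them as survivors.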
The results for all four applicant cohorts suggest that soldiers experience substantial reductions in mortality, relative to similar nonveteran applicants, during their first year of military service.105 For the
FY1996-FY1998 application cohort, military service is associated with a 0.12 percentage point
reduction in mortality one year after applying, a difference that is equal to 80 percent of the one-year
mortality rate for nonveterans in the FY1996-FY1998 applicant cohort. Subsequent estimates for the
FY1996-FY1998 cohort suggest that mortality among veterans declines by an additional 0.12
percentage points, relative to nonveterans, during a soldier's second and third years of service. Within 11
years of applying, mortality among veterans is 0.36 percentage points lower than mortality among nonenlistees. FY1999-FY2001 applicants (panel B) follow a similar trend to FY1996-FY1998 applicants.
Within this application cohort, veterans witness a sharp drop in mortality, relative to nonveterans, during
their first year of service, with further reductions during the second and third years of service. After three
years, the overall difference in mortality between veterans and nonveterans from the FY1999-FY2001
applicant cohort remains relatively stable.
Similar to other application cohorts, veterans from the FY2002-FY2004 cohort are substantially less
likely to die than their nonveteran counterparts within a year of applying. However, unlike earlier cohorts,
this trend reverses over the next three years. After four years of service, the net effect of military service
on mortality is nearly zero. This period of relative danger associated with military service, which is
unique to the FY2002-FY2004 cohort during their second through fourth years of military service
(roughly 2003-2005 for FY2002 applicants, and 2005-2007 for FY2004 applicants), coincided with
the most lethal years in Iraq (U.S. Department of Defense, 2015). Interestingly, veterans and nonveterans
104 Differences in data availability occur because I am unable to determine whether relatively recent applicants are deceased several years after applying. For example, I cannot determine if an FY2006 applicant is deceased within ten years of applying because I only have valid mortality data through February 2010.

105 By years of service, I actually mean years since applying. While most service members enlist within a month or two of applying, some enlist several months later.
have similar mortality rates between four and five years after applying, suggesting that the relative danger
associated with the military after the first year of service had leveled off by that time. This could be
attributed to a decline in hostilities in Iraq around 2007 and to soldiers leaving the service after
completing their initial four-year enlistment contract.
The results from this subsection are purely descriptive in nature. Many factors could cause the effects
of military service to vary by application cohort. Still, the results reported in Table 3.8 suggest that
military service is protective during a soldier's first year on duty. Furthermore, the gap in mortality
between veterans and nonveterans seems to persist, or even grow, during times of relative peace. On the
other hand, the protective effects of military service appear to diminish during times of relative hostility.
3.7 Conclusion
Any death of a soldier is a tragedy. Public concern over soldier deaths, whether by suicide, enemy
fire, or other causes, is therefore warranted. But the results presented here suggest that military service
does not increase mortality for individuals who applied to the active duty U.S. Army between 1996 and
2007. Although relatively imprecise, 2SLS estimates do not indicate that military service increases a
soldier's likelihood of death. OLS estimates that compare nonveteran applicants to soldiers with similar
characteristics (test scores, age, race, education, state of birth) provide additional evidence that military
service does not increase mortality. My findings also suggest that soldiers experience substantial
reductions in mortality during their first year in the Army. This is not terribly surprising considering that a
soldier's first year of service is characterized by close supervision from drill instructors, first line
supervisors, and other non-commissioned officers. Importantly, the gap in mortality between soldiers and
nonveteran applicants diminishes during times of relative hostility. However, the net effect of military
service on mortality appears to be close to zero even for veteran cohorts that were most likely to deploy to a
combat zone.
Active duty Army soldiers and Marines bore the brunt of hostile casualties in Afghanistan and Iraq.
Thus, the results presented here could potentially reflect an upper bound on the mortality consequences of
military service, at least for enlisted personnel. Of course, a negative relationship between military service
and all-cause mortality could mask effects for specific causes of death, as suggested by Conley and Heerwig (2012). For example, military service could conceivably increase the probability of suicide while
reducing the probability of all-cause mortality. My results also reveal little about the long-term effects of
military service because I only have valid mortality data through early 2010. My ongoing efforts to merge
military applicant data with the National Death Index, which includes all-cause mortality and cause-specific mortality information through 2013, should resolve some of these questions.
While I find little evidence that military service increases mortality, my results indicate that military
service during the past two decades increases DC receipt by 24 to 28 percentage points. By comparison,
Angrist et al. (2010) estimate that Vietnam-era military service increased DC receipt by only 4 percentage
points nearly 30 years after the conclusion of the Vietnam War. The substantial, positive effect of recent
military service on DC receipt complicates the interpretation of mortality results. One possible
explanation is that military service reduces overall health and quality of life in manners that do not lead to
early-life mortality. Another interpretation is that large gains in DC receipt could potentially reflect health
problems not explicitly caused by military service. Alternatively, temporary reductions in earnings that
typically occur when soldiers depart the service, as documented by Angrist (1990) and Angrist (1998),
could induce some veterans to turn to DC to offset such reductions.
The last possibility is particularly concerning considering recent evidence that DC receipt reduces
labor force participation among Vietnam-era veterans (Autor, Duggan, Greenberg, and Lyle, 2015).
Future research on the causal effects of military service during recent decades is warranted, and the fuzzy
RD identification strategy presented in this chapter offers a new method for obtaining such estimates. I
am currently working with the Social Security Administration (SSA) to merge military applicant data
with employment, earnings, SSDI, and SSI outcomes. This and other research will hopefully shed more
light on the complicated effects of military service during the post-9/11 era.
3.8 Chapter 3 Appendix: Multiple Endogenous Variables
Appendix Table 3.5 reports OLS and 2SLS estimates of models with two endogenous variables: an indicator for active duty military service and an indicator for reserve duty military service.106 The OLS estimates reported in panel A suggest that reserve duty is more protective than active duty service. Still, the point estimate of -0.0020 is very similar to the OLS estimate of the effect of any military service on mortality (-0.0024, Table 3.5), reflecting how most individuals who apply to join the active duty Army
ultimately enlist in that service. 2SLS estimates of the effects of active duty service on mortality are also
similar to the corresponding estimates using only one endogenous variable, but estimates of the effects of
reserve duty service are very imprecise.
Panel B of Appendix Table 3.5 indicates that the effect of active duty service on DC receipt is stronger than the effect of reserve duty service on DC receipt. OLS estimates reported in column 1 suggest that the association between active duty service and DC receipt is twice as large as the association
between reserve duty service and DC receipt. Nonparametric 2SLS RD-RK estimates of the effects of
reserve duty service on DC receipt, reported in column 3, are precise enough to confirm that this
association is likely a causal effect.
106 Reserve duty military service implies enlisting in the Army National Guard, the Army Reserves, or similar reserve duty for the Air Force, Navy, or Marines. Applicants who enlist in the reserves and are called up for Active Duty service are still considered as performing reserve duty within my data.
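A system with two endogenous service variables can be estimated with the same four excluded instruments. The sketch below is a stylized simulation (all magnitudes invented) in which the bonus-offer threshold shifts applicants from reserve toward active duty, loosely mirroring the signs in panels B and C of Table 3.3; it is not the appendix's actual estimation code.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

afqt = rng.integers(16, 65, size=n).astype(float)
u = rng.normal(size=n)  # unobserved confounder
z31 = (afqt >= 31).astype(float)
z50 = (afqt >= 50).astype(float)

# Two mutually exclusive enlistment outcomes. The bonus-offer threshold
# raises active duty enlistment while lowering reserve duty enlistment.
p_active = np.clip(0.30 + 0.003 * afqt + 0.25 * z31 + 0.15 * z50 + 0.05 * u, 0, 1)
active = (rng.uniform(size=n) < p_active).astype(float)
p_reserve = np.clip(0.15 + 0.10 * z31 - 0.10 * z50 + 0.02 * u, 0, 1) * (1 - active)
reserve = (rng.uniform(size=n) < p_reserve).astype(float)

# Outcome with separate true effects for each service type.
y = (0.30 * active + 0.15 * reserve + 0.005 * afqt + 0.10 * u
     + rng.normal(scale=0.1, size=n))

controls = np.column_stack([np.ones(n), afqt])
X = np.column_stack([active, reserve, controls])  # two endogenous regressors
Z = np.column_stack([z31, z50, z31 * (afqt - 31), z50 * (afqt - 50), controls])

# Manual 2SLS with two endogenous variables: four excluded instruments
# satisfy the order condition, and the opposite-signed shifts at the
# 50 threshold separate the two first stages (the rank condition).
Pz_X = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(Pz_X, y, rcond=None)[0]
print(beta[0], beta[1])  # active duty and reserve duty effect estimates
```

Because the reserve first stage moves far less at the thresholds than the active duty first stage, the reserve coefficient comes out much noisier, the same pattern noted for the 2SLS estimates in Appendix Table 3.5.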
[Figure 3.1: two panels titled "Percentage who Enlist in Any Service - Pre July 2004 Renorm" (left) and "Percentage who Enlist in Any Service - Post July 2004 Renorm" (right); the x-axis of each panel is First AFQT on File.]
Figure 3.1. This figure plots the percentage of applicants who enlist in any military service by AFQT score for applicants who applied before the July 2004 ASVAB renorm (left) and after the renorm (right).
[Figure 3.2: two panels titled "Distribution of Initial AFQT Score - Pre July 2004 Renorm" (left) and "Distribution of Initial AFQT Score - Post July 2004 Renorm" (right); the x-axis of each panel is First AFQT on File.]
Figure 3.2. This figure plots the distribution of the first AFQT scores on file for individuals who applied before the July 2004 ASVAB renorm (left) and after the renorm (right).
[Figure 3.3: two panels titled "Percentage Deceased - Pre July 2004 Renorm" (left) and "Percentage Deceased - Post July 2004 Renorm" (right); the x-axis of each panel is First AFQT on File.]
Figure 3.3. This figure plots mortality percentages, as of February 2010, by AFQT score for pre-July 2004 applicants (left) and post-July 2004 applicants (right).
[Figure 3.4: two panels titled "Percentage Receiving DC - Pre July 2004 Renorm" (left) and "Percentage Receiving DC - Post July 2004 Renorm" (right); the x-axis of each panel is First AFQT on File.]
Figure 3.4. This figure plots DC receipt percentages, as of January 2014, by AFQT score for pre-July 2004 applicants (left) and post-July 2004 applicants (right).
Table 3.1. Summary Statistics

                              A. Full Sample                     B. RD Window
                        All        No Mil     Any Mil     All        No Mil     Any Mil
                        Applicants Service    Service     Applicants Service    Service
                        (1)        (2)        (3)         (1)        (2)        (3)
Age at application      20.3       20.5       20.1        20.1       20.4       19.8
                        [3.5]      [3.7]      [3.2]       [3.3]      [3.6]      [3.1]
AFQT                    53.0       46.7       58.7        41.7       38.4       44.9
                        [23.9]     [25.4]     [20.8]      [13.3]     [13.8]     [11.9]
White                   0.74       0.70       0.77        0.70       0.68       0.73
Black                   0.18       0.21       0.15        0.22       0.24       0.20
Enrolled in HS          0.26       0.27       0.25        0.26       0.27       0.26
HS Graduate             0.55       0.52       0.57        0.50       0.47       0.52
Single (no dependents)  0.90       0.91       0.89        0.90       0.90       0.89
No MEPS Indicated       0.09       0.09       0.09        0.09       0.09       0.09
Any Military Service    0.52       0          1           0.51       0          1
Active Army Service     0.42       0          0.80        0.40       0          0.79
Reserve/Guard Army      0.05       0          0.09        0.05       0          0.10
Other Military Service  0.06       0          0.11        0.06       0          0.11
Deceased by FEB2010     0.0118     0.0125     0.0110      0.0125     0.0136     0.0115
DC Receipt by JAN2014   0.12       0          0.23        0.12       0          0.24
Observations            1,101,703  529,947    571,756     668,149    330,233    337,916

Notes: This table reports descriptive statistics for men who applied to join the U.S. Army during fiscal years 1996 through 2007. Columns 1, 2, and 3 report descriptive statistics for all applicants, nonveterans, and veterans, respectively. AFQT scores indicate an applicant's first AFQT score on file (military records include AFQT scores from the three most recent exams). An applicant's age is determined on the day of his first AFQT on file while his education and marital status are determined on the day of his most recent exam. Other military service includes service in the Marines, the Air Force, the Navy, and the Coast Guard. Panel B reports descriptive statistics for applicants whose first AFQT scores on file are in the interval [16, 64]. Standard deviations are reported in brackets.
Table 3.2. Covariate Balance

                              Parametric Estimates                                Nonparametric Estimates
                              2nd Order Poly            3rd Order Poly            BW = 15
Baseline              Mean    1(AFQT≥31)  1(AFQT≥50)    1(AFQT≥31)  1(AFQT≥50)    1(AFQT≥31)  1(AFQT≥50)
Characteristic        (1)     (2)         (3)           (4)         (5)           (6)         (7)
Age                   20.09   -0.0141     -0.0284       -0.0410     -0.0026       0.0031      -0.0196
                      [3.33]  (0.0304)    (0.0284)      (0.0424)    (0.0384)      (0.0213)    (0.0194)
White                 0.70    -0.0026     -0.0005       -0.0063     -0.0064       0.0060*     -0.0009
                              (0.0044)    (0.0037)      (0.0061)    (0.0050)      (0.0031)    (0.0025)
Black                 0.22    0.0015      0.0030        0.0017      0.0080*       -0.0040     0.0027
                              (0.0041)    (0.0032)      (0.0057)    (0.0043)      (0.0029)    (0.0022)
Still In High School  0.26    0.0004      -0.0013       -0.0036     -0.0046       -0.0028     -0.0009
                              (0.0040)    (0.0037)      (0.0056)    (0.0050)      (0.0028)    (0.0025)
High School Diploma   0.50    0.0006      -0.0022       -0.0031     0.0058        -0.0102***  0.0076***
                              (0.0046)    (0.0042)      (0.0064)    (0.0057)      (0.0032)    (0.0025)
Single                0.90    0.0013      -0.0014       0.0010      -0.0022       -0.0012     0.0007
                              (0.0027)    (0.0026)      (0.0038)    (0.0035)      (0.0019)    (0.0019)
No MEPS Indicated     0.09    0.0010      -0.0009       0.0000      -0.0044       -0.0011     -0.0002
                              (0.0027)    (0.0025)      (0.0037)    (0.0033)      (0.0018)    (0.0017)
P Val (Joint χ2 Test)         0.996       0.606         0.569       0.402         0.000       0.039
Observations                  668,149     668,149       668,149     668,149       668,149     668,149

Notes: This table reports discontinuities in covariates estimated using models like the reduced-form specification described by equation (1). Columns 2, 4, and 6 report discontinuities at the AFQT = 31 threshold while columns 3, 5, and 7 report discontinuities at the AFQT = 50 threshold. Columns 2 and 3 report estimates from a parametric equation with three 2nd order polynomial controls for the running variable: one for AFQT scores less than 31, another for AFQT scores between 31 and 49, and another for AFQT scores greater than 49. Columns 4 and 5 report estimates from a parametric equation with three 3rd order polynomial controls for the running variable. Columns 6 and 7 report estimates from a nonparametric equation with three linear controls for the running variable. Nonparametric estimates weight observations according to the variant of the edge kernel described in the text using a bandwidth of 15. All estimates include dummy controls for fiscal year of application. Joint χ2 tests are for the null hypothesis that discontinuities in all regressions are equal to zero. Robust standard errors are reported in parentheses. ***, **, and * denote significance at the 1%, 5%, and 10% level, respectively.
Table 3.3. First Stage Estimates

                              Parametric Estimates            Nonparametric
                              2nd Order Poly  3rd Order Poly  BW = 15
Instrument                    (1)             (2)             (3)

A. Endogenous Variable: Any Military Service
1(AFQT≥31)                    0.1015***       0.1080***       0.1049***
                              (0.0045)        (0.0062)        (0.0032)
1(AFQT≥50)                    0.0567***       0.0509***       0.0506***
                              (0.0042)        (0.0056)        (0.0029)
First Stage F-Statistic       407.3           174.1           615.0
Observations                  668,149         668,149         668,149

B. Endogenous Variable: Active Duty Army Service
1(AFQT≥31)                    0.1019***       0.1027***       0.1027***
                              (0.0042)        (0.0059)        (0.0030)
1(AFQT≥50)                    0.0740***       0.0674***       0.0711***
                              (0.0042)        (0.0057)        (0.0029)
First Stage F-Statistic       523.4           198.4           752.5
Observations                  668,149         668,149         668,149

C. Endogenous Variable: Reserve Component Army Service
1(AFQT≥31)                    0.0078***       0.0089***       0.0086***
                              (0.0021)        (0.0030)        (0.0015)
1(AFQT≥50)                    -0.0127***      -0.0131***      -0.0131***
                              (0.0019)        (0.0025)        (0.0013)
First Stage F-Statistic       26.1            20.3            85.4
Observations                  668,149         668,149         668,149

D. Endogenous Variable: Other Military Service
1(AFQT≥31)                    -0.0078***      -0.0033         -0.0060***
                              (0.0021)        (0.0029)        (0.0015)
1(AFQT≥50)                    -0.0049**       -0.0039         -0.0076***
                              (0.0020)        (0.0028)        (0.0014)
First Stage F-Statistic       11.6            1.5             19.5
Observations                  668,149         668,149         668,149

Notes: This table reports parametric and nonparametric first stage estimates of the effects of enlistment offers (AFQT≥31) and bonus offers (AFQT≥50) on enlistment in any military service (panel A), enlistment in the active duty Army (panel B), enlistment in a reserve component of the Army (panel C), and enlistment in a non-Army service (panel D). All estimates include dummy controls for fiscal year of application and three polynomial (parametric estimates) or linear (nonparametric estimates) controls for the running variable: one for AFQT scores less than 31, another for AFQT scores between 31 and 49, and another for AFQT scores greater than 49. Robust standard errors are reported in parentheses. ***, **, and * denote significance at the 1%, 5%, and 10% level, respectively.
Table 3.4. 2SLS Estimates

                               Parametric Estimates              Nonparametric Estimates (Edge Kernel)
                      2nd Order    3rd Order    4th Order       BW = 15      BW = 12      BW = 8
Dependent Variable       (1)          (2)          (3)            (4)          (5)          (6)

A. 2SLS RD Estimates
Deceased by FEB10      -0.0068      -0.0097      -0.0083        -0.0094      -0.0084      -0.0066
                      (0.0080)     (0.0122)     (0.0143)       (0.0065)     (0.0069)     (0.0078)
E[Yi]                   0.0125       0.0125       0.0125         0.0125       0.0127       0.0128
DC Receipt by JAN14   0.2814***    0.2417***    0.2544***      0.2664***    0.2656***    0.2607***
                      (0.0214)     (0.0322)     (0.0380)       (0.0171)     (0.0181)     (0.0205)
E[Yi]                   0.1220       0.1220       0.1220         0.1220       0.1234       0.1267

B. 2SLS RD Estimates, Single Control for Running Variable
Deceased by FEB10      -0.0080      -0.0062      -0.0118*       -0.0020      -0.0034      -0.0057
                      (0.0057)     (0.0058)     (0.0064)       (0.0034)     (0.0039)     (0.0048)
E[Yi]                   0.0125       0.0125       0.0125         0.0125       0.0127       0.0128
DC Receipt by JAN14   0.2655***    0.2729***    0.2742***      0.2503***    0.2531***    0.2562***
                      (0.0166)     (0.0154)     (0.0161)       (0.0091)     (0.0103)     (0.0127)
E[Yi]                   0.1220       0.1220       0.1220         0.1220       0.1234       0.1267

C. 2SLS RD-RK Estimates
Deceased by FEB10      -0.0054      -0.0123      -0.0123         0.0011      -0.0013      -0.0051
                      (0.0053)     (0.0114)     (0.0126)       (0.0026)     (0.0031)     (0.0042)
E[Yi]                   0.0125       0.0125       0.0125         0.0125       0.0127       0.0128
DC Receipt by JAN14   0.2764***    0.2388***    0.2553***      0.2554***    0.2576***    0.2585***
                      (0.0148)     (0.0314)     (0.0349)       (0.0071)     (0.0085)     (0.0114)
E[Yi]                   0.1220       0.1220       0.1220         0.1220       0.1234       0.1267

Observations           668,149      668,149      668,149        668,149      599,218      473,276

Notes: This table reports 2SLS estimates of the effects of any military service on mortality and DC receipt, where enlistment offers (AFQT≥31) and
bonus offers (AFQT≥50) serve as instruments for any military service. Estimates in panel A include three polynomial or linear controls for the
running variable: one for AFQT scores less than 31, another for AFQT scores between 31 and 49, and another for AFQT scores greater than 49.
Estimates in panel B include a single polynomial or linear control for the running variable. Estimates in panel C include two additional instruments:
enlistment offers interacted with a linear term for the running variable and bonus offers interacted with a linear term for the running variable. All
regressions include dummy controls for fiscal year of application. Robust standard errors are reported in parentheses. ***, **, and * denote
significance at the 1%, 5%, and 10% level, respectively.
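As a rough illustration of the 2SLS setup behind panel A, the sketch below runs the same kind of regression on synthetic data. Everything here — the sample size, the offer take-up rates, and the 0.25 "effect" — is invented for the example; only the construction of the instruments (offer dummies at the two AFQT cutoffs) and the piecewise controls for the running variable mirrors the specification described in the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
afqt = rng.integers(16, 65, n).astype(float)      # running variable

# Instruments: enlistment-offer and bonus-offer dummies at the cutoffs
z1 = (afqt >= 31).astype(float)
z2 = (afqt >= 50).astype(float)

# Synthetic first stage and outcome (true enlistment effect = 0.25)
enlist = (rng.random(n) < 0.2 + 0.3 * z1 + 0.2 * z2).astype(float)
y = 0.25 * enlist + 0.002 * afqt + rng.normal(0.0, 1.0, n)

# Piecewise-linear controls for the running variable, one slope per segment
controls = np.column_stack([
    np.ones(n),
    afqt * (afqt < 31),
    afqt * ((afqt >= 31) & (afqt < 50)),
    afqt * (afqt >= 50),
])

Z = np.column_stack([z1, z2, controls])   # instruments + exogenous controls
X = np.column_stack([enlist, controls])   # endogenous regressor + controls

# 2SLS: project X on Z, then regress y on the fitted values
Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(Xhat, y, rcond=None)[0]
print(beta[0])  # recovers roughly the true effect of 0.25
```

Regressing y on the fitted values of X reproduces the standard 2SLS estimator, since the exogenous controls project onto themselves.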
Table 3.5. OLS, Logit, and 2SLS Estimates

                                           OLS                                  Logit       2SLS RD-RK
                                                                                        2nd Order      NP
                                                                                           Poly     (BW=15)
Dependent Variable        (1)         (2)         (3)         (4)         (5)         (6)         (7)

Deceased by FEB10      -0.0020***  -0.0019***  -0.0025***  -0.0024***  -0.0024***   -0.0054      0.0011
                        (0.0003)    (0.0003)    (0.0003)    (0.0003)    (0.0003)    (0.0053)    (0.0026)
E[Yi|Di=0]               0.0136      0.0136      0.0136      0.0136      0.0136      0.0136      0.0136

DC Receipt by JAN14     0.2341***   0.2372***   0.2368***   0.2357***   0.2356***   0.2770***   0.2553***
                        (0.0007)    (0.0007)    (0.0008)    (0.0008)    (0.0008)    (0.0148)    (0.0070)
E[Yi|Di=0]               0.0031      0.0031      0.0031      0.0031      0.0031      0.0031      0.0031

App FY Controls                         X           X           X           X           X           X
YOB Controls                                                    X           X
AFQT Controls                                       X           X           X           X           X
Additional Controls                                             X           X
Observations            668,149     668,149     668,149     668,149     668,149     668,149     668,149

Notes: This table reports OLS, Logit, and 2SLS estimates of the effects of military service on mortality and DC receipt. The E[Yi|Di=0] rows report mean
outcomes for persons who applied but did not join the military. Column 1 reports estimates from a regression of the dependent variable on an indicator for
military enlistment; column 2 adds controls for application fiscal year. Estimates in column 3 include additional indicators for each possible AFQT score, and
estimates in columns 4 and 5 include additional controls for year of birth, race (white or nonwhite), education level (still in high school, high school dropout or
GED, high school diploma, some college), and state of birth. Columns 6 and 7 report parametric and nonparametric 2SLS RD-RK estimates. 2SLS
estimates include three polynomial controls (column 6) or linear controls (column 7) for the running variable: one for AFQT scores less than 31, another
for AFQT scores between 31 and 49, and another for AFQT scores greater than 49. Robust standard errors are reported in parentheses. ***, **, and *
denote significance at the 1%, 5%, and 10% level, respectively.
Table 3.6. Mortality Estimates with Drug and Medical Test Controls

Dependent Variable: Deceased as of FEB10

                           (1)        (2)        (3)        (4)        (5)        (6)        (7)        (8)
Any Military Service    -0.0024*** -0.0030*** -0.0025***  -0.0009**   -0.0007    -0.0003    -0.0008    -0.0005
                         (0.0003)   (0.0004)   (0.0005)   (0.0004)   (0.0006)   (0.0006)   (0.0006)   (0.0006)
Positive Drug Test                              0.0036***                        0.0028**               0.0027**
                                               (0.0009)                         (0.0013)               (0.0013)
Other Medical Failure                                                                       -0.0009    -0.0008
                                                                                           (0.0006)   (0.0006)
Drug Tested Applicants                 X          X                      X          X          X          X
Post-AUG01 Applicants                                         X          X          X          X          X
E[Yi|Di=0]                0.0136     0.0152     0.0152     0.0103     0.0108     0.0108     0.0108     0.0108
Observations             668,149    381,863    381,863    310,629    179,232    179,232    179,227    179,227

Notes: This table reports OLS estimates of the effects of military service on mortality as of February 2010 for applicants with AFQT scores in the interval
[16, 64]. Column 1 reports estimates from the full RD sample, while columns 2, 3, and 5 through 8 limit the RD sample to applicants with a valid drug test on
file. Columns 4 through 8 further limit the sample to individuals who applied in September 2001 or later, since medical exam information is not available for
earlier applicants. "Other medical failure" indicates whether an applicant failed any portion of his medical exam other than the drug test. All regressions include
indicator variables for each application fiscal year, each possible AFQT score, year of birth, race (white or nonwhite), education level (still in high school,
high school dropout or GED, high school diploma, some college), and state of birth. Robust standard errors are reported in parentheses. ***, **, and * denote
significance at the 1%, 5%, and 10% level, respectively.
Table 3.7. OLS Estimates by Race, Application Timeframe, and AFQT Score

Dependent              Full RD                             FY96-FY01   FY02-FY07   16≤AFQT     43≤AFQT
Variable               Sample    Non-whites     Whites    Applicants  Applicants     ≤42         ≤64      AFQT≥65
                         (1)         (2)          (3)         (4)         (5)         (6)         (7)        (8)

Deceased by FEB10     -0.0024***  -0.0015***  -0.0027***  -0.0037***  -0.0008**   -0.0025***  -0.0023*** -0.0010***
                       (0.0003)    (0.0005)    (0.0004)    (0.0004)    (0.0004)    (0.0004)    (0.0004)   (0.0004)
E[Yi|Di=0]              0.0136      0.0113      0.0148      0.0163      0.0102      0.0131      0.0145     0.0114

Deceased by FEB15     -0.0029***  -0.0024***  -0.0030***  -0.0042***  -0.0012***  -0.0029***  -0.0028***  -0.0007*
                       (0.0003)    (0.0006)    (0.0004)    (0.0005)    (0.0004)    (0.0005)    (0.0005)   (0.0004)
E[Yi|Di=0]              0.0177      0.0149      0.0189      0.0203      0.0143      0.0169      0.0187     0.0144

DC Receipt by JAN14    0.2357***   0.2219***   0.2409***   0.2119***   0.2640***   0.2367***   0.2349***  0.2241***
                       (0.0008)    (0.0014)    (0.0009)    (0.0010)    (0.0011)    (0.0012)    (0.0010)   (0.0009)
E[Yi|Di=0]              0.0031      0.0033      0.0030      0.0047      0.0010      0.0021      0.0046     0.0055

Observations           668,149     197,748     470,401     363,319     304,830     340,006     328,143    373,707

Notes: This table reports OLS estimates of the effects of military service on mortality and DC receipt. Column 1 reports results for the full RD sample, which
includes applicants with AFQT scores in the interval [16, 64]. Columns 2 through 7 limit the RD sample to the subgroups identified in the column header.
Column 8 reports estimates for applicants with AFQT scores in the interval [65, 99]. All regressions include indicator variables for each application fiscal
year, each possible AFQT score, year of birth, race (white or nonwhite), education level (still in high school, high school dropout or GED, high school
diploma, some college), and state of birth. Robust standard errors are reported in parentheses. ***, **, and * denote significance at the 1%, 5%, and 10%
level, respectively.
Table 3.8. Effects of Military Service on Mortality Over Time by Application Cohort

Years Since    FY96 - FY98 Applicants    FY99 - FY01 Applicants    FY02 - FY04 Applicants    FY05 - FY07 Applicants
Application    E[Yi|Di=0]      OLS       E[Yi|Di=0]      OLS       E[Yi|Di=0]      OLS       E[Yi|Di=0]      OLS

1                0.0015     -0.0012***     0.0019     -0.0017***     0.0019     -0.0015***     0.0029     -0.0018***
                            (0.0002)                  (0.0002)                  (0.0002)                  (0.0002)
2                0.0029     -0.0019***     0.0032     -0.0024***     0.0033     -0.0013***     0.0052     -0.0019***
                            (0.0002)                  (0.0002)                  (0.0003)                  (0.0004)
3                0.0042     -0.0024***     0.0046     -0.0027***     0.0048     -0.0007*
                            (0.0003)                  (0.0003)                  (0.0004)
4                0.0054     -0.0027***     0.0061     -0.0027***     0.0062     -0.0002
                            (0.0003)                  (0.0004)                  (0.0004)
5                0.0065     -0.0028***     0.0072     -0.0024***     0.0079     -0.0002
                            (0.0003)                  (0.0004)                  (0.0005)
6                0.0076     -0.0031***     0.0086     -0.0022***
                            (0.0004)                  (0.0004)
7                0.0086     -0.0030***     0.0104     -0.0022***
                            (0.0004)                  (0.0005)
8                0.0100     -0.0033***     0.0120     -0.0024***
                            (0.0004)                  (0.0005)
9                0.0113     -0.0035***
                            (0.0005)
10               0.0127     -0.0034***
                            (0.0005)
11               0.0143     -0.0036***
                            (0.0005)
Observations                 183,380                   179,939                   161,464                   143,366

Notes: This table reports OLS estimates of the effects of military service on mortality within the number of years specified at left. The E[Yi|Di=0] columns report
mean outcomes for persons who applied but did not join the military. The OLS columns report estimates from a regression of the dependent variable on an
indicator for military enlistment while including indicator variables for each application fiscal year, each possible AFQT score, year of birth, race (white or
nonwhite), education level (still in high school, high school dropout or GED, high school diploma, some college), and state of birth. Robust standard errors are
reported in parentheses. ***, **, and * denote significance at the 1%, 5%, and 10% level, respectively.
[Appendix Figure 3.1: scatter plots of applicant covariates by First AFQT on File (scores 20-60). Panels: a) Age at Application; b) Percentage White; c) Percentage Black; d) Percentage without Dependents — all pre-July 2004 renorm.]
Appendix Figure 3.1. This figure plots covariate percentages by AFQT score for pre-July 2004 applicants.
[Appendix Figure 3.1, continued: e) Percentage with no MEPS Indicated; f) Percentage Still in High School; g) Percentage High School Diploma — pre-July 2004 renorm; h) Percentage High School Diploma — post-July 2004 renorm. Horizontal axis: First AFQT on File.]
Appendix Figure 3.1 (Continued). Plots (e) - (g) are for pre-July 2004 applicants. Plot (h) is for post-July 2004 applicants.
[Appendix Figure 3.2: two scatter plots, "Percentage Deceased - Pre July 2004 Renorm" and "Percentage Deceased - Post July 2004 Renorm," by First AFQT on File (scores 20-60), with separate series for Veterans and Nonveterans.]
Appendix Figure 3.2. This figure plots mortality percentages, as of February 2010, for soldiers (red) and nonveteran applicants (blue) by AFQT score.
[Appendix Figure 3.3: two histograms, "Distribution of Initial AFQT Score - Pre July 2004 (with Med Exam)" and "Distribution of Initial AFQT Score - Post July 2004 (with Med Exam)," by First AFQT on File (scores 20-60).]
Appendix Figure 3.3. The plots in this figure show the distribution of the first AFQT score on file for applicants with valid medical exam results.
Appendix Table 3.1. Monthly SSA DMF Records

Year    Month        Number of Deaths
2009    January          220,803
2009    February         198,570
2009    March            213,890
2009    April            200,708
2009    May              198,417
2009    June             188,936
2009    July             192,675
2009    August           190,134
2009    September        187,656
2009    October          205,111
2009    November         199,449
2009    December         215,643
2010    January          218,633
2010    February         184,776
2010    March            124,982
2010    April            111,554
2010    May              113,166
2010    June             107,505
2010    July             115,191
2010    August           110,469
2010    September         99,617
2010    October          108,992
2010    November         109,174
2010    December         117,647
2011    January          120,953
2011    February         109,699
2011    March            114,022
2011    April            106,075
2011    May              105,134
2011    June             100,188
2011    July             101,406
2011    August           102,071
2011    September         99,918
2011    October          106,300
2011    November         105,547
2011    December         115,348

Notes: This table indicates the number of deaths
recorded in the Social Security Administration's
Public Death Master File for each month between
January 2009 and December 2011.
Appendix Table 3.2. Predicted vs. Documented Mortality

                                     Death Master File Merge Rates
                 Predicted Mortality   All Applicants    Veterans    Nonveterans
                     Percentage
                         (1)                (2)             (3)          (4)

A. White Men           0.0104             0.0124          0.0115       0.0135
B. Black Men           0.0187             0.0101          0.0094       0.0106
C. White Women         0.0045             0.0049          0.0043       0.0053
D. Black Women         0.0072             0.0039          0.0033       0.0042

Notes: Column 1 of this table reports predicted mortality percentages, by race
and gender, based on 2010 U.S. Life Tables, available at
http://www.cdc.gov/nchs/data/nvsr/nvsr63/nvsr63_07.pdf. Columns 2, 3, and 4
report merge rates with the public SSA Death Master File (DMF) for all
applicants (column 2), applicants who enlist (column 3), and applicants who do
not enlist (column 4). A merge with the DMF is defined as a matching Social
Security Number and a matching month of birth.
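The merge rule described in the notes — a DMF match requires both the Social Security Number and the month of birth to agree — can be sketched as follows (the field names and values here are invented for illustration, not the actual file layout):

```python
# Death Master File records and applicant records as (SSN, birth month) pairs
dmf = {("111-11-1111", 3), ("222-22-2222", 7)}
applicants = [("111-11-1111", 3), ("111-11-1111", 4), ("333-33-3333", 7)]

# An applicant is flagged deceased only if BOTH fields match a DMF record
matched = [rec in dmf for rec in applicants]
print(matched)  # [True, False, False]
```

The second applicant shares an SSN with a DMF record but not a birth month, and the third shares only a birth month, so neither counts as a match.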
Appendix Table 3.3. RD and RK First Stage

                                      Parametric Estimates                    Nonparametric Estimates
                                2nd Order Poly       3rd Order Poly                  BW = 15
Instrument                      (1)        (2)       (3)        (4)             (5)         (6)

1(AFQT≥31)                   0.1015***  0.1015***  0.1080***  0.1080***      0.1049***   0.1049***
                             (0.0045)   (0.0045)   (0.0062)   (0.0062)       (0.0032)    (0.0032)
1(AFQT≥50)                   0.0567***  0.0567***  0.0509***  0.0509***      0.0506***   0.0506***
                             (0.0042)   (0.0042)   (0.0056)   (0.0056)       (0.0029)    (0.0029)
1(AFQT≥31) × AFQT                      -0.0157***            -0.0079**                  -0.0172***
                                        (0.0012)              (0.0033)                   (0.0004)
1(AFQT≥50) × AFQT                       -0.0022*               0.0001                   -0.0024***
                                        (0.0012)              (0.0030)                   (0.0004)
First Stage F-Statistic        407.3      446.4      174.1       96.9          615.0      2122.6
Observations                  668,149    668,149    668,149    668,149        668,149     668,149

Notes: This table reports first stage estimates from parametric and nonparametric 2SLS RD and 2SLS RD-RK specifications. The endogenous
(LHS) variable for all estimates is an indicator for any military service. The RD instruments are enlistment offers (AFQT≥31) and bonus offers
(AFQT≥50), while the RK instruments interact enlistment offers and bonus offers with a linear AFQT term. All estimates include dummy controls
for fiscal year of application and a linear control for AFQT (the running variable). Parametric estimates using second order polynomial controls
for the running variable also include the following controls: (AFQT²) × 1(AFQT<31), (AFQT²) × 1(AFQT≥31), and (AFQT²) × 1(AFQT≥50).
Parametric estimates using third order polynomial controls include these additional controls: (AFQT³) × 1(AFQT<31), (AFQT³) × 1(AFQT≥31),
and (AFQT³) × 1(AFQT≥50). Robust standard errors are reported in parentheses. ***, **, and * denote significance at the 1%, 5%, and 10%
level, respectively.
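Constructing the RK instruments named in the notes is mechanical — each offer dummy is interacted with the linear AFQT term. A minimal sketch with invented score values:

```python
import numpy as np

afqt = np.array([25, 31, 40, 50, 60], dtype=float)  # running variable

# RD instruments: offer dummies at the two cutoffs
z_enlist = (afqt >= 31).astype(float)   # enlistment offer
z_bonus = (afqt >= 50).astype(float)    # bonus offer

# RK instruments: offer dummies interacted with the linear AFQT term
rk_enlist = z_enlist * afqt
rk_bonus = z_bonus * afqt

print(rk_enlist)  # [ 0. 31. 40. 50. 60.]
print(rk_bonus)   # [ 0.  0.  0. 50. 60.]
```

The RK instruments are zero below each cutoff and equal to the AFQT score above it, so they pick up kinks in the relationship between the score and enlistment.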
Appendix Table 3.4. 2SLS Estimates by Race and Application Timeframe

Dependent              Full RD                              FY96-FY01    FY02-FY07
Variable               Sample     Non-whites     Whites     Applicants   Applicants
                         (1)          (2)          (3)          (4)          (5)

A. Parametric 2SLS RD-RK Estimates: 2nd Order Polynomial Control for RV
Deceased by FEB10      -0.0054      -0.0108      -0.0046      -0.0055      -0.0049
                       (0.0053)     (0.0080)     (0.0071)     (0.0073)     (0.0075)
E[Yi]                   0.0125       0.0106       0.0134       0.0147       0.0099
Deceased by FEB15      -0.0026      -0.0093      -0.0015      -0.0050       0.0000
                       (0.0061)     (0.0093)     (0.0080)     (0.0082)     (0.0088)
E[Yi|Di=0]              0.0163       0.0139       0.0173       0.0184       0.0138
DC Receipt by JAN14   0.2764***    0.3020***    0.2645***    0.2207***    0.3499***
                       (0.0148)     (0.0241)     (0.0190)     (0.0178)     (0.0249)
E[Yi]                   0.1220       0.1061       0.1287       0.1075       0.1394

B. Nonparametric 2SLS RD-RK Estimates: Bandwidth = 15
Deceased by FEB10       0.0011      -0.0003       0.0009       0.0005       0.0019
                       (0.0026)     (0.0038)     (0.0036)     (0.0037)     (0.0036)
E[Yi]                   0.0125       0.0106       0.0134       0.0147       0.0099
Deceased by FEB15       0.0033       0.0026       0.0029       0.0023       0.0046
                       (0.0030)     (0.0045)     (0.0040)     (0.0042)     (0.0042)
E[Yi|Di=0]              0.0163       0.0139       0.0173       0.0184       0.0138
DC Receipt by JAN14   0.2554***    0.2501***    0.2592***    0.2119***    0.3144***
                       (0.0071)     (0.0113)     (0.0092)     (0.0088)     (0.0115)
E[Yi]                   0.1220       0.1061       0.1287       0.1075       0.1394

Observations            668,149      197,748      470,401      363,319      304,830

Notes: This table reports 2SLS estimates of the effects of military service on mortality and DC receipt.
Column 1 reports results for the full RD sample, which includes applicants with AFQT scores in the
interval [16, 64]. Columns 2 through 5 limit the RD sample to the subgroups identified in the column
header. All regressions include indicator variables for each application fiscal year. Robust standard errors
are reported in parentheses. ***, **, and * denote significance at the 1%, 5%, and 10% level,
respectively.
Appendix Table 3.5. 2SLS Estimates with 2 Endogenous Variables

                               A. DV: Deceased by FEB10             B. DV: DC Receipt by JAN14
                                         2SLS RD-RK                           2SLS RD-RK
                            OLS     2nd Order   NP (BW=15)       OLS      2nd Order   NP (BW=15)
                                       Poly                                  Poly
Endogenous Variable         (1)        (2)         (3)            (1)        (2)         (3)

Any Active Duty Service  -0.0020***  -0.0051     -0.0021       0.2498***  0.2846***   0.2794***
                          (0.0003)   (0.0057)    (0.0053)      (0.0008)   (0.0159)    (0.0151)
Any Reserve Service      -0.0054***  -0.0152      0.0209       0.1290***  -0.0112      0.1037
                          (0.0005)   (0.0632)    (0.0269)      (0.0018)   (0.1841)    (0.0772)

First Stage F-Statistics
Any Active Duty Service                428.1     1,598.2                    428.1      1,598.2
Any Reserve Service                     16.2       263.9                     16.2        263.9
Observations              668,149    668,149     668,149        668,149   668,149     668,149

Notes: This table reports OLS and 2SLS RD-RK estimates of the effects of active duty and reserve duty military service on mortality and DC
receipt. All regressions include dummy controls for fiscal year of application. OLS estimates include additional indicators for each possible
AFQT score as well as controls for year of birth, race (white or nonwhite), education level (still in high school, high school dropout or GED,
high school diploma, some college), and state of birth. Robust standard errors are reported in parentheses. ***, **, and * denote significance
at the 1%, 5%, and 10% level, respectively.
References
Abbott, Michael and Orley Ashenfelter. 1976. "Labour Supply, Commodity Demand and the Allocation
of Time," Review of Economic Studies, 43(3), 389-411.
Air Force Health Study. 2000. "Air Force Health Study: An Epidemiological Investigation of Health
Effects in Air Force Personnel Following Exposure to Herbicides. 1997 Follow-up Examination
Results." Brooks Air Force Base: Air Force Research Laboratory.
American Psychiatric Association. 1994. Diagnostic and Statistical Manual of Mental Disorders: DSM-IV.
Washington, DC: American Psychiatric Association.
Angrist, Josh. 1990. "Lifetime Earnings and the Vietnam-Era Draft Lottery: Evidence from Social
Security Administrative Records," American Economic Review, 80(3), 313-336.
Angrist, Josh. 1998. "Estimating the Labor Market Impact of Voluntary Military Service Using Social
Security Data on Military Applicants." Econometrica, 66(2), 249-288.
Angrist, Josh. 2001. "Estimation of Limited Dependent Variable Models with Dummy Endogenous
Regressors." Journal of Business & Economic Statistics, 19(1).
Angrist, Josh and Jorn-Steffen Pischke. 2009. Mostly Harmless Econometrics: An Empiricist's
Companion. Princeton: Princeton University Press.
Angrist, Josh, Stacey Chen, and Brigham Frandsen. 2010. "Did Vietnam Veterans Get Sicker in the
1990s? The Complicated Effects of Military Service on Self-Reported Health." Journal of Public
Economics, 94(11), 824-837.
Ashenfelter, Orley and James J. Heckman. 1974. "The Estimation of Income and Substitution Effects in a
Model of Family Labor Supply." Econometrica, 42(1), 73-85.
Autor, David H., Mark Duggan, Kyle Greenberg, and David S. Lyle. 2015. "The Impact of Disability
Benefits on Labor Supply: Evidence from the VA's Disability Compensation Program." NBER
Working Paper.
Autor, David H. and Mark G. Duggan. 2003. "The Rise in the Disability Rolls and the Decline in
Unemployment." Quarterly Journal of Economics, 118(1), 157-206.
Autor, David H. and Mark G. Duggan. 2007. "Distinguishing Income from Substitution Effects in
Disability Insurance." American Economic Review Papers and Proceedings, 97, 119-124.
Autor, David H., Mark G. Duggan, and David Lyle. 2011. "Battle Scars? The Puzzling Decline in
Employment and Rise in Disability Receipt among Vietnam Era Veterans." American Economic
Review Papers and Proceedings, 101, 339-344.
Autor, David H., Nicole Maestas, Kathleen J. Mullen and Alexander Strand. 2015. "Does Delay Cause
Decay? The Effect of Administrative Decision Time on the Labor Force Participation and
Earnings of Disability Applicants." NBER Working Paper.
Batty, G. David, Ian J. Deary, and Linda S. Gottfredson. 2007. "Premorbid (early life) IQ and Later
Mortality Risk: Systematic Review." Annals ofEpidemiology, 17(4), 278-288.
Batty, G. David, M.J. Shipley, L.H. Mortensen, S.H. Boyle, J. Barefoot, M. Gronbaek, C.R. Gale, and I.J.
Deary. 2008. "IQ in Late Adolescence/Early Adulthood, Risk Factors in Middle Age and Later
All-Cause Mortality in Men: The Vietnam Experience Study." Journal of Epidemiology and
Community Health, 62(6), 522-531.
Benitez-Silva, Hugo, Moshe Buchinsky, Hiu Man Chan, Sofia Cheidvasser, and John Rust. 2004. "How
Large is the Bias in Self-Reported Disability?" Journal of Applied Econometrics, 19(6), 649-670.
Bilmes, Linda J. and Joseph E. Stiglitz. 2008. The Three Trillion Dollar War: The True Cost of the Iraq
Conflict. New York: W.W. Norton and Company.
Black, Dan, Kermit Daniel, and Seth Sanders. 2002. "The Impact of Economic Conditions on
Participation in Disability Programs: Evidence from the Coal Boom and Bust." American
Economic Review, 92, 27-50.
Bollinger, Mary J., Susanne Schmidt, Jacqueline A. Pugh, Helen M. Parsons, Laurel A. Copeland, and
Mary Jo Pugh. 2015. "Erosion of the Healthy Soldier Effect in Veterans of U.S. Military Service in
Iraq and Afghanistan." Population Health Metrics, 13(8).
Bonds, Timothy M., Dave Baiocchi, and Laurie L. McDonald. 2010. Army Deployments to OIF and OEF.
Santa Monica, CA: RAND Arroyo Center.
Borghans, Lex, Anne Gielen, and Erzo Luttmer. 2014. "Social Support Substitution and the Earnings
Rebound: Evidence from a Regression Discontinuity in Disability Insurance Reform." American
Economic Journal: Economic Policy, 6(4), 34-70.
Bound, John. 1989. "The Health and Earnings of Rejected Disability Insurance Applicants." American
Economic Review, 79(3), 482-503.
Bound, John and Richard Burkhauser. 1999. "Economic Analysis of Transfer Programs Targeted on
People with Disabilities." Handbook of Labor Economics, Volume 3 (eds. Orley Ashenfelter and
David Card), Amsterdam, North-Holland.
Bound, John and Timothy Waidmann. 1992. "Disability Transfers, Self-Reported Health, and the Labor
Force Attachment of Older Men: Evidence from the Historical Record." Quarterly Journal of
Economics, 1393-1419.
Boyle, Melissa A. and Johanna Lahey. 2010. "Health Insurance and the Labor Supply Decisions of Older
Workers: Evidence from a US Department of Veterans Affairs Expansion." Journal of Public
Economics, 94(7-8), 467-478.
Bradley, Rebekah, Jamelle Greene, Eric Russ, Lissa Dutra, and Drew Westen. 2005. "A Multidimensional
Meta-Analysis of Psychotherapy for PTSD." American Journal of Psychiatry, 162(2), 214-227.
Brewin, Chris R, Ruth A Lanius, Andrei Novac, Ulrich Schnyder, and Sandro Galea. 2009.
"Reformulating PTSD for DSM-V: Life after Criterion A." Journal of Traumatic Stress, 22(5),
366-373.
Calvert, G., Sweeney, M., Deddens, J. and D. Wall. 1999. "An Evaluation of Diabetes Mellitus, Serum
Glucose, and Thyroid Function among U.S. Workers Exposed to 2,3,7,8-tetrachlorodibenzo-p-dioxin." Occupational and Environmental Medicine, 56(4), 270-276.
CDC, NIH, DoD, and VA Leadership Panel. 2013. Report to Congress on Traumatic Brain Injury in the
United States: Understanding the Public Health Problem Among Current and Former Military
Personnel. Centers for Disease Control and Prevention (CDC), the National Institutes of Health
(NIH), the Department of Defense (DoD), and the Department of Veterans Affairs (VA).
Cesur, Resul, Joseph J Sabia, and Erdal Tekin. 2013. "The Psychological Costs of War: Military Combat
and Mental Health." Journal of Health Economics, 32(1), 51-65.
Chen, Susan and Wilbert van der Klaauw. 2008. "The Work Disincentive Effects of the Disability
Insurance Program in the 1990s." Journal of Econometrics, 142(2), 757-784.
Chetty, Raj. 2012. "Bounds on Elasticities with Optimization Frictions: A Synthesis of Micro and Macro
Evidence on Labor Supply." Econometrica, 80(3), 969-1018.
Congressional Budget Office. 2014. Veterans Disability Compensation: Trends and Policy Options.
Congressional Budget Office, August 2014.
Coile, Courtney, and Jonathan Gruber. 2007. "Future Social Security Entitlements and the Retirement
Decision." Review of Economics and Statistics, 89(2), 234-246.
Coile, Courtney, Mark Duggan and Andrew Guo. 2015. "Veterans' Labor Force Participation: What
Role Does the VA's Disability Compensation Program Play?" forthcoming in American
Economic Review Papers and Proceedings.
Conley, Dalton, and Jennifer Heerwig. 2012. "The Long-Term Effects of Military Conscription on
Mortality: Estimates from the Vietnam-Era Draft Lottery." Demography, 49(3), 841-855.
Duggan, Mark, Perry Singleton and Jae Song. 2007. "Aching to Retire? The Rise in the Full Retirement
Age and its Impact on the Disability Rolls." Journal of Public Economics, 91(7), 1327-1350.
Duggan, Mark, Robert Rosenheck and Perry Singleton. 2006. "Federal Policy and the Rise in Disability
Enrollment: Evidence for the VA's Disability Compensation Program." NBER Working Paper.
Duggan, Mark, Robert Rosenheck and Perry Singleton. 2010. "Federal Policy and the Rise in Disability
Enrollment: Evidence for the VA's Disability Compensation Program." Journal of Law and
Economics, 53, 379-398.
Eriksson, Stefan and Dan-Olof Rooth. 2014. "Do Employers Use Unemployment as a Sorting Criterion
When Hiring? Evidence from a Field Experiment." The American Economic Review, 104(3),
1014-1039.
Frandsen, Brigham R. 2014. "Party Bias in Union Representation Elections: Testing for Manipulation in
the Regression Discontinuity Design When the Running Variable is Discrete." BYU Working
Paper.
French, Eric and Jae Song. 2014. "The Effect of Disability Insurance Receipt on Labor Supply."
American Economic Journal: Economic Policy, 6(2), 291-337.
Frisch, Ragnar. 1959. "A Complete Scheme for Computing All Direct and Cross Demand Elasticities in a
Model with Many Sectors," Econometrica, 27(2), 177-196.
Frueh, B Christopher, Anouk L Grubaugh, Jon D Elhai, and Todd C Buckley. 2007. "US Department of
Veterans Affairs Disability Policies for Posttraumatic Stress Disorder: Administrative Trends and
Implications for Treatment, Rehabilitation, and Research." American Journal of Public Health,
97(12), 2143-2145
Gates, Robert M. 2014. Duty: Memoirs of a Secretary at War. Random House LLC.
Gros, Daniel F., Matthew Price, Erica K. Yuen, and Ron Aciemo. 2013. "Predictors of Completion of
Exposure Therapy in OEF/OIF Veterans with Posttraumatic Stress Disorder." Depression and
Anxiety, 30(11), 1107-1113.
Gruber, Jon. 2000. "Disability Insurance Benefits and Labor Supply," Journal of Political Economy, 108,
1162-1183.
Hagel, Charles. 2014. "MEMORANDUM: Supplemental Guidance to Military Boards For Correction of
Military/Naval Records Considering Discharge Upgrade Requests by Veterans Claiming Post
Traumatic Stress Disorder."
Hearst, N., T.B. Newman, and S. B. Hulley. 1986. "Delayed Effects of the Military Draft on Mortality: A
Randomized Natural Experiment." The New England Journal of Medicine, 314, 620-624.
Hill, Mark E. and Ira Rosenwaike. 2001. "The Social Security Administration's Death Master File: The
Completeness of Death Reporting at Older Ages." Social Security Bulletin, 64(1), 45-51.
Hoge, Charles W., Carl A. Castro, Stephen C. Messer, Dennis McGurk, Dave I. Cotting, and Robert L.
Koffman. 2004. "Combat Duty in Iraq and Afghanistan, Mental Health Problems, and Barriers to
Care." New England Journal of Medicine, 351(1), 13-22.
Hoge, Charles W., Jennifer L. Auchterlonie, and Charles S. Milliken. 2006. "Mental Health Problems,
Use of Mental Health Services, and Attrition from Military Service After Returning from
Deployment to Iraq or Afghanistan." Journal of the American Medical Association, 295(9), 1023-1032.
Horne, David K. 1987. "The Impact of Soldier Quality on Army Performance." Armed Forces & Society,
13(3), 443-455.
Imbens, Guido and Karthik Kalyanaraman. 2012. "Optimal Bandwidth Choice for the Regression
Discontinuity Estimator." Review of Economic Studies, 79(3), 933-959.
Institute of Medicine. 2000. "Veterans and Agent Orange: Herbicide/Dioxin Exposure and Type 2
Diabetes." Washington, D.C.: The National Academies Press.
Institute of Medicine. 2010. "Returning Home from Iraq and Afghanistan: Preliminary Assessment of
Readjustment Needs of Veterans, Service Members, and Their Families."
Institute of Medicine. 2011. "Long-term Health Consequences of Exposure to Burn Pits in Iraq and
Afghanistan." Washington, D.C.: The National Academies Press.
Jokela, Markus, Marko Elovainio, Archana Singh-Manoux, and Mika Kivimaki. 2009. "IQ,
Socioeconomic Status, and Early Death: The US National Longitudinal Survey of Youth."
Psychosomatic Medicine, 71(4), 322-328.
Kang, H.K. and Bullman, T.A. 2009. "Is There an Epidemic of Suicides Among Current and Former U.S.
Military Personnel?" Annals of Epidemiology, 19(10), 757-760.
Kaplan, M.S., B.H. McFarland, N. Huguet, and M. Valenstein. 2012. "Suicide Risk and Precipitating
Circumstances Among Young, Middle-aged, and Older Male Veterans." American Journal of
Public Health, 102(Supplement 1), S131-S137.
Kolkow, Tonya T., James L. Spira, Jennifer S. Morse, and Thomas A. Grieger. 2007. "Post-Traumatic
Stress Disorder and Depression in Health Care Providers Returning from Deployment to Iraq and
Afghanistan." Military Medicine, 172(5), 451-455.
Kostol, Andreas and Magne Mogstad. 2014. "How Financial Incentives Induce Disability Insurance
Recipients to Return to Work." American Economic Review 104(2): 624-55.
Kroft, Kory, Fabian Lange, and Matthew J. Notowidigdo. 2013. "Duration Dependence and Labor Market
Conditions: Evidence from a Field Experiment." Quarterly Journal of Economics, 128(3), 1123-1167.
Macklin, Michael L., Linda J. Metzger, Brett T. Litz, Richard J. McNally, Natasha B. Lasko, Scott P. Orr,
and Roger K. Pitman. 1998. "Lower Precombat Intelligence Is a Risk Factor for Posttraumatic
Stress Disorder." Journal of Consulting and Clinical Psychology, 66(2), 323.
Maercker, Andreas, Chris R. Brewin, Richard A. Bryant, Marylene Cloitre, Geoffrey M. Reed, Mark van
Ommeren, Asma Humayun, Lynne M. Jones, Ashraf Kagee, Augusto E. Llosa et al. 2013.
"Proposals for Mental Disorders Specifically Associated with Stress in the International
Classification of Diseases-11." The Lancet, 381(9878), 1683-1685.
Maestas, Nicole, Kathleen Mullen, and Alexander Strand. 2013. "Does Disability Insurance Receipt
Discourage Work? Using Examiner Assignment to Estimate Causal Effects of SSDI Receipt."
American Economic Review, 103(5), 1797-1829.
Mayberry, Paul W. and Catherine M. Hiatt. 1992. "Computing AFQT Scores from Historical Data."
Center for Naval Analyses.
McClelland, Robert and Shannon Mok. 2012. "A Review of Recent Research on Labor Supply
Elasticities." Congressional Budget Office Working Paper #2012-12, October.
McCrary, Justin. 2008. "Manipulation of the Running Variable in the Regression Discontinuity Design:
A Density Test." Journal of Econometrics, 142(2), 698-714.
McNally, Richard J. and B. Christopher Frueh. 2013. "Why are Iraq and Afghanistan War Veterans
Seeking PTSD Disability Compensation at Unprecedented Rates?" Journal of Anxiety Disorders,
27(5), 520-526.
McNally, Richard J. and Lisa M. Shin. 1995. "Association of Intelligence with Severity of Posttraumatic
Stress Disorder Symptoms in Vietnam Combat Veterans." The American Journal of Psychiatry,
152(6), 936.
National Technical Information Service. 2011. "Important Notice: Change in Public Death Master File
Records." U.S. Department of Commerce.
Oberholzer-Gee, Felix. 2008. "Nonemployment Stigma as Rational Herding: A Field Experiment."
Journal of Economic Behavior & Organization, 65(1), 30-40.
Parsons, Donald. 1980. "The Decline in Male Labor Force Participation." The Journal of Political
Economy, 88(1), 117-134.
Pearce, M.S., I.J. Deary, A.H. Young, and L. Parker. 2006. "Childhood IQ and Deaths Up to Middle Age:
The Newcastle Thousand Families Study." Public Health, 120(11), 1020-1026.
Ruggles, Steven, J. Trent Alexander, Katie Genadek, Ronald Goeken, Matthew B. Schroeder, and
Matthew Sobek. 2010. Integrated Public Use Microdata Series: Version 5.0. Minneapolis:
University of Minnesota.
Schiller, Jeannine, Jacqueline Lucas, Brian Ward, and Jennifer Peregoy. 2010. "Summary Health Statistics
for U.S. Adults: National Health Interview Survey, 2010." Vital and Health Statistics, 10(252).
Schnurr, Paula P., Carole A. Lunney, Anjana Sengupta, and Avron Spiro III. 2005. "A Longitudinal
Study of Retirement in Older Male Veterans." Journal of Consulting and Clinical Psychology,
73(3), 561.
Scribner, Barry L., D. Alton Smith, Robert H. Baldwin, and Robert L. Phillips. 1986. "Are Smart Tankers
Better? AFQT and Military Productivity." Armed Forces & Society, 12(2), 193-206.
Segall, Donald O. 2004. "Development and Evaluation of the 1997 ASVAB Scale Score." Defense
Manpower Data Center.
Seltzer, Carl C., and Seymour Jablon. 1974. "Effects of Selection on Mortality." American Journal of
Epidemiology, 100(5), 367-372.
Siminski, Peter, and Simon Ville. 2011. "Long-Run Mortality Effects of Vietnam-Era Army Service:
Evidence from Australia's Conscription Lotteries." American Economic Review Papers and
Proceedings, 101, 345-349.
Siminski, Peter. 2013. "Employment Effects of Army Service and Veterans' Compensation: Evidence
from the Australian Vietnam-Era Conscription Lotteries." Review of Economics and Statistics,
95(1), 87-97.
Tanielian, Terri L. and Lisa Jaycox. 2008. Invisible Wounds of War: Psychological and Cognitive
Injuries, Their Consequences, and Services to Assist Recovery. Santa Monica, CA: RAND
Corporation.
U.S. Army. 2010. "Army Health Promotion, Risk Reduction, and Suicide Prevention Report."
Washington, D.C.: United States Army.
U.S. Army Recruiting Command. 2012. "USAREC Regulation 601-96: Enlistment, Accessions, and
Processing Procedures." Fort Knox, KY.
U.S. Department of Defense. 2004. Population Representation in the Military Services: Fiscal Year 2002.
Washington, D.C.: Office of the Under Secretary of Defense (Personnel and Readiness).
U.S. Department of Defense. 2015. U.S. Military Casualties - Operation Iraqi Freedom (OIF): Casualty
Summary by Month and Service. Defense Manpower Data Center (Defense Casualty Analysis
System).
U.S. Department of Veterans Affairs. 2003. "Agent Orange Brief." Environmental Agents Service, (131)
A1.
U.S. Department of Veterans Affairs: Office of the Inspector General. 2005. "Review of State Variances
in VA Disability Compensation Payments."
U.S. General Accounting Office. 2006. "VA Should Improve Its Management of Individual
Unemployability Benefits by Strengthening Criteria, Guidance, and Procedures." Report
GAO-06-309.
U.S. General Accounting Office. 2009. "Social Security Disability: Additional Outreach and
Collaboration on Sharing Medical Records Would Improve Wounded Warriors' Access to
Benefits." Report GAO-09-762.
U.S. Veterans Benefits Administration. Selected Years. "Veterans Benefits Administration Annual
Benefits Report." Available online at http://www.vba.va.gov/reports.htm.
U.S. Veterans Benefits Administration. 2013. "Trends in the Utilization of VA Programs and Services."
Von Wachter, Till, Joyce Manchester, and Jae Song. 2011. "Trends in Employment and Earnings of
Allowed and Rejected Applicants to the SSDI Program." American Economic Review, 101(7),
3308-3329.
White House. 2010. "Weekly Address: President Obama Announces Changes to Help Veterans with
PTSD Receive the Benefits They Need." 10 July 2010.
Winkler, John D. 1999. "Are Smart Communicators Better? Soldier Aptitude and Team Performance."
Military Psychology, 11(4), 405-422.
Worthen, Mark D. and Robert G. Moering. 2011. "A Practical Guide to Conducting VA Compensation
and Pension Exams for PTSD and Other Mental Disorders." Psychological Injury and Law, 4(3-4),
187-216.
Yankovich, Michael. 2011. "The Effects of Combat Exposure on Military Experience." Working Paper.