Macalester Journal of
Economics
Volume 25
Spring 2015
Table of Contents
Foreword
…………………………………………. Professor J. Peter Ferderer 2
A Tale of Two Climate Scenarios: The Nordhaus and Stern Models
……………………………………. Rowena Foo, Kevin Fortune, and Jessica Timerman 4
Do Expected Marginal Revenue Products for National Hockey League Players Equal
Their Price in Daily Fantasy Games?
…………………………….……………………………………………… Benny Goldman 21
Does Objectification Affect Women’s Willingness to Compete
…………………………………………………...…… Disa Hynsjö and Vincent Siegerink 50
Airline Performance: Taking Off After 30 Years On The Tarmac
……………………………………………………………………..………..Kaspar Mueller 75
Back to School: Drivers of Educational Attainment Across States
(1990-2010)
……………………………………………………………………...……Tyler J. Skluzacek 98
Published annually by the Macalester College Department of Economics and
Omicron Delta Epsilon
Omicron Delta Epsilon
Kap Mueller ’15, President
Jose Caballero Ciciolli ’15
Pukitta Chunsuttiwat ’15
Morgan Widuch ’15
Editors
Tyler Krentz ’15
Siyabonga Ndwandwe ’15
Anandi Somasundaram ’15
Economics Faculty and Staff
Paul Aslanian, Emeritus Professor
Jeffery Evans, Adjunct Professor
Jane Kollasch, Department Coordinator
Raymond Robertson, Professor
Amy Damon, Associate Professor
Peter Ferderer, Edward J. Noble Professor
Joyce Minor, Karl Egge Professor
Mario Solis-Garcia, Assistant Professor
Liang Ding, Associate Professor
Gary Krueger, Cargill Professor
Karine Moe, F.R. Bigelow Professor, Department Chair
Vasant Sukhatme, Edward J. Noble Emeritus Professor
Karl Egge, F.R. Bigelow Emeritus Professor
Sarah West, Professor
A Note from the Editors
When appointed at the beginning of the 2014-2015 academic year, none of us truly knew what the
position of Editor would entail. However, after months of reading, deliberating, and editing, it is safe
to say that we are astounded by the quality of work Macalester undergraduate students consistently
produce. We congratulate the authors of the five papers in this journal.
We would like to thank all of the faculty who recommended papers for this journal. We would also
like to thank Jane Kollasch for her assistance and guidance throughout this process.
Thank you, Macalester Economics, for making these past four years so memorable!
Tyler Krentz ’15
Siyabonga Ndwandwe ’15
Anandi Somasundaram ’15
Foreword
The Macalester Journal of Economics is produced by the Macalester College chapter of Omicron
Delta Epsilon, the international honors society in economics. The editors – Tyler Krentz ’15
(Shakopee, Minnesota), Siyabonga Ndwandwe ’15 (Hluti, Swaziland) and Anandi Somasundaram ’15
(Cupertino, California) – have done an outstanding job selecting five papers on a variety of
important topics and molding them into journal articles. These articles represent the best
scholarship to emerge from our courses in 2014.
Focusing on extreme market failures, Rowena Foo ’16 (Kuala Lumpur, Malaysia), Kevin
Fortune ’17 (Milwaukee, Wisconsin) and Jessica Timerman ‘17 (Stevens Point, Wisconsin) explore
the setting of optimal carbon taxes to slow climate change in this first paper. They show that
relatively small changes in the rate at which contemporary society discounts the future costs of
climate change have dramatic implications for the degree to which fossil fuel consumption should be
discouraged through taxation. It is difficult to overstate the importance of this issue, and it is
gratifying to see Macalester students taking it on.
One of the most important developments in professional sports over the past few decades has
been the Moneyball revolution where teams, starting with the Oakland Athletics of Major League
Baseball, have used new metrics to identify players who are “mispriced” in the market for their
services. Benny Goldman ’16 (Goldens Bridge, New York) explores whether mispricing occurs in
the “daily game” of fantasy hockey leagues. Using panel data techniques, he finds that the market
undervalues the impact of home ice and the relative strength of a player’s team, while it overvalues
the recent performance of players. This novel study raises interesting questions about the efficiency
of these markets and the behavioral biases of sports fans who participate in them.
It is well known that men occupy more positions of power in society than women. One
explanation for this is that men are more willing to engage in competition than women and a lively
debate has emerged in the literature about the roles that nature and nurture play in this
phenomenon. Disa Hynsjö ’14 (Lerum, Sweden) and Vincent Siegerink ’14 (Utrecht, Netherlands)
attempt to measure the impact of sexism on the competition gender gap by conducting an
experiment where subjects, prior to playing a ladder toss game, choose between a piece-rate (non-competitive) payment scheme and a tournament (competitive) payment scheme. Subjects in one
treatment were primed for sexism by watching a short clip from the movie Indecent Proposal, while
those in the base case watched a commercial for a safari lodge. Disa and Vincent find that men
were much more willing to compete than women, but their small sample size prevents them from
uncovering a significant impact of sexism. Nevertheless, this is an ambitious project and the
students have inspired two of their professors to explore it further.
Continuing on the theme of competition, Kaspar Mueller ’15 (Iowa City, Iowa) investigates why
the profitability of U.S. airlines has increased so dramatically in recent years after decades of low
profits following deregulation in 1978. Based on panel data analysis of 15 different carriers, he finds
that higher demand, increased industry concentration and lower fuel prices are the primary drivers
of increased profitability. Kap’s results suggest that it might not be appropriate to view the airline
industry as a contestable market, where a small number of firms produce sufficient competition, and
that mergers between large carriers such as Northwest and Delta have had important consequences
for consumers.
Tyler Skluzacek ’16 (New Prague, Minnesota) begins his article by posing an intriguing question:
Why do nearly half the residents of some states like Massachusetts obtain a college degree, while
other states have much lower rates of college degree attainment? Tyler uses panel data analysis to
show that market forces are clearly at work, with degree attainment rates correlated with the college
wage premium and regional dummy variables that reflect underlying economic structures. However,
he also suggests that government intervention plays an important role as states with higher
government spending on education also have higher rates of college degree attainment. Tyler’s work
has the potential to inform ongoing debates about income inequality in the U.S.
On behalf of my colleagues in the Economics Department, I am delighted to present the
research of these talented students. I am confident that you will find it enlightening and be
impressed by the value of a liberal arts education.
J. Peter Ferderer
Edward J. Noble Professor of Economics
A Tale of Two Climate Scenarios: The Nordhaus and Stern Models
Rowena Foo ‘16, Kevin Fortune ‘17, and Jessica Timerman ‘17
Climate Change: Science, Economics, and Policy
This paper compares the viewpoints of two prominent climate economists, William Nordhaus and
Nicholas Stern. We seek to explain why they recommend different policy solutions to climate
change. The paper uses a new descriptive quantity, the Sensitivity Ratio, to compare the economists.
The Sensitivity Ratio describes the effect of the discount rate used by each economist on the
sensitivity of the optimal carbon price to changes in predicted climate damages. Data come from
projections of the DICE 2013 Integrated Assessment Model developed by William Nordhaus. We
find that the discount rates used by Nordhaus and Stern significantly explain their differences in
policy recommendations and propose further research into the selection of appropriate discount
rates for climate economics.
I. Introduction
Global climate change represents an extreme market failure that has prompted many economists to
analyze the costs, benefits, and risks associated with climate change. Climate change economists and
scientists face the challenge of determining how much carbon emissions should be reduced and
within which time frame. Scientists and economists use integrated assessment models (IAMs) to
help answer these questions, but due to the complexity of climate change and the global economy,
many uncertainties remain about the assumptions underlying IAMs.
William Nordhaus and Nicholas Stern propose different policies to reduce carbon emissions
based on different assumptions about climate damages and the discount rate. Climate damages refer
to estimates of the economic damages that result from climate change, specifically reductions in
Gross Domestic Product (GDP). The discount rate, or discounting, affects how much the value of
an impact decreases as the impact occurs farther into the future. No consensus exists about the
exact values of the discount rate or climate damages. As a result, Nordhaus and Stern each generate
their own estimates to use in climate models. In general, Stern evaluates greater expected climate
damage with a lower discount rate, while Nordhaus considers lesser climate damage with a higher
discount rate. The present experiment aims to examine how differences in assumptions of climate
damages and the discount rate lead to different emission policies, specifically in regards to the
optimal carbon price.
In this experiment, we will analyze the slope of carbon price with respect to climate damages
in terms of Nordhaus’ discount rate and Stern’s discount rate. The slope refers to the sensitivity of
the efficient carbon price to changes in climate damages, or, in other words, how much each author
increases their efficient carbon price for an increase in estimated climate damages. After stating our
hypothesis for the experiment, we will review the literature on the debated estimates of the discount
rate and climate damages, lay out the methods of our experiment, and conclude with a discussion of
our results and the implications of our results on climate change policy.
Hypothesis
We hypothesize that the main differences between Nordhaus' and Stern's emission
policies are driven by their different assumptions of climate damages and the discount rate. Due to
the differences in discount rates, we predict that the sensitivities of the optimal carbon price to
changes in damages will be different. More specifically, we predict the efficient carbon price with
Nordhaus’ discount rate will be less sensitive to changes in climate damages than with Stern’s
discount rate because Nordhaus’ higher discount rate places less value on climate damages that
occur in the future.
II. Literature Review
In order to understand the different viewpoints on the assumptions behind Nordhaus’ and Stern’s
climate scenarios, we examined the literature on the discount rate and climate damages. The
discount rate is the independent variable changed in our experiment, while graphing the carbon
price against incrementally increasing damages yields the results by which we compare the
different discount rates. The following discussion provides important background information for
understanding the motivations behind our experiment and contextualizing the results and policy
implications of our experiment within the current debate.
The Discount Rate
According to Weitzman, “It is not an exaggeration to say that the biggest uncertainty of all in
the economics of climate change is the uncertainty about which interest rate to use for discounting”
(Weitzman, 2007; p. 3). As Weitzman highlighted, much debate exists on the opportunity cost of
capital, or the real interest rate, to be used for discounting. According to the Ramsey equation for
the real interest rate (r*), when economic growth (g*) is constant and social welfare is optimized,
the real interest rate depends on two factors: the rate of social time preference (rho, ρ) and the
elasticity of the marginal utility of consumption (alpha, α).

r* = ρ + αg*
The rate of social time preference represents the importance of welfare of future generations
compared to today, and the elasticity of the marginal utility of consumption represents the rate at
which marginal utility of consumption changes over time. If the marginal utility falls rapidly over
time, that means society is more risk averse and prefers the certainty of consumption today to
uncertain consumption in the future. Nordhaus and Stern use different values for rho (.015 and .001,
respectively) and alpha (1.01 and 1.45, respectively) due to their different assumptions about
discounting and the real interest rate. Consequently, Nordhaus' values call for a less stringent policy,
while Stern’s values call for a more aggressive carbon reduction strategy because the welfare of
future generations weighs more equally and the marginal utility of consumption falls less quickly
over time.
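The Ramsey rule can be evaluated directly with the rho and alpha values reported above. The 2% growth rate in the sketch below is our own illustrative assumption, not a figure from either model, and the resulting rates depend heavily on that assumption:

```python
# Ramsey equation: r* = rho + alpha * g*
# The rho/alpha pairs are the values reported in the text; the growth
# rate g* is an illustrative assumption (2% per year), not a value
# taken from Nordhaus or Stern.

def real_interest_rate(rho: float, alpha: float, g: float) -> float:
    """Real interest rate under the Ramsey rule with constant growth."""
    return rho + alpha * g

g_star = 0.02  # assumed annual consumption growth

r_nordhaus = real_interest_rate(rho=0.015, alpha=1.01, g=g_star)
r_stern = real_interest_rate(rho=0.001, alpha=1.45, g=g_star)

print(f"Nordhaus: r* = {r_nordhaus:.4f}")  # 0.015 + 1.01 * 0.02 = 0.0352
print(f"Stern:    r* = {r_stern:.4f}")     # 0.001 + 1.45 * 0.02 = 0.0300
```

Even with identical growth assumptions, the lower rate of social time preference pulls Stern's real interest rate below Nordhaus'.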
Nordhaus uses rho and alpha values that reflect the real interest rate observed in the market.
For example, according to Nordhaus, over the last four decades the pre-tax return on US corporate
capital has averaged 6.6% per year, and historical estimates of returns on human capital range
between 6 and 20% per year (Weitzman, 2007; p. 689). Thus, Nordhaus uses values of rho and alpha that
reflect a real interest rate of about 6% when discounting benefits accrued in the future. Nordhaus
takes a descriptive approach and uses a rho and alpha value derived from historical observations and
current economic realities.
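The practical weight of this choice is easiest to see in present-value terms. The sketch below compares a future damage discounted at the ~6% descriptive rate quoted above and at a low 1.4% rate; the 1.4% figure is our illustrative stand-in for a Stern-style rate, not a number from this paper:

```python
# Present value of $100 of climate damages occurring 100 years from now.
# The 6% rate follows Nordhaus' descriptive figure quoted in the text;
# the 1.4% rate is an illustrative low, Stern-style rate (our assumption).

def present_value(amount: float, rate: float, years: int) -> float:
    """Discount a future amount back to the present at a constant rate."""
    return amount / (1 + rate) ** years

for rate in (0.06, 0.014):
    pv = present_value(100.0, rate, years=100)
    print(f"rate {rate:.1%}: present value of $100 in 100 years = ${pv:.2f}")
```

At 6% the future damage is worth well under a dollar today, while at 1.4% it is worth roughly twenty-five dollars, which is why a lower discount rate supports much more aggressive abatement today.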
Stern, on the other hand, adopts a prescriptive view. He believes discounting the welfare of
future generations is unethical, writing “We take a simple approach in this Review: if a future
generation will be present, we suppose that it has the same claim on our ethical attention as the
current one” (Weitzman, 2007; p. 31). As a result, his rho and alpha values do not reflect real
interest rates seen in the economy. Many economists, including Mendelsohn, Nordhaus, and
Weitzman, criticize Stern's prescriptive approach. For example, Mendelsohn argues that giving future
generations' welfare more equal weight harms current generations, because trends of economic
growth show that future generations will have more wealth (Mendelsohn, 2008; p. 52).
Furthermore, Nordhaus specifies the opportunity costs of several other worthy investments, such as
education and health care, which Stern fails to take into consideration with a smaller discount rate
(Nordhaus, 2013; p. 192). Weitzman disapproves of the Stern Review “for giving readers an
authoritative-looking impression that seemingly-objective best-available-practise professional
economic analysis robustly supports its conclusions”, as most of the conclusions are based on
Stern’s moral judgments (Weitzman, 2007; p. 28).
Of the literature reviewed, only Paul Krugman leans toward the side of Stern. While
Krugman sees zero discounting of future generations as extreme, he feels government should take a
longer view than reflected in the 6% interest rate of Nordhaus and questions the riskiness of carbon
price ramp-up strategies advocated by Nordhaus due to the unknown effects of large temperature
increases (Krugman, in press). Weitzman also sees benefit in the Stern Review, but not in terms of
its discount rate. Specifically, Weitzman values Stern’s contributions in acknowledging the potential
gravity and uncertainty of climate damages. However, Weitzman argues increasing climate damages
accomplishes a similar outcome to Stern’s proposed reductions without the “back door” method of
decreasing rho and alpha to never before seen values in economic reality (Weitzman, 2007; p. 23).
Climate Damages
Although Weitzman supposes discounting to be the most uncertain aspect of climate change
economics, Nordhaus declares that climate damages are “the thorniest issue in climate-change
economics” (Nordhaus, 2013; p. 10). Climate damages are highly uncertain and subject to forecasts
of complicated global climate systems. More importantly, climate damages are difficult to quantify,
especially the non-market impacts, such as sea-level rise, ocean acidification, and hurricane
intensification. As our knowledge of climate damages is ever changing with the advancement of
scientific understanding, learning the carbon price's reaction to changing damages in relation to the
discount rate proves to be important.
Stern and Nordhaus have large variations in their estimation of climate damages. Stern’s
estimation of economic damages, which is significantly greater than that of Nordhaus, has received a
wide array of criticisms. The Stern Review claims that market damages are estimated to be 5% of GDP
while non-market damages are an additional 5% of GDP. The Review further adds that other factors
such as catastrophic events can drive economic damages as high as 20%-35% of GDP after 2105.
Critics argue that Stern focuses too much on pessimistic outcomes (Weitzman, 2007) and that his
numbers have far exceeded anything presented in literature (Mendelsohn, 2008). Other studies have
suggested much smaller market damages, ranging between 0.1% and 0.5% of GDP in 2100. Meanwhile,
non-market damages are estimated at most 1% of GDP (Mendelsohn, 2008). The Review also
predicted additional damages of 5% of GDP by 2200 due to extreme storms, plus additional
“knock on damages” (Mendelsohn, 2008; p. 57) as people reduce investments in a gloomy
future. Critics claim that extreme storms capable of decreasing GDP by 5% have yet to be
confirmed by scientific evidence and that people will not reduce investments, as climate damages
are only estimated at 1% of GDP (Weitzman, 2007).
Nordhaus, on the other hand, assumes damages are a quadratic function of temperature
change. This function is largely based on expert surveys, and the function estimates monetized
damages. Specifically, in the DICE model, damages are estimated to be as high as 2% of GDP when the
global mean temperature increases to 3°C (Nordhaus and Sztorc, 2013; p. 12). While The Review
states that economic damages of catastrophic events are as high as 20-30% of GDP, the DICE
model accounted for these non-monetized impacts with a multiplier of 1.25 onto the monetized
impacts. Overall, higher damages result in a more stringent emission policy and call for immediate
and highly coordinated global action.
The above discussion on discounting and climate damages highlights the importance of our
experiment in understanding the effects of the discount rate on the carbon price sensitivity to
climate damages. Policymakers need to understand the implications of their chosen discount rate,
especially in a world where climate damages remain uncertain and new scientific research
continuously modifies our ability to predict damages into the future. Should policymakers follow
Weitzman’s advice to simply increase expected damages, keeping Nordhaus’ descriptive discount
rate, or follow Krugman’s call for a discount rate with a longer view in-between that of Nordhaus
and Stern? Furthermore, if damages do increase significantly as described by Stern, do policymakers
want a discount rate that creates a more sensitive carbon price to such increases and if so, how
sensitive? Our experiment hopes to shed light on these important questions raised from the
literature.
III. Methods
In order to compare Nordhaus’ and Stern’s carbon price sensitivity to changes in damages, we
incrementally increased the damage coefficient on temperature squared in two different scenarios:
one using Nordhaus’ discount rate and one using Stern’s discount rate. Nordhaus derives his
discount rate from the values of .015 for the rate of social time preference (rho) and 1.01 for the
elasticity of the marginal utility of consumption (alpha). For Stern’s lower discount rate, we used the
values .001 for rho and 1.45 for alpha, which are the values Nordhaus uses in the Stern Tab of the
DICE 2013R model, which represents Nordhaus’ low discounting scenario according to the Stern
Review (Nordhaus and Sztorc, 2013; p. 25).
We ran our experiments in a single tab of the DICE model, the Optimal Scenario (“Opttax”) tab,
in order to have a controlled experiment with all other variables equal. The Optimal Scenario
in the DICE model assumes full participation and efficient abatement policies by 2015 (Nordhaus,
2013; p. 24). Additionally, because examining the effects of bringing Nordhaus’ model closer to
Stern’s assumptions represents a key motivation for this experiment, it makes sense to run our
experiments in Nordhaus’ Optimal Scenario tab of the DICE model.
In order to run our trials over a reasonable range of damage coefficients on temperature
squared, we examined the existing literature, specifically the Tol (2009) survey of climate damage
estimates. We used the global “Warming” and “Impact” variables from the survey as shown in
Table 1. In the Tol survey, “Warming” is represented as temperature change in degrees Celsius and
“Impact” is represented as percent loss or gain of global GDP. We dropped the 1°C cases of
“Warming” because they have extreme variation in their damage coefficients and only 2 cases exist
at that level of temperature change. We also eliminated redundancy between authors, choosing only
their most recent study, so as not to over-represent any author in our uncertainty range and to base
our range on the most up-to-date information. The studies used to determine the reasonable range
are bolded in Table 1.
To calculate the damage coefficients of each study used from the Tol (2009) survey, we
assumed the linear damage coefficient (Ψ₁) to be zero. This assumption reflects the value of the
coefficient in the DICE-2013R model. We rearranged Nordhaus' quadratic climate damage formula
such that:

Ψ₂ = Ω(t) / T(t)²

In this formula, Ω(t) equals climate damages and T(t)² equals temperature squared. We used each
temperature value and its associated impact value from Table 1 to provide a damage coefficient for
each study. Then, we averaged the damage coefficient set and found the point estimate to be -0.13
% GDP per °C squared. We took the standard deviation of the damage coefficient set to find the
half-range of our damage coefficient uncertainty range and then found the total uncertainty range to be
0.06 to -0.32 % GDP per °C squared. This reasonable range in the DICE model is represented by
the damage coefficient on temperature squared ranging from -.0006 to .0032. The positive and
negative signs of our range are opposite compared to the Tol study because the DICE model
expresses the damages as a positive number and subtracts damages, whereas the Tol study expresses
damages as a negative quantity and adds damages. In our trials, we incrementally increased the
damage coefficient by .0001, producing 39 trials with Nordhaus’ discount rate and 39 trials with
Stern’s discount rate.
To compare carbon prices respective to changes in climate damages, we had to determine
which year or years of carbon price to examine in order to provide an accurate comparison and
accurate results. First, we decided to compare our experiments at the year of greatest difference of
carbon price between Nordhaus’ and Stern’s discount rate in the Opttax tab with Nordhaus’
Optimal Scenario damage coefficient of .0027. We hypothesized that looking at the year of greatest
difference would give us the greatest contrast between the models. However, this hypothesis was
incorrect. Figure 1 shows the year of greatest difference in carbon price between the two discount
rates to be 2050.
Furthermore, we chose to examine our data in the years 2075 and 2100, in addition to the
year 2050, in order to assess the effect the given year had on our results. We chose not to examine
years after 2100 because the DICE model gets significantly less accurate after year 2100 (Nordhaus
and Sztorc, 2013). After running our trials, we graphed the carbon price according to Nordhaus’ and
Stern’s discount rate in years 2050, 2075, and 2100 compared to the damage coefficient, as seen in
Figure 2 with Nordhaus’ discount rate and Figure 3 with Stern’s discount rate. In our graphical
analysis, we excluded the data for negative damage coefficients in our range because those results
skewed our data and are not relevant.
As seen in Figure 3, the carbon price with respect to the damage coefficient in the Stern
discount scenario plateaus at a certain damage coefficient depending on the chosen year. In order to
analyze our data and apply a linear fit, we excluded the Stern data points in the plateau range, as seen
in Figure 5. We chose a linear fit for both the Nordhaus and Stern data because it best models our
data and produces an equation with one coefficient associated with the dependent variable.
Specifically, the linear fit produced a slope by which to compare the effect of Nordhaus' and Stern's
discount rates on the carbon price sensitivity to damages, as shown in the slopes labeled in Figures 4
and 5.
Finally, we divided Nordhaus’ slope by Stern’s slope for a given year to create a descriptive
quantity that we will call the Sensitivity Ratio. This quantity shows how much Nordhaus’ efficient
carbon price reacts to a change in climate damages compared to Stern’s efficient carbon price. Table
2 displays the slopes and ratios of our data for each year. Table 2 also shows the R2 value for each
linear fit. As all R² values are greater than .990, our linear models explain almost all of the variation in
our data.
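The slope-and-ratio step can be sketched with a linear fit. The two price series below are synthetic stand-ins for the DICE output, constructed so that the ratio lands near the paper's value; only the procedure is the point:

```python
# Sketch of the Sensitivity Ratio: fit a line to (damage coefficient,
# carbon price) for each discount-rate scenario and divide the slopes.
# The price series are synthetic placeholders, not DICE model output.
import numpy as np

psi2 = np.arange(0.0001, 0.0040, 0.0001)   # 39 damage-coefficient trials
price_nordhaus = 5.0 + 11000.0 * psi2      # synthetic, roughly linear
price_stern = 8.0 + 50000.0 * psi2         # synthetic, steeper response

slope_n = np.polyfit(psi2, price_nordhaus, 1)[0]
slope_s = np.polyfit(psi2, price_stern, 1)[0]

sensitivity_ratio = slope_n / slope_s
print(f"Sensitivity Ratio = {sensitivity_ratio:.2f}")  # 11000 / 50000 = 0.22
```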
IV. Results
The Sensitivity Ratio of 0.22 supports our hypothesis that Nordhaus’ model reacts less to changes in
climate damages than Stern’s model. In other words, our hypothesis that the efficient carbon price
with Nordhaus’ discount rate would be less sensitive to changes in climate damages than with
Stern’s discount rate was correct. Quantitatively, the 0.22 Sensitivity Ratio means that Nordhaus’
model calls for a change in the carbon price equal to 0.22 times Stern’s change in the carbon
price for an equal change in damages. In other words:

ΔC_N = 0.22 × ΔC_S

where C_N is the optimal carbon price of Nordhaus and C_S is the optimal carbon price of Stern. Our
results show the substantial impact the discount rate has on optimal carbon price with respect to the
damage coefficient. Furthermore, we state conclusively that the data supports our hypothesis
because we controlled for all variables in the experiment, as only the discount rate differed between
the Nordhaus and Stern scenarios.
Although we have established causality between the variables, one question remained about
whether the hypothesis holds true under examination at different years. As seen in Table 2, the
Sensitivity Ratio increases slightly between years because the slopes increase between each 25-year
increase. This difference makes sense theoretically because the climate damages equation has
temperature squared as a variable.
T(t)² × Ψ₂ = Ω(t)

We took the first derivative of climate damages, Ω(t), with respect to the damage coefficient
on temperature squared, Ψ₂, and found the first derivative to be temperature squared, T(t)². In
other words, temperature squared is the slope of the relationship between climate damages and the
damage coefficient. Furthermore, the carbon price is positively related to climate damages, meaning
that the carbon price increases as damages increase. Therefore, as temperature increases over time,
we expect the first derivative – or slope – of carbon price with respect to the damage coefficient to
increase over time as well, as seen in Figures 4 and 5. Our explanation is shown to be correct as
temperatures increase over time from years 2000 to 2100 for both discount rates, as seen in Figures
6 and 7.
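A quick numerical check of this reasoning, using illustrative temperatures rather than DICE projections:

```python
# Finite-difference check that the slope of damages Omega = Psi_2 * T^2
# with respect to Psi_2 equals T^2, so the slope grows as temperature
# rises over time. Temperatures here are illustrative, not DICE output.

def damages(psi2: float, temp: float) -> float:
    """Quadratic damage function with zero linear term."""
    return psi2 * temp ** 2

d_psi = 1e-6
for temp in (1.5, 2.0, 3.0):
    slope = (damages(0.002 + d_psi, temp) - damages(0.002, temp)) / d_psi
    print(f"T = {temp:.1f} degC: slope = {slope:.4f}, T^2 = {temp ** 2:.4f}")
```

As temperature rises from 1.5°C to 3.0°C, the slope quadruples, matching the argument that the carbon price becomes more responsive to the damage coefficient in later, warmer years.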
The trend of continuous temperature increase from 2000 to 2100 for both discount rates
supports our explanation that the slopes increase between each year because temperature also
increases. Overall, the difference between slopes at each year does not overturn our conclusion
because this leads to only a slight increase in the Sensitivity Ratio. The Sensitivity Ratio only
increases by 0.02 over a 50-year period, which amounts to a percent change of 8.7%. The
hypothesis holds true for each year, so these differences do not strongly affect our conclusion.
Additionally, one more phenomenon in our data remains to be explained: the plateaus
observed in Figure 3, the carbon price vs. the damage coefficient on temperature squared with Stern’s
discount rate. Although we did not include the plateau regions in our analysis, because the
non-differentiable change would skew any model fit to the data, we will attempt to explain the trend here.
First, we considered that these plateaus seemed to indicate the existence of an arbitrary constraint in
the model. We found that two constraints limit the value of the carbon price in the DICE model:
a minimum value of 1 and a maximum value of 300. The carbon prices in the plateau regions
were less than 300, and the plateaus remained when we ran the experiment without the
constraints, so these constraints did not cause the plateaus.
Next, in order to explain these data, we examined our results for other variables in the DICE
model that also exhibit this trend and found that carbon emissions plotted against the damage
coefficient follow the same pattern, as seen in Figure 8. We found that by damage coefficients of
.0028 in 2050, .0016 in 2075, and .0009 in 2100, carbon emissions with respect to the damage
coefficient plateau, which mirrors the start of the plateau region for carbon price versus the
damage coefficient in those years, as seen in Figure 3. The
correlation between carbon emissions and the carbon price suggests that, with Stern’s lower
discount rate and more stringent emission reduction policies, the carbon price plateaus because
carbon emissions fall to levels so low that there is no more reason for the carbon price to change in
reaction to increases in the damage coefficient. In other words, the marginal cost of carbon
emissions will be the same regardless of increases in the damage coefficient because emissions have
already fallen so low. Under this explanation, we do not find that the existence of the plateau regions
affects our conclusion. However, as carbon price is affected by many variables, if other variables
exhibit this trend, they could be contributing to the plateaus in our data as well and more research is
needed to fully understand the cause of the plateaus.
V. Conclusion
In terms of climate change policy, our findings highlight the importance of the discount rate in
determining the optimal carbon price for changes in damages. Precisely, Nordhaus’ discount rate
calls for a change in the carbon price equal to 0.22 times Stern’s change for any increase in damages in the
year 2050. As climate change damages are uncertain and subject to change, understanding the large
effect the discount rate has on optimal carbon price sensitivity to changes in climate damages proves
to be powerful. As demonstrated by our data, the carbon price with a lower discount rate, as prescribed
by Stern, reacts much more to a change in damages than with Nordhaus’ higher discount rate. If
policymakers want a carbon price that reacts strongly to changes in climate damages, either from
increased occurrence of damages or updated scientific research that predicts greater damages, they
need to apply a lower discount rate, as advocated by Stern. Furthermore, if policymakers choose a
higher discount rate determined by observed historical data, as advocated by Nordhaus,
they need to be aware that the carbon price will react less to increased damages.
In terms of improving and extending our research, further work on discount rates between
those of Nordhaus and Stern, as suggested by Krugman, would greatly help policy-makers
understand the effect of their chosen discount rate on the carbon price in relation to changing
damages. Additionally, because climate change affects regions of the world differently and regions
have different economic systems, running our experiment for a specific region of the world,
such as with Nordhaus' RICE model, instead of in the globally aggregated DICE model, would
provide more relevant, specific information for policy-makers. Moreover, expanding our research to
a greater continuum of discount rates in specific world regions would enhance our findings and
understanding of the topic. Finally, our experiment is constrained by the accuracy with
which IAMs are capable of modeling global climate and economic systems. As scientists and
economists continue to improve IAMs, the accuracy of our experiment would improve as well.
Notwithstanding these limitations, the robustness of our experiment stems from examining changes
in two of the greatest uncertainties in IAMs: the discount rate and climate damages.
Overall, understanding the behavior of the optimal carbon price under different discounting and
damage assumptions proves essential, as the optimal carbon price directly determines the amount of
carbon abated and the total consequences society experiences due to climate change. As a result, our
experiment has shed light on the important question of the discount rate's effect on the optimal
carbon price with regard to changes in damages. While Nordhaus and Stern represent two ends of
the climate policy spectrum, exploring the Tale of Two Climate Scenarios provides a meaningful
starting point for understanding the discount rate's effect on emissions reduction policy and the
future of a warming world.
References

Krugman, P. (2010, April 7). Building a green economy. The New York Times. Retrieved from
http://www.nytimes.com/2010/04/11/magazine/11Economy-t.html?pagewanted=all&_r=0

Mendelsohn, R. (2008). Is the Stern Review an economic analysis? Review of Environmental Economics
and Policy, 2(1), 45-60.

Nordhaus, W., & Sztorc, P. (2013). DICE 2013R: Introduction and user's manual.

Nordhaus, W. D. (2013). The climate casino: Risk, uncertainty, and economics for a warming world. Yale
University Press.

Nordhaus, W. D. (2007). A review of the 'Stern Review on the Economics of Climate Change' (critical
essay). Journal of Economic Literature, 45(3).

Stern, N. H., Britain, G., & Treasury, H. (2006). Stern Review: The economics of climate change. HM
Treasury, London.

Tol, R. S. (2009). The economic effects of climate change. The Journal of Economic Perspectives, 23(2),
29-51.

Weitzman, M. L. (2007). A review of the Stern Review on the Economics of Climate Change. Journal of
Economic Literature, 45(3), 703-724.
Appendix
Table 1: Estimates of the Welfare Impact of Climate Change

Study                                                  Warming (°C)   Impact (percent GDP)
Nordhaus (1994a)                                       3.0            -1.3
Nordhaus (1994b)                                       3.0            -4.8 (-30.0 to 0.0)
Frankhauser (1995)                                     2.5            -1.4
Tol (1995)                                             2.5            -1.9
Nordhaus and Yang (1996) [a]                           2.5            -1.7
Plambeck and Hope (1996) [a]                           2.5            -2.5 (-0.5 to -11.4)
Mendelsohn, Schlesinger, and Williams (2000) [a,b,c]   2.5            0.0; 0.1
Nordhaus and Boyer (2000)                              2.5            -1.5
Tol (2002)                                             1.0            2.3 (1.0)
Maddison (2005) [a,d]                                  2.5            -.01
Rehdanz and Maddison (2005) [a,d]                      1.0            -.04
Hope (2006) [a,d]                                      2.5            .09 (-0.2 to 2.7)
Nordhaus (2006) [a,e]                                  2.5            -.09 (0.1)

Note: Where available, estimates of the uncertainty are given in parentheses, either as standard
deviations or as 95 percent confidence intervals. Source: Tol (2009).
[a] The global results were aggregated by the current author.
[b] The top estimate is for the "experimental" model, the bottom estimate for the "cross-section"
model.
[c] Mendelsohn et al. only include market impacts.
[d] Maddison only considers market impacts on households.
[e] The numbers used by Hope (2006) are averages of previous estimates by Frankhauser and Tol;
Stern et al. (2006) adopt the work of Hope (2006).
Figure 1: The greatest difference between Nordhaus' Opttax and Stern tabs of the DICE 2013R
model is in the year 2050. [Chart: Difference in Carbon Price (t CO2), 0 to 250, plotted against
Year, 1950 to 2350.]
Figure 2: Nordhaus's Discount Rate - Carbon Price vs Damage Coefficient on Temperature
Squared. [Chart: Carbon Price in Dollars, 0 to 250, against the Damage Coefficient on
Temperature Squared, 0 to 0.0035, with series for 2050, 2075, and 2100.]
Figure 3: Stern's Discount Rate - Carbon Price vs Damage Coefficient on Temperature
Squared. [Chart: Carbon Price in Dollars, 0 to 300, against the Damage Coefficient on
Temperature Squared, 0 to 0.0035, with series for 2050, 2075, and 2100.]
Figure 4: Nordhaus' Discount Rate - Carbon Price vs Damage Coefficient on Temperature
Squared with Linear Fit. [Chart: Carbon Price in Dollars against the Damage Coefficient on
Temperature Squared, 0 to 0.004, with linear fits y = 21320x + 0.5752 (2050),
y = 38371x + 0.8552 (2075), and y = 61342x + 1.9993 (2100).]
Figure 5: Stern's Discount Rate - Carbon Price vs Damage Coefficient on Temperature
Squared with Linear Fit. [Chart: Carbon Price in Dollars against the Damage Coefficient on
Temperature Squared, 0 to 0.004, with linear fits y = 95402x + 19.025 (2050),
y = 165687x + 11.618 (2075), and y = 250906x + 6.9298 (2100).]
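The "Linear Fit" slopes reported in Figures 4 and 5 are ordinary least-squares fits of carbon price on the damage coefficient. A minimal sketch of how such a slope is estimated, using hypothetical points generated from the 2050 fit in Figure 4 rather than the paper's actual DICE output:

```python
import numpy as np

# Hypothetical (damage coefficient, carbon price) pairs generated from the
# 2050 line reported in Figure 4 -- illustration only, not DICE model output.
x = np.array([0.0005, 0.001, 0.0015, 0.002, 0.0025, 0.003])
y = 21320 * x + 0.5752

# A degree-1 least-squares fit recovers the slope and intercept.
slope, intercept = np.polyfit(x, y, deg=1)
```

On real model output the fitted slope is what enters Table 2 as the carbon-price sensitivity for that year.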
Table 2: The Sensitivity Ratio and R²

Year   Nordhaus' Slope   Stern's Slope   Sensitivity Ratio   Nordhaus' R²   Stern's R²
2050   21320             95402           0.22                0.998          0.991
2075   38371             165687          0.23                0.998          0.994
2100   61342             250906          0.24                0.998          0.995
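Each sensitivity ratio is simply Nordhaus' slope divided by Stern's slope for that year; a quick check using the slopes in Table 2:

```python
# Slopes from Table 2: carbon price response to the damage coefficient.
nordhaus_slope = {2050: 21320, 2075: 38371, 2100: 61342}
stern_slope = {2050: 95402, 2075: 165687, 2100: 250906}

# Sensitivity ratio: Nordhaus' slope as a fraction of Stern's, by year.
sensitivity = {year: round(nordhaus_slope[year] / stern_slope[year], 2)
               for year in nordhaus_slope}
print(sensitivity)
```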
Figure 6: Temperature change over time with Stern's discount rate at a damage coefficient of
0.0018. [Chart: Temperature increase (°C), 0 to 2.5, against Year, 1900 to 2400.]
Figure 7: Temperature change over time with Nordhaus' discount rate at a damage coefficient of
0.0018. [Chart: Temperature increase (°C), 0 to 3, against Year, 1900 to 2400.]
Figure 8: Total Carbon Emissions vs the Damage Coefficient on Temperature Squared in years
2050, 2075, and 2100
Do expected marginal revenue products for National Hockey
League players equal their price in daily fantasy games?
Benny Goldman ’16
Introduction to Econometrics
The equality between wages and marginal revenue products is a backbone of competitive labor
markets. This study will seek to test the congruity between the two in the market for players in daily
fantasy hockey games. Any observed and statistically significant incongruity would lead to the
conclusion that an individual can earn long run profit playing daily fantasy games. Both fixed
effects and pooled regressions are employed to isolate inequalities between prices and expected
marginal revenue products for players in daily fantasy hockey games. Any such deviation could
potentially be explained by utility maximizing gamblers or incomplete information. Robust results
suggest that players playing at home and players playing against weak opponents relative to their
own team strength are undervalued. Players who have performed above their average performance
in recent games are overvalued. Although it is clear expected marginal revenue products and prices
do not equate, performance is still largely random and hard to predict.
I. Introduction
Do expected marginal revenue products for National Hockey League (NHL) players equal their
price in daily fantasy games? The immense popularity of professional sports in the United States
has led to rapid growth in a number of secondary markets. A fairly recent development has been
the emergence of fantasy sports in general and fantasy leagues in particular. Fantasy leagues give
fans the opportunity to "draft" and trade for players, as a general manager of a sports franchise
would, in order to compete against teams chosen by other fantasy owners.
Scoring in fantasy leagues is based on the performance data of real athletes in live
competition. These fantasy games are traditionally played over the course of a season, beginning
with a draft. The players chosen in the draft are yours to keep for the ensuing season (barring any
trades or additions of players not chosen in the draft). The popularity of fantasy games in the last
decade has led to a new format: the daily game.
The daily fantasy games give avid players an opportunity to assemble a new team and
compete on a daily basis. The daily games have markets for players. You are given an artificial
budget of $55,000 and asked to select a roster of nine players, each of whom ranges in price from
$3,000 to $12,000.
The market for NHL players in daily fantasy games may be partially composed of
consumers seeking to maximize utility instead of points. In a competitive market with perfect
information the value of all players would be equal. In other words, the cost of a fantasy point
would be equal across all players. The standard deviation of the cost of a fantasy point based on
player season averages is $191.20. The Calgary Flames' Mark Giordano can get you a fantasy point
for $1,264.71, but if you want a point out of Jeff Carter of the Los Angeles Kings, that is going to
run you $3,850. If you believe that season averages are a decent predictor of daily performance
(this hypothesis will be tested in this paper), then you believe there is money to be made by playing
daily hockey games.
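The cost of a fantasy point here is just a player's price divided by his season-average fantasy points. A small sketch; the price and season-average inputs below are hypothetical values chosen only to reproduce the per-point costs quoted above, not the players' actual numbers:

```python
def cost_per_point(price, season_avg_points):
    """FanDuel price divided by season-average fantasy points."""
    return price / season_avg_points

# Hypothetical inputs -- only the resulting per-point costs match the text.
giordano = cost_per_point(4300, 3.4)   # about $1,264.71 per point
carter = cost_per_point(7700, 2.0)     # $3,850 per point
```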
The goal of this paper will be to analyze the relative efficiency of the player markets and
determine whether there is potential to make long run profit as a team owner in a daily fantasy
game. This analysis will be done using the point structure and player prices from FanDuel, a
leader in the daily fantasy game industry. According to FanDuel's "about" page, it pays out
$6,000,000 in prizes every week. Some players, it is claimed, gross $5,000 a day in winnings
(https://www.FanDuel.com/about-us).
In section 2 a review of the literature on hockey performance and marginal revenue
products is presented. Section 3 will build a theoretical framework for analysis. Sections 4, 5, and 6
will include summary statistics, analytics, and robustness checks. In section 7 conclusions are
drawn and directions for future research are discussed.
II. Literature Review
Any deviation in prices from expected marginal revenue products could be due to utility
maximizing behavior (instead of fantasy point or profit maximizing behavior) on the behalf of
consumers or incomplete information. Paul and Weinbach (2010) looked closely at the forces that
drive consumer behavior with respect to sports gambling. They found that investment-based
gamblers are in the minority and that consumers' betting decisions are often determined by which
team is being broadcast on television or which team is believed to be "better." Both team quality
and television availability had positive and significant effects on betting volume. These types of
biases among consumers can lead to inefficiencies in betting lines, and in the case of daily fantasy
games, player prices. If players who are going to be playing in a nationally broadcast game see an
increase in demand, they become overvalued and priced above their anticipated productivity levels.
Inefficient pricing, defined as deviation from the price called for by competitive markets with
perfect information, is certainly not unique to daily fantasy games. After all, it was mispricing that
launched the Moneyball revolution in Major League Baseball. Oakland Athletics' general manager
Billy Beane used statistics like slugging percentage and on base percentage to value prospects (as
opposed to accepted measures like batting average and runs). Hakes and Sauer (2006) confirmed
that on base percentage was indeed undervalued in the market for baseball players. As a result, the
Oakland Athletics were able to thrive in Major League Baseball's American League despite a
consistently small payroll. This performance boost lasted only a short while. Hakes and Sauer
(2007) determined that the benefits of the strategy are largely dependent on the number of imitators.
Once teams processed the value of on base percentage, prices adjusted, and the advantage was all
but gone.
Billy Beane's success in Oakland was well documented. Author Michael Lewis wrote a book titled
"Moneyball," and soon after, teams in all four major sports began to employ advanced statistical
analytics to improve in-game strategy and prospect valuations. Mason and Foster (2007) concluded
that the implications of Moneyball to hockey might be limited. Baseball play is isolated; the batter
and pitcher exist in a near vacuum. Much of a player's hockey performance, on the other hand, is a
function of the talent around him. Predicting and evaluating hockey performance is a challenge
because there are limited statistics available. That being said, after the implementation of the
salary cap in the NHL, the valuation of prospects became all the more important and teams started
to open up to the idea of using statistical techniques to value players (Mason et al., 2007).
The first step required to understand the value of an NHL player is determining the variables that
contribute to performance. Gramacy et al. (2012) argue that the plus-minus statistic is flawed. The
plus-minus is a common statistic used to measure hockey performance. You earn a point for being
on the ice when your team scores an even strength goal and you lose a point for being on the ice
when the opposing team scores an even strength goal. Gramacy et al. (2012) notes that this only
measures marginal effects of players and becomes a rather inaccurate predictor of performance
because it does not control for skill of your teammates or your opposition. Gramacy et al. (2012)
use logistic regressions and conclude that most players do not have a "measurably strong" effect on
team performance. This supports the belief that the NHL is a star-driven league. The small
variability in player effect on team performance allows the stars to look especially great and leads to
the existence of undervalued prospects (Gramacy et al., 2012). Not only do undervalued prospects
exist, but "some of the higher paid players in the league are not making contributions worth their
expense," (Gramacy et al., 2012).
Beyond the plus-minus, teams have traditionally used other statistics in an attempt to value
performance. Kahane (2001) concludes that much of the variability in NHL player pay can be
attributed not only to differences across players, but differences across teams. Kahane (2001) uses a
maximum likelihood estimator to demonstrate the difference across teams. Kahane (2001) estimates
that 2.2% of variability in player salaries can be attributed to the fact that different teams have
different willingnesses to pay, mostly due to varying levels of revenue. Teams also differ in how
they reward changes to performance. Certain teams reward or punish deviations from expected
production more so than others (Kahane, 2001).
Much of the variability in pay among players can be explained with a few basic hockey statistics.
Points per game, all star game appearances, penalty minutes per game, and being picked in the first
or second round of the draft all had positive and statistically significant effects on pay (Eastman et
al., 2009). Eastman et al. (2009) also found that the plus-minus statistic was a strong determinant
of earnings, specifically for defensemen. Eastman et al. (2009) reported heteroskedasticity as an
estimation issue. It turns out that career statistics are far more predictive of pay for high paid stars
than for low paid stars.
The literature on hockey performance and its relation to player value is very focused on
determinants of career performance and how those affect teams' willingness to pay a player. Instead
of focusing on teams and seasons, this paper will seek to fill a niche in the literature by determining
the variables that affect daily performance of NHL players and then testing the efficiency of daily
player prices.
III. Theory
In order to build a theoretical framework to understand pricing deviations from marginal revenue
products, the factors that drive expected marginal products must be established, and then it must
be determined if those same variables affect prices. It will be assumed that players of daily fantasy
games are attempting to maximize points subject to choosing nine different players and constrained
by the $55,000 (of FanDuel money) salary limit. This assumption will be relaxed later.
Let X1 indicate player 1, X2 indicate player 2, and so on up to Xn, where n is the number of
possible players. These are dummy variables that take the form of 1 (active) or 0 (not active) based
on whether the player is chosen by the daily fantasy owner. Let S equal the budget of each daily
team owner (the $55,000 of "FanDuel" money given to be spent on players). M1 is the expected
marginal product of labor for player X1, M2 is the expected marginal product of labor for player
X2...Mn is the expected marginal product of labor for player Xn. Therefore the points a daily
fantasy game owner can expect to score are given by:
ExpectedPoints = M1X1 + M2X2 + M3X3 + ... + MnXn
To maximize points, the fantasy owner will need to use the full budget.
S = PX1X1 + PX2X2 + PX3X3 + ... + PXnXn
The maximizing decision, over X1 through Xn and λ, can be expressed as:

Max Φ = (M1X1 + M2X2 + M3X3 + ... + MnXn) − λ(S − PX1X1 − PX2X2 − PX3X3 − ... − PXnXn)
First order conditions 1, 2, and n + 1:

∂Φ/∂X1 = M1 + λPX1 = 0    (1)
∂Φ/∂X2 = M2 + λPX2 = 0    (2)
∂Φ/∂λ = −S + PX1X1 + PX2X2 + PX3X3 + ... + PXnXn = 0    (3)
First order conditions one and two can be rearranged such that:

M1 = −λPX1    (4)
M2 = −λPX2    (5)
Equations 4 and 5 can be combined to show the optimization decision of the daily fantasy
player:

M1/M2 = (−λPX1)/(−λPX2)

The above equation can be simplified and rearranged:

PX1/M1 = PX2/M2
A profit-maximizing individual will maximize points by setting the ratio of the price of the
player to his expected productivity equal across all players. Because individuals face the same
prices and expected points, these ratios would equate across the market. This would mean
that all individuals are indifferent to the players they choose so long as the whole $55,000
budget is used. In such a scenario, only risk loving or risk neutral individuals would choose
to participate in daily fantasy games.
To demonstrate this concept, imagine that player 1 and player 2 have prices and
expected points such that PX1/M1 > PX2/M2. This would mean that a point from player 1
is more expensive than a point from player 2. An individual who is attempting to maximize
points on their fantasy team would never choose player 1 over player 2.
The demand for player 1 is perfectly elastic at the price P1 equal to his expected marginal product,
M1. Any price above P1 would exceed the expected marginal product. No profit-maximizing
fantasy player would choose player 1 if that were
the case. Any price below P1 would make player 1 undervalued. His expected marginal product
exceeds his cost. In such a scenario, all profit maximizing individuals would select player 1. This
would put upward pressure on price which would continue to rise until it returns to P1.
As Paul and Weinbach (2010) suggest, individuals may not be making optimal decisions
with respect to sports gambling. Many are influenced by exterior variables such as television
availability, or individuals do not have complete information. This would suggest a departure from
perfect competition as some individuals would be willing to pay above a player's marginal product if
that player's game is on television.
For example, assume player 1 is playing in a nationally broadcast game. Individuals who
receive utility from selecting player 1 and then watching him on television would have a willingness
to pay for player 1 above P1. This would put upward pressure on price, driving P1 above M1.
These utility-maximizing individuals will now be selecting an overvalued player. As a result of the
increase in demand for player 1, there is less demand for the remaining space of players (X2 through Xn). This
would put downward pressure on price and result in markets where the marginal product exceeds the
price. The increased demand for player 1 causes the rest of the players to be undervalued.
Individuals who are profit maximizing and do not choose player 1 will score more points than
those who do choose player 1. The presence of utility maximizing behavior creates potential long
run profit for daily game players.
Uncertainty may also result in a departure from perfect competition. Because prices are
predetermined and displayed for all individuals, there is no uncertainty with respect to prices.
That being said, the values of M1 through Mn are not known. In order to maximize points,
individuals must first make a prediction or a probabilistic distribution of the values of M1
through Mn. Those who are able to make the most accurate predictions of M1 through Mn
will have the highest probability of maximizing points and an opportunity to earn long run profit.
The remainder of this paper will analyze whether or not expected marginal revenue
products do in fact equal prices. Any deviation between the two leaves an opportunity for
individuals to make a long run profit by selecting undervalued players. To do this, two equations
must be estimated.
First, it must be determined what variables affect player performance on a night-to-night
basis. The quality of a player, his recent performance, his health, his opponent's fatigue, whether or
not the player is playing at home, and the relative strength of his team compared to his opponent
would all be expected to have a positive correlation with fantasy points. The fatigue of a player is
hypothesized to have a negative correlation with fantasy points.
The following will be the guiding equation:

FantasyPoints_it = β0 + β1PlayerQuality_it + β2RecentPerformance_it + β3Fatigue_it +
β4StrengthDifferential_it + β5OpponentFatigue_it + β6HomeIce_it + β7Health_it + ε_it
The next step will be to determine if the variables that affect performance are the same variables
that affect price:

Price_it = α0 + α1PlayerQuality_it + α2RecentPerformance_it + α3Fatigue_it +
α4StrengthDifferential_it + α5OpponentFatigue_it + α6HomeIce_it + α7Health_it + ε_it
Although it may seem obvious that the above variables are determinants of performance,
correctly estimating the magnitude of each effect, as well as assembling the optimal team, requires
that fantasy owners incur a large time cost. This time cost may help explain any potential incongruity
between expected marginal products and prices.
If it becomes clear that there are variables that affect performance, but not price, then prices
and expected marginal revenue products are not equal. The presence of variables that affect
price, but not performance, would also suggest an inequality between prices and marginal
revenue products. If price and performance are determined by the same variables and
magnitudes, then marginal revenue products and prices are equal.
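The point-maximization problem laid out above is a small budget-constrained selection (knapsack-style) problem. A brute-force sketch over a hypothetical five-player pool with three roster slots and a toy budget, standing in for the real nine-slot, $55,000 problem:

```python
from itertools import combinations

# Hypothetical (name, price, expected fantasy points) pool -- illustration only.
pool = [("A", 9000, 4.0), ("B", 7000, 3.5), ("C", 5000, 2.0),
        ("D", 4000, 2.2), ("E", 3000, 1.1)]
BUDGET, SLOTS = 18000, 3

# Enumerate all rosters of the required size, keep those within budget,
# and pick the one with the highest expected points.
best = max(
    (team for team in combinations(pool, SLOTS)
     if sum(price for _, price, _ in team) <= BUDGET),
    key=lambda team: sum(pts for _, _, pts in team),
)
print([name for name, _, _ in best])  # -> ['A', 'C', 'D']
```

With 201 players and nine slots, exhaustive enumeration is infeasible; in practice an integer program or a heuristic would replace the brute force, but the objective and constraints are the same.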
IV. Summary Statistics
Theory states that all variables that may affect a hockey player's performance for a given game
must be present in both the regression on price and the regression on fantasy points. These are
measures of player quality, recent performance, fatigue, team strength, opponent strength,
opponent fatigue, and whether or not the player is playing at home.
Panel data were collected from the hockey statistics website Hockey Reference
(http://www.hockey-reference.com). Hockey Reference uses the official statistics provided by the
NHL and has game-by-game data dating as far back as 1917 (http://www.hockey-reference.com).
For sake of consistency, performance statistics were only collected on players for which there
were also data available on their FanDuel market prices. There is no archive of FanDuel market
prices, and the data therefore had to be collected daily. The collection began on January 9, 2014,
and continued for approximately a month until the NHL Olympic break on February 8, 2014. The
resulting dataset contained 2240 observations of FanDuel market prices on 201 different players.
Data were collected for every game in the 2013-2014 NHL season for the 201 players. A
summary of the data collected appears in Table 1. Only games where there are observations of
the player's price are used in analysis. The collection on prices began on January 9, 2014. This is
about halfway through the NHL season, and the data have a few helpful qualities as a result. As
seen in Figure 1, the winning percentages of the NHL teams are very stable by the beginning of
January. This means that measures such as team strength and opponent strength are more
accurate than they would be earlier in the season. The same goes for the player's season averages.
The season averages are more representative of the player's ability as the season goes on.
The first dependent variable, fantasy points, is defined by FanDuel to be:

FantasyPoints = 3·Goals + 2·Assists + PlusMinusRating + 0.25·PenaltyMinutes +
0.5·PowerPlayGoals + 0.5·PowerPlayAssists + 0.4·Shots
Observations of fantasy points were between -5 and 14.60. Fantasy points had a mean of 2.08 and
a standard deviation of 2.58 points. Clearly, player performance is very variable with a standard
deviation that is larger in magnitude than the mean. There were 2240 observations of fantasy
points, an average of 11.2 observations per player (meaning there was about 11 games of data for
each of the 201 players). The calculation of fantasy points serves as an all-encompassing
performance variable. It includes all of the relevant statistical measures of hockey performance, and
weights them according to their importance.
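The scoring rule above translates directly into a function; a minimal sketch, with a hypothetical stat line for illustration:

```python
def fanduel_points(goals, assists, plus_minus, pim, pp_goals, pp_assists, shots):
    """Fantasy points under the FanDuel scoring rule quoted above."""
    return (3 * goals + 2 * assists + plus_minus + 0.25 * pim
            + 0.5 * pp_goals + 0.5 * pp_assists + 0.4 * shots)

# Hypothetical stat line: 1 goal, 1 assist, +1 rating, 2 penalty minutes,
# 1 power-play goal, 0 power-play assists, 4 shots.
pts = fanduel_points(1, 1, 1, 2, 1, 0, 4)
```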
The second dependent variable, FanDuel market prices, are posted daily on FanDuel. The 2240
observations ranged from 3000 to 10500 with a mean of 4747.77 and a standard deviation of
1395.94. The large variability in prices complements the large variation in performance.
The player quality variable can be treated in one of two ways. In the main results section of the
paper (Table 2) a fixed effects regression is used to control for time invariant differences across
players. This is the most accurate way to control for player quality because it is effectively generating
a dummy variable for each player. A second method (which will also be tested) is to use the player's
season average fantasy points as a proxy variable for player quality. The reason this method is
inferior is that it becomes more accurate and less variable as the player has more observations. The
season averages have a mean of 2.35 with a standard deviation of 0.65, a minimum of 1.20, and a
maximum of 4.88. These can be compared to the daily measurements of fantasy points, which are
far more variable. A player's performance in a given game may have a large random component,
but better players have higher averages when large samples of games are pooled. Figure 2 shows
the kernel density plots of season average fantasy points and daily fantasy points.
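The fixed-effects (per-player dummy) approach is equivalent to demeaning each player's observations, the so-called within transformation. A sketch on synthetic panel data shaped roughly like the one described above; the data-generating numbers are assumptions for illustration, not estimates from this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, games = 201, 11
player = np.repeat(np.arange(n_players), games)

quality = rng.normal(2.35, 0.65, n_players)       # time-invariant player quality
x = rng.normal(0.0, 1.0, n_players * games)       # a time-varying regressor
y = quality[player] + 0.5 * x + rng.normal(0.0, 2.58, n_players * games)

def within(v):
    """Demean v by player -- absorbs quality exactly as player dummies would."""
    means = np.bincount(player, weights=v) / games
    return v - means[player]

# The slope on the demeaned data recovers the 0.5 effect despite the
# unobserved player quality in y.
beta = np.polyfit(within(x), within(y), 1)[0]
```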
Recent performance is measured in a few different ways. The first is simply an average of the
player's fantasy points in his last 5 games. This measure ranged from -0.74 to 8.20 with a mean of
2.23 and a standard deviation of 1.37. Using the same technique but including only the last 3 games
yields a mean of 2.22, a standard deviation of 1.69, and a range from -1.50 to 9.57. This continues
to support the notion that performance is more volatile over short samples of games. The 5 games
metric has a lower standard deviation. A second set of recent performance variables for which the
player's season average is subtracted from their performance in the last 3 and 5 games was also
created. The moments of these transformed variables are given in Table 1.
The team strength differential variable measures how strong a player's team is relative to his
opponent. The variable was calculated by simply subtracting opponent winning percentage from
team winning percentage (both of these variables are also shown in Table 1). The strength
differential variable ranges from -0.44 to 0.44 with a mean of 0.03 and a standard deviation of 0.16.
Negative values suggest a team is playing a stronger opponent, and positive values are given for
teams playing a weak opponent relative to their own team.
Home ice, fatigue, and opponent fatigue are all dummy variables. Whether or not a player or
opponent played in a game yesterday are used as proxies for fatigue and opponent fatigue. The
home ice variable takes on a value of 1 if the player is playing home and a value of 0 if the player is
on the road. The mean is .51 and the standard deviation is .50. The played yesterday variable is 1 if
the player had a game yesterday and 0 if the player did not. It has a mean of .17 and a standard
deviation of .37. Similarly, the opponent played yesterday variable is 1 if the opposing team had a
game the previous day and 0 if they did not. The mean of the opponent played yesterday variable is
.16 with a standard deviation of .37. Hockey games are usually not scheduled on back-to-back days,
hence the low means of the played yesterday and opponent played yesterday variables.
V. Analysis
Estimation Issues
The first issue is multicollinearity between the regressors. This is only an issue between the
variable used to control for player quality and the measures of recent performance. A player who
has a high season average is likely to have a similarly high average for performance in their last 5
games. This is because the player's performance in the last 5 games is included in the season
average. Season average and performance in the last 5 games have a correlation coefficient of .53.
Season average and performance in the last 3 games have a correlation coefficient of .42. The
variance inflation factors on season average and the average of performance in the last 3 games are
1.28 and 1.27, respectively. Although the variance inflation factors are low, the large correlation
coefficients are still cause for concern.
By simply differencing the player's season average from his recent performance, the variance
inflation factors on both variables fall to 1.01. Not only do the variance inflation factors fall, but
also the interpretation of the coefficients is now straightforward. The coefficient can be interpreted
as the effect on fantasy points of a one point increase in the difference between recent performance
and season average. The demeaning of the season averages allows for easy comparison across
players of different quality.
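The effect of demeaning on the variance inflation factors can be reproduced on synthetic data. With two regressors the VIF is 1/(1 − r²), where r is their correlation; the numbers below are synthetic, chosen only to mimic the pattern described (a collinear pair whose VIF falls toward 1 after differencing):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2240
season_avg = rng.normal(2.35, 0.65, n)
# Recent performance correlates with the season average, since the recent
# games are themselves part of the average.
recent = 0.8 * season_avg + rng.normal(0.0, 1.0, n)

def vif_pair(a, b):
    """Variance inflation factor for two regressors: 1 / (1 - r^2)."""
    r = np.corrcoef(a, b)[0, 1]
    return 1.0 / (1.0 - r ** 2)

vif_before = vif_pair(season_avg, recent)                # collinear pair
vif_after = vif_pair(season_avg, recent - season_avg)    # after differencing
```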
Heteroskedasticity is present in all tested specifications. Figure 3, a plot of the combined residuals
(fixed component and the overall component) against the fitted values for the main estimation
equation (Table 2, column 3), has the appearance of homoskedasticity. However, a modified Wald
test (which has a null hypothesis of homoskedastic data) on the fixed effects regression yields a
p-value of 0.0000. This leads to a rejection of the hypothesis that the data are homoskedastic.
Due to the unavailability of the proportionality factor, a weighted least squares method to remedy
the heteroskedasticity is not feasible. Not only is the proportionality factor unobserved, but
weighted least squares also complicates the coefficient interpretations. Instead, I report robust
standard errors that account for the probable downward bias on the standard errors caused by
heteroskedasticity.
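Heteroskedasticity-robust (White/HC1) standard errors can be computed directly with the sandwich formula; a self-contained numpy sketch on synthetic data whose error variance grows with the regressor (this illustrates the estimator itself, not the paper's regressions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])          # intercept plus one regressor
y = 1.0 + 0.5 * x + rng.normal(0.0, x)        # error s.d. proportional to x

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS coefficients
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# HC1 sandwich: (X'X)^-1 X' diag(e_i^2) X (X'X)^-1 with small-sample scaling.
k = X.shape[1]
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc1 = (n / (n - k)) * XtX_inv @ meat @ XtX_inv
robust_se = np.sqrt(np.diag(cov_hc1))
```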
A third potential issue is the relative accuracy of the proxy variables used to capture fatigue.
Because team practice and travel schedules are not made public, it is difficult to estimate the fatigue
of one player compared to another. While playing yesterday certainly would induce fatigue, not
playing yesterday does not mean a player is fresh. Often teams have practices or spend their off
days traveling, both of which can be nearly as draining as playing a game.
Similar to fatigue, injuries are also likely to affect performance, but information on injuries is not
made available to the public unless the injury is extreme. Some teams go the full season without
ever listing a player on the injury report, and nearly all injury reports are for players that have been
placed on injured reserve and will be forced to sit out. Unlike the National Football League, the
National Hockey League does not require teams to submit a full list of their injured players and the
severity of the player's injuries before each game. The lack of public injury reports is a potential
estimation problem. Hockey is an extremely violent sport, and it is not realistic to assume that the
only players bothered by injury are the very few (if any at all) who the teams list on their injury
reports.
The danger of an inaccurate proxy for fatigue and the lack of an injury variable is endogeneity. If
injuries affect fantasy points on a given night, the error term would now include the effect of the
injuries. Endogeneity would then be present if injuries are correlated with any of the regressors.
This does not appear to be the case. A regression of the residuals on the independent variables
results in coefficients that approach 0 and t statistics of 0.00 (and P values of 1.000). The lack of an
injury variable and a potentially poor proxy for fatigue do not appear to cause endogeneity.
Main Results
The main results are presented in Table 2. The first step in examining the relative efficiency in the
market for players is pinpointing the determinants of the marginal products, and then the
determinants of the prices. If these two are determined by different variables, then undervalued
and overvalued players exist in the market. The results presented show that the determinants of the
marginal products and prices are in fact different.
Given the name of a player, it would be very difficult to predict that player's performance on a given night. The opposite is true for prices. A simple regression of price on a dummy variable for each of the 201 players yields an R-squared of .96 and an F test with a P value of 0.000. This means that 96% of the variation in prices can be explained by player identity alone, and prices are statistically significantly different across players. A similar phenomenon does not exist for the marginal products. A regression of fantasy points on a dummy variable for each player has an R-squared of 0.14 and an F test with a P value of 0.000. Although marginal products are statistically significantly different across players, differences across players only account for 14% of the variation in fantasy points.
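The R-squared from a regression on a full set of player dummies is just the between-player share of total variance. A sketch with simulated data (all numbers illustrative, not the paper's) reproduces the qualitative pattern: player identity explains nearly all of the price variation but little of the nightly scoring variation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
players = np.repeat(np.arange(50), 20)  # 50 players, 20 games each

# Prices are stable per player; nightly fantasy points are noisy.
price = np.repeat(rng.normal(4700, 1500, 50), 20) + rng.normal(0, 100, 1000)
points = np.repeat(rng.normal(3, 0.5, 50), 20) + rng.normal(0, 2.5, 1000)

def dummy_r2(y, groups):
    """R-squared of a regression of y on a full set of group dummies:
    1 - (within-group SS) / (total SS)."""
    df = pd.DataFrame({"y": y, "g": groups})
    within = df.groupby("g")["y"].transform("mean")
    sse = ((df["y"] - within) ** 2).sum()
    sst = ((df["y"] - df["y"].mean()) ** 2).sum()
    return 1 - sse / sst

print("price R2: ", round(dummy_r2(price, players), 2))   # close to 1
print("points R2:", round(dummy_r2(points, players), 2))  # much smaller
```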
The next step is to find the determinants of fantasy points and also test if players are truly the
only variable that prices are dependent upon. As determined by theory, a control for player quality
is relevant in estimating fantasy points. There are two methods of controlling for player quality.
The first involves using a proxy variable, season averages, that would control for differences in
player skills by differentiating players via their average production. The potential issue with season
averages is that they are more variable in the beginning of the season when the player has played
fewer games, and they become a more accurate measure of quality as the season goes on.
A second method of controlling for player quality is simply estimating a fixed or random effects
regression. The random effects regression would be appropriate if the unobserved differences in
player quality are uncorrelated with the regressors, whereas the fixed effects regression would be
appropriate if correlation between the time invariant differences between players and the regressors
existed. A Hausman test with a null hypothesis of no systematic differences in coefficients was
employed to determine which regression was appropriate. The test returned a P value of 0.0000.
This would lead to a rejection of the null hypothesis that a random effects regression is appropriate.
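The Hausman statistic behind this choice contrasts the two estimators directly. A minimal sketch follows; the fixed effects point estimates borrow the values quoted later in the text (0.39, 0.98, -0.17), while the random effects values and both covariance matrices are hypothetical, chosen only for illustration.

```python
import numpy as np
from scipy import stats

def hausman(b_fe, b_re, V_fe, V_re):
    """Hausman statistic H = (b_FE - b_RE)' (V_FE - V_RE)^{-1} (b_FE - b_RE),
    chi-squared with k degrees of freedom under the null that the
    random effects estimator is consistent."""
    d = b_fe - b_re
    H = float(d @ np.linalg.inv(V_fe - V_re) @ d)
    p = stats.chi2.sf(H, df=len(d))
    return H, p

# Hypothetical estimates for illustration only.
b_fe = np.array([0.39, 0.98, -0.17])
b_re = np.array([0.35, 0.40, -0.05])
V_fe = np.diag([0.02, 0.09, 0.004])
V_re = np.diag([0.015, 0.06, 0.003])

H, p = hausman(b_fe, b_re, V_fe, V_re)
print(f"H = {H:.2f}, p = {p:.6f}")  # a tiny p-value favors fixed effects
```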
The fixed effect regressions were estimated on both fantasy points (column 3) and price (column
4). The regressions were estimated with the same independent variables, but the results were quite
different. Robust standard errors are reported in order to correct for downward bias of the standard
errors caused by heteroskedasticity.
Theory predicted that a player who is performing in his home arena would have a higher marginal
product than a comparable player who is playing an away game. The regression in column 3
confirms this result. Holding team strength, opponent strength, recent performance, fatigue, and
opponent fatigue constant, being at home increases a player's production by 0.39 fantasy points.
The result is statistically significant at the 0.01 level. The same result, however, is not present in the
regression on price. Playing at home has an extremely small and non-statistically significant effect
on price. Home players are undervalued in the market. Given a choice between two players who
only differ in game location, the daily fantasy owner can increase their expected points by selecting
the player who plays at home (it does not, however, appear that all fantasy owners are doing that).
It was also projected that players would score fewer fantasy points against a strong opponent
than they would against a weak opponent. The regression supported that hypothesis. For a one-unit increase in the difference between a player's team strength and his opponent's strength, that
player can be expected to score .98 more points. A one-unit increase is not realistic, however,
because the winning percentages range from 0 to 1. An increase by 0.16 units (the standard
deviation) would be expected to increase a player's output by .157. The result was statistically
significant at the 0.05 level. Opponent strength relative to team strength was not a statistically
significant determinant of price. The standard error (50.40) was nearly five times as large as the
estimated coefficient (12.64). The coefficient, 12.64, was also very small when taking into account
the average player price is 4747.77. This means that players who are matching up against teams that are strong relative to their own are overvalued on the market. Players matching up against teams that are weak relative to their own are undervalued, and selecting these undervalued players (while avoiding their overvalued counterparts) can lead to long-run profit.
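The rescaling above is simply the estimated coefficient multiplied by the regressor's standard deviation, using the two figures quoted in the text:

```python
# Effect of a one-standard-deviation change in the strength differential.
coef = 0.98   # points per one-unit change in the differential (from Table 2)
sd = 0.16     # standard deviation of the differential
effect = coef * sd
print(round(effect, 3))  # 0.157, matching the figure quoted in the text
```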
Contrary to what was predicted by theory, the recent performance of a player had a negative
and statistically significant effect on fantasy points. The effect, however, was not economically
significant. The regression predicted that holding all else constant, a player who increases his
recent performance in the last 3 games over his season average by one point would experience a
decline in fantasy points of .17. The result was significant at the 0.01 level. Recent performance actually had the opposite effect on price. "Hot" players seem to become more popular
and apply upward pressure on price. A one-unit increase in the average performance of a player in
his last 3 games over his season average yields an increase in price of 85.44. The estimated
parameter was statistically significant at the 0.01 level. Players who have performed above their
season averages in the last 3 games are overvalued. There is a close to 0 change in expected fantasy
points, but a sizeable increase in price for a player that has been "hot."
Neither playing yesterday nor facing an opponent who played yesterday had a statistically
significant effect on fantasy points. Both estimated coefficients were negative and small. Theory
predicted that if playing yesterday is an accurate proxy of fatigue, players who played yesterday
would score fewer points and players who are facing an opponent who played yesterday would
score more points. Although the effect was not statistically different from 0, an opponent playing
yesterday caused a statistically significant (at the 0.01 level) drop in price of 40.62. A player who
played yesterday did not, however, have a statistically significant effect on price. This suggests that
players who are facing an opponent who played yesterday are improperly valued, and the market
believes players will score fewer points if their opponent had a game yesterday.
The results emphatically support the notion that marginal revenue products do not equate to
prices in the market for players. Players who are playing at home and playing against a team that is
weak relative to their own are undervalued. Players that have performed above their season average
in their last 3 games are overvalued. Lastly, players facing an opponent who played yesterday are
undervalued due to a drop in price with no observable effect on their performance. Paul and
Weinbach (2010) found this same effect, and determined that people sometimes make betting
decisions based on factors that do not affect outcomes.
Expected performance is still relatively unpredictable. Much of the variation in prices can be
explained by the player, but only 16% of the variation in fantasy points can be explained by the
variables presented. This characteristic will be further explored in the section to follow.
Robustness
This section will offer alternative approaches to the estimation methods presented in the main
results table in order to test the sensitivity of the parameters to various specifications and
regressions. Table 3 uses a pooled regression approach, as opposed to a fixed effects regression, in
order to control for differences in player quality. Also in Table 3 (columns 3 and 4), a near-identical regression to those presented in the main results table is used, but a five-game standard to measure recent performance is implemented (instead of the 3-game standard used in Table 2). Table 4 uses fixed effects regressions with controls for first-order autoregressive disturbances. This method, originally
presented by Baltagi and Wu (1999), is useful to estimate parameters of unbalanced panel data
where observations are not equally spaced over time or equally frequent across actors. The method
is also helpful in controlling for issues caused by heteroskedasticity (which is present). Table 5
estimates a random effects regression. Table 6 returns to the original fixed effects estimation
method, but makes substitutions for the team strength differential and recent performance
variables. Lastly, Table 7 uses pooled regressions with season averages to assess the sensitivity of
the variable substitutions made in Table 6.
The results presented in Table 2 all use a fixed effects regression to control for player quality, or
time invariant differences across players. Table 3, columns 1 and 2, present similar regressions to
those presented in Table 2, but use season averages as a proxy for player quality instead of a fixed effects regression. The results only slightly differ from those found in Table 2. Robust standard errors are reported in Table 3 as well, due to the persistence of heteroskedasticity in the
pooled regression. The Breusch-Pagan/Cook-Weisberg test for heteroskedasticity (null hypothesis
of homoskedastic data) returns a P value of 0.000 for all four regressions in Table 3.
Both the fixed effects regression and the pooled regression with season average controls (Table 3,
column 1) yield positive and statistically significant effects of playing at home. The size of the effect
is not statistically different between the two regressions. Similarly, in both regressions the effect of
playing at home on price is not statistically significantly different from 0. Players who are playing at their home arena are still undervalued when season averages are used as a proxy for player quality.
The parameter estimates on season averages are positive and statistically significant for both fantasy points and price. A one-unit increase in a player's season average means the player is
predicted to score .76 more points in a given game and cost 2005.62 more (both are significant at
the 0.01 level).
The estimate on the effect of opponent strength relative to team strength becomes statistically
insignificant on performance, and remains insignificant on price. Using season averages to control
for player quality diminishes the returns to playing a poor opponent relative to a player's own team.
The effect falls from 2.12 to -.25 (and loses its significance). The effect on price is still small and
insignificant with a standard error roughly 3.5 times the size of the coefficient estimate. A pooled
specification would suggest that fantasy owners are behaving optimally with respect to their
treatment of team strength differentials.
The previously economically insignificant effect of recent performance is now no longer statistically
different from 0. The effect remains positive on price, but it is not quite as large. It dropped from
85.44 to a 26.51 increase in price for every one-unit increase in the recent performance variable.
Players who have performed above their season average in the previous 3 games remain overvalued.
Lastly, coefficients on playing yesterday and the opponent playing yesterday were both
economically and statistically insignificant on fantasy points and price. This is a departure from the
main results where an opponent playing yesterday put downward pressure on a player's price.
Under a pooled specification, the market appears to correctly estimate the effect of playing
yesterday or facing an opponent who played yesterday on performance.
Using season averages as a proxy control for player quality yields approximately the same results
as the fixed effects regression in Table 2. The largest difference is the diminished effect of playing a
weak opponent relative to a player's own team, and a lack of price adjustment for opponents who
played the day before. The explanatory power of both models, fantasy points and prices, fell when
the pooled regression was employed.
Similar to the fixed effects regression, a pooled regression with dummy variables for each player
was implemented in columns 3 and 4 of Table 3. The big difference between the two methods is
the use of a 5-game lag on recent performance as opposed to 3 games. This style of regression
actually provides an equally good fit for the data as the main results regression. The R-squared (and
adjusted R-squared) on both the fantasy points regression and the price regression improve over
those in Table 2.
Although the overall fit of the data improves, there are few differences in the parameter estimates
between the two methods. Home ice remains positive, economically significant, and statistically
significant on fantasy points, but has no significant effect on price. The team strength differential is
estimated to be roughly the same size and significance as the fixed effect regression while still
having an effect on price that is small and statistically indistinguishable from 0.
Including a recent performance metric based on the previous 5 games has a dramatic effect. The variable suggests that recent performance and current performance are negatively related. A one-unit increase in the difference between the average of the player's last five games and his season average leads to a drop in fantasy points of 0.28 (significant at the 0.01 level). It has an
even larger effect on price than the effect estimated in Table 2. The parameter nearly doubles in size
from 85.44 (from the main results) to 151.47, both of which are statistically significant at the 0.01
level. The effect of playing yesterday still remains economically and statistically insignificant on both
price and fantasy points. The parameters appear to be insensitive to the removal of the opponent played yesterday variable; its exclusion failed to cause any disturbance.
Employing a pooled regression with a recent performance variable dating back to 5 games and
dummy variables for all players results in the same conclusions as presented by a fixed effects
regression. The market continues to undervalue or overvalue the same attributes, but lagging the
recent performance variable back to 5 games appears to double the effect that performing above
season average has on price.
Returning to the fixed effects specification used in Table 2, but employing a control for a first-order autoregressive disturbance, yields large differences in the parameters. Although serial
correlation is not present, the regression control also assists with heteroskedasticity (which is
present) and is appropriate for unbalanced panel data. This estimation method yields improved
explanatory power for variation within each player, but has close to 0 explanatory power for
variation between players (R-squared values lower than .001).
The effect of home ice and strength of opponent relative to a player's own team become
statistically insignificant with respect to both price and fantasy points. Players who have performed above their season average in the past three games appear to be even further overvalued. Recent performance has a negative effect on fantasy points but a positive effect on price (both statistically significant). Playing yesterday now has a negative and statistically significant effect on performance, and no effect on price. Facing an opponent who played yesterday is insignificant on both.
The results are only slightly sensitive to changes in specification. Using the same autoregressive
correction as just described, but dropping the played yesterday variable, dropping the opponent
played yesterday variable, and replacing the 3 game lag of recent performance with a 5 game lag,
leads to slightly different conclusions. The effects of playing at home and of playing a weak team become positive and statistically significant, as they were in the main results, but continue to have a
small and statistically insignificant effect on price. The 5 game lag appears to again have a greater
pull on price and performance than the 3-game lag. In Table 4, columns 3 and 4, a one-unit increase in average performance in the last 5 games minus a player's season average would lead to a drop
production of 1.88 points, but a rise in price of 93.82 (both statistically significant at the 0.01 level).
Again, the market appears to be severely overvaluing players who have performed above their
standard levels in previous games.
Table 5 uses a random effects estimator, as opposed to a fixed effects estimator, with the same
independent variables as used in the main results section. The results are nearly identical, except for
the estimate of opponent strength relative to a player's team strength. This result becomes
statistically and economically insignificant. The explanatory power of both models falls. The random
effects regression can only explain .1% of the variation in fantasy points and 19% of the variation
in prices.
In Table 6, a fixed effects equation was used. The regressions differ from the main results
because the opponent's winning percentage is used instead of the strength differential. This
removes the effect of a player's own team and isolates the opponent. Instead of using a demeaned
performance in the last 3 or 5 games, the recent performance variable simply evaluates the player's
average fantasy points in the last 3 or 5 games. The results stick closely to those of the main
results. The only significant change is the statistical insignificance of the opponent strength
variable. Adjusting the specification to remove a player's own team's quality reveals a relationship of
the same sign as the main results, but with no statistical significance.
Table 7 repeats the regressions in Table 6, but uses season averages as a proxy for player quality
instead of estimating a fixed effects regression. The conclusions drawn from the results are the
same, except for the variables on recent performance. The averages of the last 3 games and the last 5 games become insignificant when a pooled regression is used instead of a within effects estimator. The positive and statistically significant effect on price remains.
VI. Conclusion
It is clear that markets for players in daily fantasy games do not equate prices to expected marginal
revenue products. The mistakes in estimations of the expected marginal products are relatively
consistent over various specifications and estimation methods. Different specifications slightly altered the estimated market adjustment to facing a fatigued opponent, as well as the performance returns to performing well in recent games and to playing a weak opponent (relative to a player's own team).
There is a set of patterns that persisted through all estimation methods. Players who are playing
at home score more fantasy points, but do not cost more. The market is undervaluing players who
are playing in their home arena and overvaluing visiting players. The strength of a player's
opponent relative to their own team has a large effect on performance, but no observable effect on
price. Playing a poorer team leads to economically significant increases in performance and
undervaluation in the market. Lastly, above season average performance in recent games has an
effect on fantasy points that is not statistically differentiable from zero, but a large, positive, and
significant effect on price. Players who have performed very well in the past few games relative to
their usual performance are highly overvalued. In fact, in some specifications this actually had a
negative effect on performance and a positive effect on price, increasing the market error.
While the results show that the independent variables have statistically significant effects on
performance, the explanatory power of the models is limited. The highest observed R-squared was below .25, and many models could not explain even 10% of the variation in performance. Figure 2,
which compares the distributions of season average performance to daily performance, shows the
variability in daily performance. Prices are relatively stable for each player, and there are
opportunities to capitalize on variables that have significant effects on performance but not price.
Due to the randomness of daily performance, however, profit is far more feasible in the long run.
The main concern with the results presented is the persistence of heteroskedasticity across all
specifications and estimation methods. Robust standard errors were reported in an attempt to
avoid committing a type I error. Unavailable data on player health leaves open the possibility of an endogenous error term, but all tests point to endogeneity not being an issue. Also, the use of whether a player or team played yesterday as a proxy variable for fatigue is potentially improper. Travel schedules and practices can make off days nearly as tiring as playing a game.
Studying the relative efficiency of season long (as opposed to daily) fantasy markets would be an
appropriate follow-up to this study. As seen in Figure 2, season performance is far less variable
than daily performance, and if markets fail to process that, it is possible there is an even larger
opportunity for profit playing season long fantasy games. A different set of independent variables,
including information on age and position, may be relevant in such a study.
References
Baltagi, B. H., & Wu, P. X. (1999). Unequally spaced panel data regressions with
AR(1) disturbances. Econometric Theory, 15(6), 814-823.
Gramacy, R. B., Jensen, S. T., & Taddy, M. (2013). Estimating player contribution in
hockey with regularized logistic regression. Journal of Quantitative Analysis in
Sports, 9(1), 97-111.
Hakes, J. K., & Sauer, R. D. (2006). An economic evaluation of the moneyball hypothesis.
Journal of Economic Perspectives, 20(3), 173-185.
Hakes, J. K., & Sauer, R. D. (2007). The Moneyball Anomaly and Payroll Efficiency: A
Further Investigation. International Journal of Sport Finance, 2(4).
Kahane, L. H. (2001). Team and player effects on NHL player salaries: A hierarchical linear
model approach. Applied Economics Letters, 8(9), 629-632.
Mason, D. S., & Foster, W. M. (2007). Putting moneyball on ice? International Journal of
Sport Finance, 2(4), 206-213.
Paul, R. J., & Weinbach, A. P. (2010). The determinants of betting volume for sports in
North America: Evidence of sports betting as consumption in the NBA and NHL.
International Journal of Sport Finance, 5(2), 128-140.
Vincent, C., & Eastman, B. (2009). Determinants of pay in the NHL: A quantile
regression approach. Journal of Sports Economics, 10(3), 256-277.
Appendix
Does Objectification Affect Women’s Willingness to Compete?
Disa Hynsjö ‘14 and Vincent Siegerink ‘14
Behavioral Economics
This paper applies Fredrickson and Roberts’ (1997) Objectification Theory in order to investigate
whether objectification may be one factor explaining why women self-select into competition at a
lower rate compared to men. We use priming to implement an experiment which elicits competition
preferences in one control (neutral prime) and one treatment (objectification prime) group. Due to a
small sample size, we are unable to detect causal effects of objectification on female competition
preferences. Women who were exposed to objectification indicated to a greater extent that men are favored in the labor market, and reported marginally significantly lower degrees of self-esteem and self-confidence, compared to women who were not exposed to objectification. This suggests that, while
we were unable to detect causal effects on competition decisions, objectification has an influence on
factors that may be contributing to women’s unwillingness to engage in competition.
I. Introduction
Since the latter half of the past century, women have taken on an increasing share of leadership roles.
Still, men occupy the overwhelming majority of positions of power within business and politics and
tend to earn higher wages, both in the United States and around the world (Eagly and Carli 2007;
Weichselbaumer and Winter-Ebmer 2005; Niederle and Vesterlund 2007). One explanation for the
persistent gap is that men have a higher preference for competition than women. Several
economists find such gender differences using experimental studies1 (Croson and Gneezy 2009) and
often attribute this to differences in innate qualities. From an evolutionary perspective, engaging in
competition would have maximized males’ procreation opportunities, while women would have
developed a more cooperative character due to their stake in child rearing (Gneezy and Rustichini
2004).
Several recent economic studies question the argument that inherent sex traits are the only
explanation for the competition gender gap and show that societal and cultural influences relating to
gender may have contributed to the male-female differences observed in Western societies. One
comparative study between a matrilineal society (the Khasi in India) and a patriarchal society (the
Maasai in Tanzania) finds that women in the matrilineal society were significantly more likely to
1 Niederle and Vesterlund (2007) find that 73 percent of men select into competition while only 35 percent of women do, even when there was no gender difference in performance.
compete than both the men in the same society and the women in the patriarchal society (Gneezy,
Leonard and List 2009). Another study finds no significant gender difference in the willingness to
compete in 7-12 year old children in the same matrilineal society and another patriarchal society (the
Karbi in India). At age 15, boys in the patriarchal society had a significantly higher preference for
competition than girls in the same society. No gender gap had emerged in the matrilineal society at
this age, but 15 year-old girls in the matrilineal society were significantly more likely to compete than
girls in the patriarchal society (Andersen et al. 2013). These findings suggest that the Western gender
gap in competition may be due to learnt behavior that is particular to societies that favor men over
women. Boys around the ages 3-5 in the United States, and around the ages 9-10 in Israel, improve
their performance when competing, while girls do not (Samak 2013; Gneezy and Rustichini 2004).
However, Dreber, Essen and Ranehill (2011) do not detect any difference in performance under
competition in 10 year-old boys and girls in Sweden, and attribute these results to Sweden’s relatively
high gender equality compared to other Western countries.
Differences in competition preferences may be sensitive to a person’s every-day
environment and the nature of the competitive task. Teenage girls who attended single-sex schools
were more willing to compete, even in mixed-sex groups, than girls who attended coed institutions
(Booth and Nolen 2012). Booth and Nolen (2012) contend that these results suggest that the
competition gender gap stems from social learning differences between mixed and single-sex
environments. One criticism that extends to most research investigating the competition gender gap is that the competitive task2 may be considered 'masculine,' and that women's lower degree of
competitiveness may stem from ‘stereotype threat.’3 Indeed, the male-female difference disappears
when both including a task that is considered to be within the ‘feminine’ domain (fashion) as well as
a task considered to be within the ‘masculine’ domain (mathematics) (Wieland and Sarin 2012).
The cited evidence suggests that the male-female difference in willingness to compete, as
detected in Western societies, is the result of either only societal and cultural factors, or the
interaction of innate traits and such factors. Most economists seem to argue in favor of the latter
argument, but very little attention so far has been devoted to specifying precisely what those other factors may be, or how we may determine them. The only paper, to our knowledge, that addresses this uses priming to show that gender stereotypes affect women's willingness to compete, even within the very specific population of students in a highly selective MBA program in Canada (Cadsby, Servatka and Song 2013).

2 Mathematical and verbal tasks, mazes, and ball-throwing tasks have been used in the literature.
3 There exist social stereotypes that women are particularly bad at certain tasks, mathematics and the sciences for example. 'Stereotype threat', then, refers to a situation in which a woman is expected to perform such a 'masculine' task, but since she has internalized the negative stereotype of her performance in this particular area, she performs worse than she would have, absent any stereotypes (Quinn et al. 2006).
We contend that sexism, as an overarching phenomenon, affects the inclination of women
to engage in competitive environments. This umbrella term encapsulates different forms of sexism,
such as gender discrimination, objectification and misogyny. This paper contributes to the literature
that investigates which specific factors influence women’s willingness to compete by focusing
specifically on one particular aspect of sexism: the objectification of women.
To our knowledge, this is the first paper within the economic literature that attempts to use
Objectification Theory4 and self-objectification to explain the competition gender gap. This paper
thus contributes to the existing literature by offering new insights into whether ‘nurture’ helps
explain the competition gender gap, and specifically whether growing up and interacting in a sexist
society depresses women’s willingness to compete.
II. Theoretical Framework
Sexual objectification occurs whenever a person's body, or a part of the body, is regarded as if it were capable of representing the entire person (Fredrickson and Roberts 1997). Although all
persons can be victims of sexual objectification, contemporary culture objectifies women’s bodies to
a larger extent than men’s bodies (Fredrickson and Roberts 1997; Gay and Castano 2010). For the
purpose of this paper, we therefore focus specifically on the issue of female objectification.
Objectification of women's bodies occurs in a wide variety of forms, ranging from subtle gazes, or 'evaluations', to extreme sexual violence. Their common denominator is that, for anywhere from a transient moment to a long period of time, a person is treated as if she were 'just' a body, valued predominantly for its use to others (Fredrickson and Roberts 1997). Fredrickson and Roberts (1997), who formalized Objectification Theory and defined 'self-objectification' as a part, or consequence, of that theory, argue that constant interaction in environments where such objectification is present leads girls and women to 'self-objectify'. A woman who self-objectifies internalizes an outsider's view of herself and is continuously cognizant of how she appears to others and, subsequently, how others value her. Several
psychologists find support for Fredrickson and Roberts' self-objectification theory and assert, among other things, that it leads to depression (Szymanski and Henning 2007) and lower cognitive performance (Quinn et al. 2006; Gay and Castano 2010).
We hypothesize a negative relationship between self-objectification, as a consequence of
female objectification, and women's willingness to compete. A person will typically engage in competition only when there is a possibility of success. The decision to engage in competition is therefore contingent on the belief that one's own abilities outmatch those of the competitor. Whenever information about one's own abilities relative to those of the competitor is imperfect, it is necessary to rely on expectations of relative ability. Niederle and Vesterlund (2007)
show that confidence in one’s personal performance relative to others is positively related to
willingness to compete, and that men's higher level of overconfidence explains a large portion of the competition gender gap. Several other economists reach similar conclusions (see Niederle and Vesterlund 2011 for a summary). Self-objectification – the viewing of oneself as an 'object' that is being 'acted upon', as opposed to an 'agent' which 'acts' by, and for, itself – inherently contradicts the notions of being 'able' or 'skillful.' Therefore, it is possible that self-objectification, by lowering women's perception of their relative abilities, helps explain why women in Western societies avoid competition.

For simplicity, we model willingness to compete as the decision of one person, person A, to engage in a one-shot competition with another person, person B. We are interested in person A's decision and we assume that he or she maximizes expected gain, conditioned on risk preferences.

We consider person A's decision to compete to depend on three independent factors. Firstly, he or she has some perceived personal skill, ρ, which is based on previous experience with the task, or comparable tasks, and a general level of self-confidence. Secondly, person A perceives person B to have some ability, σ, which is based on person A's subjective evaluation of person B's previous experiences and the self-confidence person B exhibits. Thirdly, person A has some preference for risk, ω.

We let δ be the perceived skill, ρ, conditional on risk preference, ω, so that δ > ρ for a risk-loving person, δ = ρ for a risk-neutral person, and δ < ρ for a risk-averse person. Person A will then engage in the competition if δ(ρ, ω) > σ; if δ(ρ, ω) = σ, person A is indifferent. The likelihood that person A engages in a competition, θ, is thus modeled by the probability function

θ = P(δ(ρ, ω) > σ)    (1)

We can elicit the following three partial relationships:

∂θ/∂ρ = (∂θ/∂δ) · (∂δ/∂ρ) > 0

∂θ/∂ω = (∂θ/∂δ) · (∂δ/∂ω) > 0

∂θ/∂σ < 0

For women, we hypothesize that female objectification affects ρ negatively. The internalization of objectification, referred to as self-objectification, will cause a decrease in the evaluation of one's own skill ρ. If we let λ be a 'female objectification parameter', then we have ρ(λ, X), where X is a vector which includes all other factors that may influence perceived ability and self-confidence. According to our theory,

∂ρ/∂λ < 0

for women. Thus

∂θ/∂λ = (∂θ/∂δ) · (∂δ/∂ρ) · (∂ρ/∂λ) < 0

when person A is a woman. When person A is a man, the sign of ∂ρ/∂λ is ambiguous and may be equal to zero. The theoretical equation modeling the likelihood that a person engages in competition is then

θ = f(λ, ω, σ, X, ε),    (2)

where ε is random variation in person A's willingness to compete.
The vector X typically includes age, education and variables measuring socioeconomic status (Wieland and Sarin 2012; Gupta et al. 2013). Theoretically, we would expect higher levels of education and higher socioeconomic status to be associated with a higher preference for competition. This is because, independent of risk preference, the risk of engaging in competitions is lower for those who are better off. In addition, level of education may proxy for ambition and general ability, which we would assume to be associated with a greater willingness to compete.
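The comparative statics above can be illustrated with a short simulation. The functional forms and distributions below (δ = ρ·ω, a linear negative effect of λ on ρ, and normal draws for ρ and σ) are illustrative assumptions of ours, chosen only to satisfy the signs of the partial derivatives; the model itself does not specify them:

```python
import random

def delta(rho, omega):
    # Perceived skill conditioned on risk preference: omega > 1 for a
    # risk-loving person inflates rho, omega < 1 deflates it (assumed form).
    return rho * omega

def competition_rate(lam, n=10_000, seed=1):
    # lam is the 'female objectification parameter'; we assume it shifts the
    # distribution of perceived own skill rho downward (d rho / d lam < 0).
    rng = random.Random(seed)
    entries = 0
    for _ in range(n):
        rho = rng.gauss(0.5 - 0.3 * lam, 0.1)   # self-objectification lowers rho
        omega = rng.uniform(0.8, 1.2)           # heterogeneous risk preferences
        sigma = rng.gauss(0.5, 0.1)             # perceived ability of person B
        entries += delta(rho, omega) > sigma    # person A competes iff delta > sigma
    return entries / n

theta_base = competition_rate(lam=0.0)    # no objectification
theta_primed = competition_rate(lam=0.5)  # objectifying environment
```

With these assumed forms the simulated entry rate falls as λ rises, mirroring ∂θ/∂λ < 0 for women.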
III. Empirical Strategy
Outcome Variable
We follow the majority of the literature, which elicits willingness to compete by measuring whether
participants self-select into a competitive activity. In our experiment, participants were presented
with a ball-throwing task that would yield them a small amount of money per successful attempt.
The participants were offered the choice between a piece rate scheme of $0.75 per successful
attempt, independent of everybody else’s performance, or a tournament scheme with a reward of
$1.50 per successful attempt, but only if they had more successful attempts than an anonymous
competitor.
Female Objectification
The comparison of primary interest in this research is the competition preferences of women who
self-objectify and women who do not self-objectify. Ideally, it would be possible to identify two societies that differ only in their level of female objectification and in no other cultural aspect. Since no such societies can be identified, we instead use 'priming' as a treatment to simulate
an objectifying experience. Half of the participants in our experiment were subjected to this
treatment, with the other half serving as a control group.
Priming is a method that has been used successfully by both psychologists and behavioral economists to expose subjects subconsciously to an idea or thought. Participants are then expected to change their behavior according to the 'prime' they were subjected to. For example,
in a famous paper by Bargh, Chen and Burrows (1996), subjects were shown to walk out of a room
more slowly after having been exposed to words related to aging and the elderly. The female
objectification parameter λ, as specified in the theory section, will thus be approximated by exposure to an objectification prime.
Control Variables
The theory states that the probability that person A engages in competition depends on how he or
she perceives the ability and confidence of the competitor, σ. We follow previous authors and limit
the variance of σ by creating pairs of 'competitors', matching experiment participants from the treatment and control groups randomly and anonymously. Participants observed the composition of
each group at the start of the experiment, but once the groups were separated, they each had equal
probability of being matched with any of the participants in the other group. If no person in either
group has any specialized knowledge of the ability and confidence of any of the participants in the
other group, this implies that σ is constant and should not differ across participants. Since our
sample population consists of students who are likely to know each other outside of the classroom, it is
likely that the participants do have some knowledge of each other’s abilities. Moreover, the student
population is heterogeneous regarding previous experience with different activities. In order to limit
the influence of these possible sources of bias in σ, a physical activity was chosen with a minimal
potential for previous experience. This is explained further in the Experiment section. Given this
experimental design, we can assume that σ is constant. Excluding it from the estimation should
therefore not introduce estimation bias.
Theoretically, a person's willingness to compete partly depends on his or her risk preference ω (Croson and Gneezy 2009). Due to resource constraints it was not possible to collect baseline information about participants' risk preferences, and we therefore cannot control for ω in the estimation. (Collecting baseline risk-preference information during the experiment would not have been a sensible option: questions concerning risk introduced before the payment-scheme choice could have had a priming effect in themselves, while questions introduced after the prime may have been affected by the prime.) We do not expect this to influence the accuracy of the estimation for two reasons. Firstly, Niederle and Vesterlund (2007) find that risk preferences explain only a negligible part of the competition gender gap. Secondly, the fact that we divide women into the treatment and control groups randomly limits the probability that women in the two groups differ significantly in terms of risk preferences.

In order to control for the participants' general taste for competition, we include a variable for whether the participant is currently on a sports team. Participants' Grade Point Average (GPA) would be a reasonable means to control for ability, but since several participants did not indicate their GPA, we do not include it in the estimation. We control for the student's age, the family's societal class, the parents' level of education, and whether the student is a scholarship recipient, in line with previous literature (Wieland and Sarin 2012; Gupta et al. 2013).
Estimation Issues
Our experiments yielded 20 observations, of whom eight were women and twelve were men. Of the
women, three were in the treatment group, and five in the control group. Of the men, seven were in
the treatment group and five in the control group. The small number of observations, and in
particular the small number of women, implies that our results should be interpreted with caution.
The probability of obtaining extreme responses in a group consisting of so few individuals is high. Our results should therefore be considered preliminary, rather than definitive.
In addition, none of the women who participated in the experiment chose to compete. This
implies that both the base rate and the rate under treatment of female competition in our data are zero. For this reason, we cannot adequately answer the research question of primary interest in this study.
We do not have any serious problems with multicollinearity: there is some collinearity between the independent variables that measure socioeconomic status, but no serious collinearity between the explanatory variables of interest, Treatment and Female. Depending on the number of control variables included in the estimation, the error terms of some regressions are heteroskedastic. We account for this by reporting robust standard errors whenever the Breusch-Pagan test indicates heteroskedasticity at the 15 percent significance level or better.

Identification Strategy and Estimation Equation

We identify the effect of female objectification on willingness to compete by implementing a regression among both men and women. The explanatory variable of interest is the interaction term between a dummy variable equal to one if the participant was in the treatment group, Treatment, and a dummy variable equal to one if the participant is female, Female. Given the specified control variables and the identification strategy, a linear estimation of (2) is specified as follows:

θᵢ = β₀ + β₁ Treatmentᵢ + β₂ Femaleᵢ + β₃ Treatmentᵢ × Femaleᵢ + β₄ Ageᵢ + β₅ Sportᵢ + β₆ Societal Classᵢ + β₇ Parents' Educationᵢ + β₈ Scholarshipᵢ + εᵢ    (3)
The majority of the competition literature estimates the probability of selecting the competitive
option by implementing logit or probit models. Given the small number of observations in our data,
we are not able to implement either of those models. Instead we use Ordinary Least Squares (OLS)
to estimate (3).
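As a sketch of how (3) can be estimated as a linear probability model by OLS, the snippet below builds a design matrix with the Treatment × Female interaction and solves the least-squares problem. The data are synthetic placeholders with far more observations than our experiment, and only a subset of the controls is included; none of this reproduces our actual estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000  # synthetic sample, far larger than the experiment's 20 observations
treatment = rng.integers(0, 2, n).astype(float)
female = rng.integers(0, 2, n).astype(float)
age = rng.integers(18, 23, n).astype(float)

# Simulated choices: women compete less, and treated ("primed") women less still.
latent = (0.5 + 0.1 * treatment - 0.3 * female
          - 0.2 * treatment * female + rng.normal(0.0, 0.3, n))
compete = (latent > 0.5).astype(float)  # 1 = chose the tournament scheme

# Design matrix: intercept, Treatment, Female, Treatment x Female, Age.
X = np.column_stack([np.ones(n), treatment, female, treatment * female, age])
beta, *_ = np.linalg.lstsq(X, compete, rcond=None)
beta3 = beta[3]  # coefficient on the interaction term of interest
```

Under the simulated data-generating process above, beta3 comes out negative, the sign our hypothesis predicts for the interaction term.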
IV. Experiment
In order to assess different people's preferences for competition, we follow previous experimental literature (e.g., Niederle and Vesterlund 2007; Croson and Gneezy 2009) that evaluates tournament entry decisions by asking experiment participants to choose between a competitive and a noncompetitive payment scheme before performing a task. The noncompetitive payment scheme
awards the participant with a fixed money amount per successful attempt at the task. In contrast, the
competitive payment scheme awards the participants a larger amount of money per successful
attempt at the task, but only if he or she outperforms an (anonymous) competitor. The choice for
the competitive payment scheme is then the measure for engagement in competition.
This experimental design has been used in different configurations. In some cases, the participants can
familiarize themselves with the task before choosing a compensation scheme. This allows
participants full knowledge of their own ability at the task (Niederle and Vesterlund 2007). Due to
resource constraints, we do not implement this experimental design. Instead, we choose a task that
is not likely to be familiar to the participants. This limits the possibility that participants have
different levels of previous exposure to the task. This experimental design is not likely to influence
our results qualitatively, since performance feedback does not explain a large portion of willingness
to compete (Niederle and Vesterlund 2007). Our experiment is conducted with a mixed gender
group, in order to be representative of real world situations where women and men compete in the
same marketplace.
In the experiment, we test whether priming a woman with the idea of female objectification
affects the assessment of her own skill ρ and subsequently affects her likelihood to compete πœƒ. To
prime the treatment group with female objectification, we showed them an excerpt from the Hollywood film 'Indecent Proposal.' In the excerpt, a millionaire offers a man $1 million in exchange for a night with the man's wife. In the scene, the men negotiate the deal, while the woman is left
on the sideline with no agency. The control group is shown a short video clip depicting an
advertisement for a safari lodge. The purpose of the control group's prime is simply to expose them to an environment neutral of gender- or competitiveness-related imagery.
After viewing the video excerpts, each participant was given a questionnaire. The first
question of the questionnaire explained the task and asked the participant to indicate the desired
payment scheme. Thus, the outcome variable was determined directly after the exposure to the
prime.
The task performed in this experiment consisted of throwing a string with two balls attached
to each respective end, across a distance of about ten feet onto a rack with horizontal bars. The
participants were told that they would be allowed to make ten attempts to throw the string with balls
such that it would wrap around one of the horizontal bars. Each time the participant threw the
string with balls such that it remained on a horizontal bar, this would count as one successful
attempt. The participants would then be rewarded with an amount of money according to the terms of their chosen reward scheme. The nature of the task is such that it does not resemble any popular game or sport, which limits potential extreme variations in ability. The participants were matched with an unknown competitor from the other group (treatment or control), which was separated into another room. Participants were matched independently of their reward
schemes. That is, a participant who chose the piece-rate reward scheme could be matched with a
participant who chose the competitive reward scheme. We had an even number of participants in
each group, in each of the two sessions of the experiment.
In most studies, the participants executed the task in a private setting (Niederle and
Vesterlund 2007). Due to resource constraints, we were not able to implement this experimental
design. Instead, participants executed the task in the same room where they had been exposed to the
prime, in front of the other participants in that group (control or treatment). As a result, it is
possible that the last participants to perform the task learned from watching the other participants
perform the task. However, we do not expect this to influence our results, since the participants did
not know the order in which they would be performing the task at the time they made their
competition decision.
The reward scheme choice was between a piece rate scheme and a tournament reward
scheme. The piece rate scheme rewards the participant with $0.50 per successful shot. In the tournament scheme, each successful attempt is rewarded with $1.50 if the participant outperforms the anonymous competitor. The participant receives nothing if his or her number of successful attempts is lower than the competitor's. At a tie, the participant is rewarded at the piece rate. Thus, the participant will choose the competitive payment scheme if δ(ρ, ω) > σ.
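The payoff rules just described can be written out directly. The dollar amounts follow the Experiment section ($0.50 piece rate, $1.50 tournament rate); this is only a restatement of the scheme, not code used in the study:

```python
def piece_rate_payoff(successes, rate=0.50):
    # Paid per successful attempt, independent of the competitor's performance.
    return successes * rate

def tournament_payoff(successes, opponent_successes, rate=1.50, piece_rate=0.50):
    # Paid per successful attempt only when beating the anonymous competitor;
    # a tie falls back to the piece rate, and losing pays nothing.
    if successes > opponent_successes:
        return successes * rate
    if successes == opponent_successes:
        return successes * piece_rate
    return 0.0
```

For example, four successful attempts against an opponent's two pay $2.00 under the piece rate but $6.00 under the tournament scheme, which is why the tournament is attractive exactly when δ(ρ, ω) > σ.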
V. Data
Table I presents summary statistics for participants' age, participation in sports teams and
demographic data. As Table I shows, the experiment yielded a total of 20 participants, of which
eight were women and twelve were men. Of the women, three were randomly assigned to the
treatment group and five to the control group. These numbers are seven and five for the men.
Level of statistical significance is indicated within genders and between control and
treatment groups. As Table I shows, the only difference that is statistically significant is the number
of males from conservative households, with none in the treatment group and 60 percent in the
control group. Both the treatment and control group contain a range of financial backgrounds as
indicated by scholarship reception, though none of the women in the treatment group receives a full
scholarship. Most of the parents have completed an undergraduate or graduate degree, with the
control group having marginally more educated parents than the treatment group. All women in the
treatment group consider their family to be ‘upper middle class’, compared to more variation in the
control group, who on average define themselves as ‘middle class’. The distribution of political
orientation is similar among the women in both groups, though the treatment group contains
relatively more women from moderate as opposed to liberal families. Overall, the participants across
groups come from relatively affluent backgrounds, as is characteristic of a liberal arts college student population.
Payment scheme choices and participants’ self-evaluations are presented in Table II. These
responses were given directly after exposure to the primes and before the demographic questions.
Level of statistical significance is indicated within genders and between control and treatment groups.
None of the women, either in the treatment group or in the control group, chose the tournament
scheme reward structure. Interestingly, 57 percent of the men in the treatment group chose the
competitive payment scheme, compared to 40 percent in the control group, although this difference
is not statistically significant.
A secondary outcome variable was constructed in the form of a survey question, giving
respondents the choice between three different summer internships with varying levels of
competitiveness and monetary reward. The most rewarding internship was the most competitive,
while the least rewarding was the least competitive. All women in the treatment group chose the
medium-level internship, whereas there is more variation among the women in the control group.
The women in the treatment group reported lower levels of self-esteem and self-confidence than the
women in the control group. These differences are significant at the ten percent level.
Lastly, participants were asked two questions regarding their subjective evaluation of
discrimination in the labor market in the US. Participants were asked to rank to what extent men are
favored in the workplace in general, and how likely it is that a man receives a promotion rather than
a woman, given that the two have equal qualifications. As Table II shows, in general, the participants
were of the opinion that there is some discrimination in the labor market. Women in the treatment
group held this opinion more strongly than women in the control group and the difference for
whether men are favored in the labor market in general is significant at the five percent level.
VI. Analysis
Results: Effect of Female Objectification on Women’s Willingness to Compete
Table III presents the results of the reward scheme choice, our main outcome of interest. The
coefficient on the interaction term between the female and treatment variables is negative. This
should not be interpreted as supporting our hypothesis, since none of the women chose the
competitive payment scheme. Being female is correlated with a lower probability of choosing the
competitive payment scheme, which corresponds to the summary statistics presented above.
Column three in Table III shows that, apart from being a woman, only being on a sports team, being of the upper middle class, and having a partial scholarship receive coefficients that are significant at the ten percent level or better when all control variables specified for the model are included. Being on a sports team and being of the upper middle class are associated with a lower probability of choosing the competitive payment scheme. This counters a priori expectations, since
we would expect that people who have a general preference for competition and who have higher
socioeconomic status would be more likely to choose to compete. Having a partial scholarship
affects the probability of engaging in competition negatively. This corresponds with a priori
expectations, since the omitted category is to have no scholarship, and students with some
scholarship would therefore be expected to live in a less well-off household.
Table IV presents the results when the outcome variable indicates which internship position
the participant would choose. A higher value on the internship variable indicates a less competitive
internship. The interaction term between the ‘female’ and ‘treatment’ variables is positive, which
corresponds with the hypothesis. However, this coefficient is not significant. As columns one, two
and three of Table IV show, only having a parent who has a master's degree influences the internship choice significantly. This coefficient is positive, which counters prior expectations, since the omitted category is a parent whose highest level of education is a college degree, and we would expect children of
more highly educated parents to be more willing to compete.
None of the results presented in Tables III and IV changes qualitatively if we include the
participants’ GPA in the model.
Effect of Female Objectification on Factors That May Influence Women’s Willingness to
Compete
The additional questions included in the questionnaire after the prime allow us to investigate
whether female objectification has the potential to influence other factors that may affect women’s
willingness to engage in competition.
It is possible that women's subjective assessment of how likely men are to be favored in the labor market increases when they self-objectify. Tables V and VI show the results
when the outcome variable of interest is the participants’ answers to questions about the extent to
which men are favored in the US workplace in general, and the probability that a man receives a
promotion rather than a woman, given equal credentials. Column one in Table V shows that being
in the objectifying environment affected women’s perception of the extent to which men are
favored in the labor market positively and significantly. Being a man in the treatment group, or a
woman in the control group does not affect the perception of how likely it is that men are favored in
the labor market. However, when including additional controls, as in columns two and three, the positive
effect that self-objectification has on women’s perception of how likely it is that men are favored in
the labor market increases and becomes more significant. When controlling for all other variables,
men in the treatment group were significantly less likely to believe that men were favored in the
labor market.
Table VI shows the results when the outcome variable is whether men are more likely to be
promoted. Here, none of the explanatory variables of interest is significant, independently of the
number of control variables.
Table VII shows the results for when the outcome variable is the participant’s indicated level
of confidence. None of the explanatory variables of interest is significant at least at the ten percent
level. However, the coefficient on female is positive, while the coefficient on the interaction between
female and treatment is negative. This corresponds with a priori expectations, given that being in an objectifying environment would be expected to decrease confidence relative to being in a neutral environment.
These coefficients are significant at the twelve percent level in columns one and two. When restricting the sample to include only the women, and implementing the specifications of columns one and two, the effect of self-objectification increases in both size and significance. This implies that the effect of self-objectification on women's confidence is at least weakly significant when not conditional on socioeconomic status. Table VIII shows that a
similar trend is found on the subjective self-esteem levels of participants. Here, the interaction term
between female and treatment is significant at the 10.1 percent level, when also controlling for age and
whether the participant plays sports.
Robustness
The finding that no women and about half the men in each group chose the competitive option is likely to be evidence of a difference in willingness to participate in this specific activity, rather than in competitive environments in general. Some female participants noted that they assumed they would perform worse than the male participants in the experiment. Regardless, it is necessary to replicate a similar experiment with different activities that include more 'gender-neutral' tasks to acquire more robust results.
The results obtained on the confidence and self-esteem questions after exposure to the prime may be skewed by participants' baseline confidence levels. One source of self-esteem and confidence for students may be their academic performance, as represented by their Grade Point Average (GPA). We tested this by including GPA in the self-confidence and self-esteem regressions. The treatment remained somewhat statistically significant among the women, up until about the 14 percent level (Tables VII and VIII, columns 4 and 5).
VII. Conclusion
This paper set out to explore the potentially harmful effects of female objectification on women’s
willingness to compete, in order to address the sources of the competition gender gap documented
in Western societies. As Niederle and Vesterlund (2011) point out, the fact that high-ability women
opt out of competitions is costly from a societal perspective, as it prevents resources from being allocated where they are most productive. Increased knowledge about the relationship between our
social environment and competition preferences can provide insights into how we could create more
socially optimal outcomes.
Our experiment provides a foundation for future research concerning the effect of sexism as
an overarching concept, and female objectification specifically, on women’s competitive choices.
The results are preliminary, since the number of observations obtained in this particular experiment
does not provide sufficient statistical power to draw definitive conclusions.
Some tentative conclusions can be drawn from the available data. The finding that none of
the women chose the competitive option, as opposed to around half of the men, is potentially
indicative of a lower competitive inclination among women in general. However, the nature of the
particular activity chosen in the experiment may have biased this result. Since the baseline for
women to enter into competition was too low, we were not able to identify an effect of the prime on
this outcome variable.
Our prime appears to have some validity, as indicated by the survey questions regarding gender issues in the workplace. Women who were exposed to the objectification prime rated sex-based discrimination in the labor market as significantly more severe. This suggests that there is potential for future research using similar primes.
References
Andersen, S., Ertac, S., Gneezy, U., List J.A. and Maximiano S. “Gender, Competitiveness, and
Socialization at a Young Age: Evidence from a Matrilineal and a Patriarchal Society”. The
Review of Economics and Statistics. 95(4), 2013, 1438-1445.
Bargh, J.A., Chen, M. and Burrows, L. "Automaticity of Social Behavior: Direct Effects of Trait Construct and Stereotype Activation on Action." Journal of Personality and Social Psychology. 71(2), 1996: 230-244.
Booth, Alison L, and Patrick Nolen. "Gender Differences in Risk Behaviour: Does Nurture
Matter?" The Economic Journal. 122(558), 2012.
Byrnes, J.P., Miller, D.C. and Schafer, W.D. "Gender Differences in Risk Taking: A Meta-Analysis." Psychological Bulletin. 125(3), 1999: 367-383.
Cadsby, C.B., Servatka, M. and Song, F. "How Competitive Are Female Professionals? A Tale of Identity Conflict." Journal of Economic Behavior and Organization. 92, 2013: 284-303.
Datta Gupta, N., Poulsen, A. and Villeval, M. "Gender Matching and Competitiveness:
Experimental Evidence." Economic Inquiry. 51(1), 2013: 816-835.
Dreber, A., Essen, E. and Ranehill, E. "Outrunning the Gender Gap-Boys and Girls Compete
Equally." Experimental Economics. 14(4), 2011: 567-582.
Eagly, A.H. and Carli, L.L. "Women and the Labyrinth of Leadership." Harvard Business Review. 85(9), 2007.
Fredrickson, B.L. and Roberts, T.-A. "Objectification Theory: Toward Understanding Women's Lived Experiences and Mental Health Risks." Psychology of Women Quarterly. 21(2), 1997: 173-206.
Gay, R.K. and Castano, E. "My Mind or My Body: The Impact of State and Trait Objectification on Women's Cognitive Resources." European Journal of Social Psychology. 40, 2010: 695-703.
Croson, R. and Gneezy, U. "Gender Differences in Preferences." Journal of Economic Literature. 47(2), 2009: 448-474.
Gneezy, U., Leonard, K. L., and List, J. A. “Gender Differences in Competition: Evidence from a
Matrilineal and a Patriarchal Society.” Econometrica, 77(5), 2009, 1637–64.
Gneezy, U. and Rustichini, A. "Gender and Competition at a Young Age." American Economic Review (Papers and Proceedings). 94(2), 2004: 377-381.
Niederle, M. and Vesterlund, L. "Do Women Shy Away from Competition? Do Men Compete Too Much?" Quarterly Journal of Economics. 122(3), 2007: 1067-1101.
Niederle, M. and Vesterlund, L. "Gender and Competition." Annual Review of Economics. 3, 2011: 601-630.
Samak, A.C. "Is There a Gender Gap in Preschoolers' Competitiveness? An Experiment in the U.S." Journal of Economic Behavior and Organization. 92, 2013: 22-31.
Szymanski, D. M. and Henning, S.L. “The Role of Self-Objectification in Women’s Depression: A
Test of Objectification Theory.” Sex Roles. 56, 2007: 45-53
Wieland, A, and Sarin, R. "Domain Specificity of Sex Differences in Competition." Journal of
Economic Behavior and Organization. 83(1), 2012: 151-157.
Quinn, D. M., Kallen, R.W., Twenge, J.M. and Fredrickson, B.L. “The Distributive Effect of SelfObjectification on Performance.” Psychology of Women Quarterly. 30, 2006: 59-64.
TABLE I
Demographic Information
Experiment Data, Mean (Std. Dev.)

|  | Prime Pooled | Prime Women | Prime Men | Control Pooled | Control Women | Control Men |
| --- | --- | --- | --- | --- | --- | --- |
| Age | 19.2 (0.92) | 20 (1.00) | 18.9 (0.69) | 18.8 (0.63) | 19 (0.71) | 18.6 (0.55) |
| Sport | 0.4 (0.52) | 0.00 (0.00) | 0.57 (0.53) | 0.30 (0.48) | 0.2 (0.45) | 0.4 (0.55) |
| Scholarship: No | 0.2 (0.42) | 0.33 (0.58) | 0.14 (0.38) | 0.2 (0.42) | 0.2 (0.45) | 0.2 (0.45) |
| Scholarship: Partial | 0.5 (0.53) | 0.67 (0.58) | 0.43 (0.53) | 0.7 (0.48) | 0.6 (0.55) | 0.8 (0.45) |
| Scholarship: Full | 0.3 (0.48) | 0 (0.00) | 0.43 (0.53) | 0.1 (0.32) | 0.2 (0.45) | 0 (0.00) |
| Parent Highest Education: High School | 0.1 (0.32) | 0 (0.00) | 0.14 (0.38) | 0 (0.00) | 0.0 (0.00) | 0 (0.00) |
| Parent Highest Education: College | 0.4 (0.52) | 0.67 (0.58) | 0.29 (0.49) | 0.7 (0.48) | 0.8 (0.45) | 0.6 (0.55) |
| Parent Highest Education: Master | 0.4 (0.52) | 0.33 (0.58) | 0.43 (0.53) | 0.1 (0.32) | 0 (0.00) | 0.2 (0.45) |
| Parent Highest Education: PhD | 0.2 (0.42) | 0 (0.00) | 0.29 (0.49) | 0.2 (0.42) | 0.2 (0.45) | 0.2 (0.45) |
| Family Social Class: Poorest 10% | 0 (0.00) | 0 (0.00) | 0 (0.00) | 0 (0.00) | 0 (0.00) | 0 (0.00) |
| Family Social Class: Lower middle class | 0 (0.00) | 0 (0.00) | 0 (0.00) | 0.2 (0.42) | 0.2 (0.45) | 0.2 (0.45) |
| Family Social Class: Middle class | 0.4 (0.52) | 0 (0.00) | 0.57 (0.53) | 0.4 (0.52) | 0.4 (0.55) | 0.4 (0.55) |
| Family Social Class: Upper middle class | 0.6 (0.52) | 1 (0.00) | 0.43 (0.53) | 0.4 (0.52) | 0.4 (0.55) | 0.4 (0.55) |
| Family Social Class: Richest 10% | 0 (0.00) | 0 (0.00) | 0 (0.00) | 0 (0.00) | 0 (0.00) | 0 (0.00) |
| Family Political Orientation: Liberal | 0.70 (0.48) | 0.33 (0.58) | 0.86 (0.38) | 0.50 (0.53) | 0.60 (0.55) | 0.40 (0.55) |
| Family Political Orientation: Moderate | 0.30 (0.48) | 0.67 (0.58) | 0.14 (0.38) | 0.20 (0.42) | 0.40 (0.55) | 0.00 (0.00) |
| Family Political Orientation: Conservative | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00)** | 0.30 (0.48) | 0.00 (0.00) | 0.60 (0.55)** |
| N | 10 | 3 | 7 | 10 | 5 | 5 |

Note: Scholarship denotes the college scholarship group the individual is closest to; Parent Highest Education denotes the highest level of education received by either parent; Family Social Class is a subjective measure of the social class the participant perceives him- or herself to be in; Family Political Orientation denotes the political orientation with which the participant considers his or her family most closely affiliated. ***, **, and * indicate whether the group mean difference, within each gender specifically, is significant at the p<0.01, p<0.05, or p<0.1 level.
TABLE II
Experiment Choices
Experiment Data, Mean (Std. Dev.)

|  | Prime Pooled | Prime Women | Prime Men | Control Pooled | Control Women | Control Men |
| --- | --- | --- | --- | --- | --- | --- |
| Tournament Scheme | 0.40 (0.52) | 0.00 (0.00) | 0.57 (0.53) | 0.20 (0.42) | 0.00 (0.00) | 0.40 (0.55) |
| Competition Question: |  |  |  |  |  |  |
| Non-Competitive Internship | 0.20 (0.42) | 0.00 (0.00) | 0.29 (0.49) | 0.20 (0.42) | 0.40 (0.55) | 0.00 (0.00) |
| Medium-Competitive Internship | 0.80 (0.42) | 1.00** (0.00) | 0.71 (0.49) | 0.50 (0.53) | 0.20 (0.45) | 0.80 (0.45) |
| Competitive Internship | 0.00 (0.00) | 0.00 (0.00) | 0.00 (0.00) | 0.30 (0.48) | 0.40 (0.55) | 0.20** (0.45) |
| Self Confidence (1-10) | 7.20 (1.81) | 6.67* (1.53) | 7.43 (1.99) | 7.10 (1.59) | 8.00* (0.00) | 6.20 (1.92) |
| Self Esteem (1-10) | 7.20 (1.69) | 6.67* (0.58) | 7.43 (1.99) | 7.00 (1.69) | 7.60* (0.55) | 6.40 (2.3) |
| Men favored in workplace (1-4) | 3.10 (0.57) | 3.67** (0.58) | 2.86 (0.38) | 3.10 (0.32) | 3.00** (0.00) | 3.20 (0.45) |
| Men more easily promoted (1-4) | 2.67 (0.71) | 3.50 (0.71) | 2.43 (0.53) | 2.50 (0.85) | 2.60 (0.89) | 2.40 (0.89) |
| N | 10 | 3 | 7 | 10 | 5 | 5 |

Note: Tournament Scheme denotes whether the individual enrolled in the tournament reward scheme; Competition Question denotes an internship picked by the individual based on difficulty of selection and magnitude of reward; Self Confidence is a subjective measure of self confidence where 1 is lowest and 10 is highest; Self Esteem is a subjective measure of self esteem where 1 is lowest and 10 is highest; Men favored in workplace is a subjective measure of the individuals' perceived sexism in the workplace, where 1 is least and 4 is most; Men more easily promoted is a comparable measure based on work promotions. ***, **, and * indicate whether the group mean difference, within each gender specifically, is significant at the p<0.01, p<0.05, or p<0.1 level.
Table III
Probability to Compete

Reward Scheme Choice:

| Variables | (1) | (2) | (3) |
| --- | --- | --- | --- |
| Treatment | 0.171 (0.322) | 0.226 (0.304) | 0.345 (0.241) |
| Female | -0.4 (0.245) | -0.454* (0.251) | -0.601** (0.244) |
| Treatment*Female | -0.171 (0.322) | -0.272 (0.319) | -0.497 (0.415) |
| Age |  | -0.014 (0.166) | 0.169 (0.154) |
| Part of Sports' Team |  | -0.299 (0.269) | -0.448* (0.236) |
| Middle Class |  |  | -0.613 (0.332) |
| Upper Middle Class |  |  | -0.793* (0.421) |
| Parent's Education: Masters |  |  | -0.049 (0.259) |
| Parent's Education: Ph.D. |  |  | -0.473 (0.330) |
| Scholarship: Partial |  |  | -0.601** (0.245) |
| Scholarship: Full |  |  | -0.429 (0.411) |
| Constant | 0.4 (0.245) | 0.784 (3.161) | -1.415 (2.736) |
| Observations | 20 | 20 | 20 |
| R-squared | 0.306 | 0.382 | 0.768 |
Note: Standard errors in parentheses; *** p<0.01, ** p<0.05, * p<0.1.
Robust standard errors for columns (1) and (2); the standard errors in
column (3) are not robust to heteroskedasticity.
Table IV
Internship

| Variables | (1) | (2) | (3) |
| --- | --- | --- | --- |
| Treatment | -0.486 (0.365) | -0.408 (0.389) | -0.405 (0.405) |
| Female | -0.2 (0.395) | -0.249 (0.418) | 0.109 (0.410) |
| Treatment*Female | 0.486 (0.584) | 0.395 (0.629) | 0.138 (0.699) |
| Age |  | -0.06 (0.237) | -0.125 (0.259) |
| Part of Sports' Team |  | -0.365 (0.349) | -0.418 (0.398) |
| Middle Class |  |  | 0.972 (0.559) |
| Upper Middle Class |  |  | 0.429 (0.708) |
| Parent's Education: Masters |  |  | 0.866* (0.436) |
| Parent's Education: Ph.D. |  |  | 0.163 (0.556) |
| Scholarship: Partial |  |  | -0.107 (0.412) |
| Scholarship: Full |  |  | -0.952 (0.692) |
| Constant | 2.200*** (0.279) | 3.453 (4.462) | 4.009 (4.608) |
| Observations | 20 | 20 | 20 |
| R-squared | 0.104 | 0.169 | 0.602 |
Note: Standard errors in parentheses *** p<0.01, ** p<0.05, * p<0.1
Table V
Men Are Favored in the Labor Market

| Variables | (1) | (2) | (3) |
| --- | --- | --- | --- |
| Treatment | -0.343 (0.223) | -0.390* (0.208) | -0.537** (0.193) |
| Female | -0.2 (0.241) | -0.325 (0.224) | -0.318 (0.196) |
| Treatment*Female | 1.010** (0.357) | 0.777** (0.336) | 1.342*** (0.333) |
| Age |  | 0.258* (0.127) | 0.243* (0.123) |
| Part of Sports' Team |  | -0.11 (0.187) | -0.144 (0.189) |
| Middle Class |  |  | -0.127 (0.267) |
| Upper Middle Class |  |  | -0.49 (0.337) |
| Parent's Education: Masters |  |  | 0.188 (0.208) |
| Parent's Education: Ph.D. |  |  | 0.779** (0.265) |
| Scholarship: Partial |  |  | 0.328 (0.196) |
| Scholarship: Full |  |  | 0.478 (0.330) |
| Constant | 3.200*** (0.170) | -1.553 (2.385) | -1.467 (2.195) |
| Observations | 20 | 20 | 20 |
| R-squared | 0.388 | 0.566 | 0.835 |
Note: Standard errors in parentheses *** p<0.01, ** p<0.05, * p<0.1
Table VI
Men Are More Likely to be Promoted

| Variables | (1) | (2) | (3) |
| --- | --- | --- | --- |
| Treatment | 0.029 (0.444) | -0.047 (0.459) | -0.674 (0.435) |
| Female | 0.2 (0.479) | 0.02 (0.494) | -0.13 (0.438) |
| Treatment*Female | 0.871 (0.774) | 0.346 (0.855) | 1.441 (0.863) |
| Age |  | 0.383 (0.313) | -0.07 (0.319) |
| Part of Sports' Team |  | -0.132 (0.415) | -0.361 (0.424) |
| Middle Class |  |  | 0.939 (0.605) |
| Upper Middle Class |  |  | 1.372 (0.788) |
| Parent's Education: Masters |  |  | -0.158 (0.532) |
| Parent's Education: Ph.D. |  |  | 0.403 (0.635) |
| Scholarship: Partial |  |  | 0.662 (0.515) |
| Scholarship: Full |  |  | 1.936** (0.748) |
| Constant | 2.400*** (0.339) | -4.678 (5.880) | 2.352 (5.844) |
| Observations | 19 | 19 | 19 |
| R-squared | 0.19 | 0.302 | 0.741 |
Note: Standard errors in parentheses *** p<0.01, ** p<0.05, * p<0.1
Table VII
Confidence

| Variables | (1) | (2) | (3) | (4) | (5) |
| --- | --- | --- | --- | --- | --- |
| Treatment | 1.229 (0.962) | 0.865 (0.989) | 0.921 (1.306) | 0.788 (2.310) | 1.178 (1.170) |
| Female | 1.8 (1.039) | 1.739 (1.064) | 1.608 (1.323) | 1.785 (1.599) | 1.914* (0.909) |
| Treatment*Female | -2.562 (1.538) | -2.676 (1.601) | -2.261 (2.252) | -2.136 (2.879) | -3.354** (1.457) |
| Age |  | 0.694 (0.603) | 0.891 (0.835) | 0.996 (1.307) |  |
| Part of Sports' Team |  | 1.081 (0.888) | 0.143 (1.282) | -0.0638 (1.231) |  |
| Middle Class |  |  | -0.746 (1.804) | -1.16 (2.980) |  |
| Upper Middle Class |  |  | -2.297 (2.283) | -2.718 (3.000) |  |
| Parent's Education: Masters |  |  | 0.806 (1.407) | 1.353 (2.710) |  |
| Parent's Education: Ph.D. |  |  | 0.138 (1.792) | 0.561 (2.713) |  |
| Scholarship: Partial |  |  | -0.062 (1.328) | -0.335 (4.697) |  |
| Scholarship: Full |  |  | 0.066 (2.230) | -0.159 (3.608) |  |
| GPA |  |  |  | -0.616 (2.935) | -0.845 (1.068) |
| Constant | 6.200*** (0.735) | -7.135 (11.354) | -9.347 (14.854) | -8.85 (33.12) | 9.064** (3.395) |
| Observations | 20 | 20 | 20 | 18 | 18 |
| R-squared | 0.178 | 0.288 | 0.453 | 0.447 | 0.244 |
Note: Standard errors in parentheses *** p<0.01, ** p<0.05, * p<0.1
Table VIII
Self Esteem

| Variables | (1) | (2) | (3) | (4) | (5) |
| --- | --- | --- | --- | --- | --- |
| Treatment | 0.869 (1.290) | 0.876 (1.395) | 1.415 (1.648) | 1.03 (2.163) | 1.029 (1.350) |
| Female | 1.2 (1.058) | 1.136 (1.247) | 0.696 (1.103) | 1.804 (1.466) | 1.346 (1.187) |
| Treatment*Female | -1.962 (1.348) | -2.08 (1.187) | -1.802 (2.199) | -1.433 (2.766) | -2.275 (1.477) |
| Age |  | 0.356 (0.572) | 0.737 (0.824) | 1.472 (1.236) |  |
| Part of Sports' Team |  | 0.395 (0.889) | -0.624 (1.019) | -0.674 (1.129) |  |
| Middle Class |  |  | -1.848* (0.907) | 0.341 (2.563) |  |
| Upper Middle Class |  |  | -3.001 (1.823) | -4.6 (2.908) |  |
| Parent's Education: Masters |  |  | 0.574 (0.991) | 2.157 (2.416) |  |
| Parent's Education: Ph.D. |  |  | -0.464 (1.901) | 0.393 (2.554) |  |
| Scholarship: Partial |  |  | -0.802 (1.631) | -0.916 (1.858) |  |
| Scholarship: Full |  |  | 0.195 (2.392) | -3.085 (4.129) |  |
| GPA |  |  |  | 2.045 (2.578) | 0.0273 (1.112) |
| Constant | 6.400*** (1.03) | -0.387 (10.6) | -4.506 (13.95) | -25.71 (26.7) | 6.307* (3.431) |
| Observations | 20 | 20 | 20 | 18 | 18 |
| R-squared | 0.097 | 0.119 | 0.422 | 0.511 | 0.108 |
Note: Standard errors in parentheses *** p<0.01, ** p<0.05, * p<0.1
Airline performance: Taking off after 30 years on the tarmac
Kaspar Mueller ’15
Introduction to Econometrics
Why are airline profits at record highs after decades of sustained losses? In this paper, I complete an
econometric analysis of factors influencing airline operating margins using panel data on U.S. airlines
from 2002 to 2014 with fixed- and random-effects models. Previous literature has examined the
industry as a contestable market; I explore the effects of various industry performance measures on
average airline ticket price to examine the similarity to a contestable market structure. The results
indicate that Bain’s structure-conduct-performance theory is much more apt at explaining industry
structure, providing evidence in support of the idea that operating margins and average ticket prices
increase as the industry’s concentration ratio increases. Additionally, the model provides support for
the hypotheses that increases in industry demand and airline load factor increase operating margins,
while increases in jet fuel price decrease operating margins. The findings surrounding operating
margins are in agreement with anecdotal evidence appearing in the media, and the industry structure
findings are valuable in the context of recent industry consolidations.
I. Introduction
Why are US domestic passenger airline profits at record highs after decades of sustained losses? In
Q1 2014, all major US domestic airlines besides United posted record profits, and in Q2 2014,
United, American, Southwest and Jetblue all posted record high profits (Johnnson, Schlangenstein,
and Sasso, 2014 and Jansen, 2014). Third quarter financial results showed a similar pattern, with all
five of the largest US airlines showing strong profits (Carey and Nicas, 2014). The airline industry in
the United States was deregulated in 1978, but airline profits have only recently begun to improve, with
the industry accumulating $59 billion in losses from 1979 through 2009 (Borenstein, 2011).
What caused this sharp improvement? In the late 2000s, research pointed to sustained low
profitability due to increased price sensitivity surrounding air travel demand, changes in marginal
cost favoring direct flights, post 9/11 demand downturn, a large cost differential between legacy
airlines and low-cost carriers, and low customer satisfaction levels (Borenstein, 2011). Recent
anecdotal evidence suggests that decreasing fuel costs and strong demand account for airlines'
improved performance, and even led airlines to indicate that they would increase capacity in the next
year (Carey and Nicas, 2014 and Dastin, 2014). I will analyze these factors and explore others to
discover what influences airline operating margins among 15 U.S. airlines (Appendix A) between
2002 and 2014. I will also examine how these factors impact average airfares to explore the industry
structure.
75
This paper is divided into sections covering previous literature (Section II), explaining the
conceptual models (Section III), describing the ideal (Section IV) and actual data (Section V),
depicting the results of the estimated models (Section VI), and the conclusion (Section VII).
II. Literature Review
There are two primary types of literature that will be the most useful in the analysis of the airline
industry. The first grouping of literature provides information surrounding the structure of the
industry, which will prove helpful in determining whether the industry structure has changed since
airlines became newly profitable. The second grouping of literature provides empirical evidence for
the industry’s performance in the past. Much of the literature surrounding the U.S. airline industry
focuses on the period since the industry’s deregulation in 1978.
II. A. Literature Review: Industry Structure
Basic U.S. airline industry research says that the industry is a contestable market. The claim
that the industry fits stipulations of a contestable market stems in part from Baumol, Panzar, and
Willig’s (1982) book on the topic. The implications of this classification most relevant to this analysis
are that predatory pricing cannot be used as a competitive force and that firms traditionally price at
marginal cost when in a contestable market. This means that as market concentration increases, we
should see higher operating margins due to pricing power. While these arguments are certainly
compelling, in 1992, Borenstein found that adding competition to individual airline routes did, in
fact, decrease prices, indicating that airlines had not previously priced tickets at marginal cost and
that the market may not have been contestable (Borenstein, 1992). However, as the market structure
has changed since this analysis, it may have become closer to a contestable market (Borenstein,
1992).
Whinston and Collins (1992) performed a case study on Peoples Express, an airline that
expanded into 24 new city-pairs in 1984 and 1985, to examine the contestability of the market. They
discovered that the market did not conform to a contestable structure, as the airlines on the routes
Peoples Express entered lost millions of dollars in value following the entry. Whinston and Collins
use a CAPM model that examines value in the context of the stock market for their analysis (beyond
the scope of understanding of this course). One shortcoming of this approach is that today’s
financial markets may be more reflective of short-term effects rather than long-term implications of
market entry, but the relationship is still valuable in that it questions the industry structure.
76
Bain’s 1950 book on structure-conduct-performance theory is also applicable to the airline
industry. The basis of this theory is that market structure leads to a particular competitive nature in a
market, and performance follows from that. Specifically, Bain states that market structure (in terms
of firm concentration) and competitive behavior (collusion) should determine price levels and
profits. One of Bain’s four oligopoly types fits airlines well,
“Chaotic competition or relatively active price rivalry, potentially emergent from
unrecognized interdependence, inconsistent conjectures by rivals. (If chaotic to the point of
persistent losses, it may be argued that this pattern would be temporary, or transitional to
another)” (Bain, p.39).
The airline industry has existed in this chaotic state, with poor profitability persistent, and it may be
undergoing a transitional period to a new structure. This is evidenced by the recent mergers that
have taken place, some of which followed bankruptcies, and the industry’s recent profitability
(Isadore, 2013). In addition to competition, the entry forces in the market are important for firm
behavior, specifically with pricing. Bain argues that a more concentrated industry should exhibit
higher profits, and these industries may also have high selling costs if the barriers to entry are
naturally low. Because the entry for this industry is relatively easy, carriers may be moving towards a
less homogeneous service model to make selling costs high and inhibit quick market entry and exit.
As long as market entry is inexpensive, the stage is set for inefficiency because there is no incentive
for investment in long-run scale improvements.
II. B. Literature Review: Industry Performance
There are a wide array of variables that influence airline costs and operations; I have
identified some of the most important ones for study in this paper. One measure of airline efficiency
is revenue seat miles per available seat mile (also known as load factor), where each revenue seat
mile is a passenger flying one mile, and an available seat mile is one seat flying one mile (Basic
Measurements, n.d.). We would expect that airline profits would increase as this percentage
increases, because as airlines improve their capacity usage, they make more revenue with only slight
increases in costs. On the cost side, jet fuel is crucial, accounting for an average of 34% of carrier
operating costs (Mayerowitz, 2014). Jet fuel price decreases should bring higher margins for airlines.
Lastly, demand is a driver of industry profits. In accordance with Borenstein (2011) and Morrison
and Winston (1989), I model demand as a direct relationship between adjusted average fare price
and revenue seat-miles.
77
The most comprehensive work in the area of airlines’ poor financial performance is
presented by Borenstein (2011), who examines the effects of taxes, fuel costs, demand, and
competition on airline profitability. This analysis took place within the United States from the time
of deregulation (1982) until 2011. Borenstein did not use regression analysis for this paper, which is
a weakness. Instead, he compared his variables of interest over time. His research shows that the
low-costs of small carriers and decreased demand are the primary factors influencing low industry
profitability. Because these factors are unpredictable, he concludes that, “[A]fter more than 30 years,
it seems unlikely that airline losses are due entirely to a series of unfortunate exogenous events
relative to what management and investors should have expected” (Borenstein, 2011, p.12); it has
not taken long for him to be disproven by the profitability the industry now experiences. Borenstein
suggests there are two mechanisms for airlines to improve profitability: bringing their costs in line
with those of low-cost-carriers, or increasing their price premium. Because his data ended in 2011, it
will be useful to examine the phenomenon with more updated data. To analyze industry demand,
Borenstein used the function Q = A_t × P^ε, where Q represents quantity, A_t is the demand factor, P is
the average price, and ε = -1. When the airline industry was deregulated in 1978, Morrison and
Winston (1989) provided suggestions on improving financial performance of the newly deregulated
system. They included the same simple demand function in their analysis.
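This constant-elasticity demand form can be sketched in a few lines; the demand-factor and price values below are illustrative, not figures from the paper.

```python
# Borenstein's simple demand form Q = A_t * P**eps with eps = -1.
# The values of a_t and the prices below are invented for illustration.

def demand(a_t: float, price: float, eps: float = -1.0) -> float:
    """Quantity demanded under Q = A_t * P**eps."""
    return a_t * price ** eps

a_t = 50_000.0
quantities = {p: demand(a_t, p) for p in (200.0, 250.0, 400.0)}
# With eps = -1 (unit elasticity), revenue P * Q equals A_t at every price,
# which is why A_t can be read as a pure demand-shift factor.
revenues = {p: p * q for p, q in quantities.items()}
```

The unit-elasticity assumption is what makes A_t interpretable as a demand shifter: price changes move quantity but leave total spending unchanged.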
Borenstein also discusses the effects of new entrants in the airline industry in terms of low-cost carrier revenue passenger-miles compared to the overall industry. This discussion is important
given that the airline industry is traditionally labeled as a contestable market (Borenstein, 1992).
Borenstein provides a solid basic analysis of airline company performance, but does not incorporate
industrial organization principles, which he argues would be applicable to the industry. The four-firm concentration ratio cited by other researchers (Morrison and Winston, 1989) can provide
further insight on this topic and would be an improvement to his research.
Borenstein and Rose (2007) examine the structure of airline markets more in-depth when
they explore the motivation for deregulating airline markets. They describe contestability as a
situation in which, “Potential competition would discipline firms, forcing them to keep prices at
competitive levels in order to deter new entry” (Borenstein and Rose, p. 44). In practice, however,
they believe that other mechanisms enabled airlines to keep prices high, such as airline dominance in
particular routes and airports. This could be a key factor in airline profitability due to recent industry
consolidation. This analysis cites costs due to delays and airport congestion as one of the largest
78
contributors to airline profitability problems in the past decade (Borenstein and Rose, 2007). Related
to profitability and structure are the massive mergers that occur among airline companies. One
possible motive of these mergers in the recent decade is poor financial performance. Some research
finds that mergers do increase ticket prices, but does not make a statement about how this affects
profitability (Liang, 2013). Liang’s research was limited to one-way flights and performed at the
carrier-route level using a regression analysis, which is important to note when comparing the data in
this paper’s analysis because the data structure is very different. Liang used a standard regression to
examine fare prices, with data from immediately before and after the Delta-Northwest merger, and
found a positive coefficient on load factor.
Blalock et al. (2005) examine the effect of the 9/11 terrorist attacks on airline demand.
This study brings valuable aspects of air travel demand into play that simpler analyses, like
Borenstein’s, leave out. Blalock et al. includes security factors and finds significant effects of security
factors on demand while controlling for price. It is therefore important to note that deriving demand
from ticket price may not be the most accurate in practice, and this could bring some
multicollinearity into play. Additionally, this analysis estimates demand at the carrier-route level,
rather than the carrier level, which provides a more granular sample. The quarter of the year was also
used as a factor in determining demand, which could be valuable given the cyclicality of the industry.
Some of Borenstein’s papers simply pick out particular quarters to analyze in each year, while others
ignore the issue of seasonality.
Bamberger, Carlton, and Neumann (2004) use OLS regressions to determine the effect of
airline alliances on ticket prices and airline traffic. They split these regressions into a series of
“before” and “after” regressions for comparison before and after the Continental/America West
and Northwest/Alaska alliances. The strength of this analysis lies in the granularity of the data: it
examines individual city pairs to see how the Herfindahl-Hirschman index changes before
and after alliances. This study finds that fares drop significantly after alliances were implemented,
and traffic increased on alliance routes as well.
Fuel price hedging research by Carter, Rogers, and Simpkins (2003) articulates a significant
negative relationship between jet fuel prices and operating cash flows. This analysis uses a standard
OLS regression of quarterly airline operating incomes with jet fuel prices as the explanatory variable.
One weakness of the analysis is that it only uses industry averages for jet fuel and cash flow, rather
79
than individual airline costs and expenses. This relies on the assumption that fuel prices affect all
airlines in the same way.
Behn and Reiley (1999) analyzed non-financial indicators of airline financial performance,
postulating that load factor, market share, and customer complaints would affect quarterly
profitability. Their linear regression indicated that both market share and load factor are positively
correlated with profitability, while customer complaints have a negative relationship. The goal of
Behn and Reiley’s analysis was to produce a predictive model for airline net income, and based on
the significant F-statistic and R2 values between .41 and .71, they concluded that their model would
be adequate for predicting airline financial performance. One limitation on this model is that it does
not take costs like fuel into account – a price shock to fuel would not be captured by any of the
model measures and the model would become inaccurate. Overall, this literature gives us a good
base from which to start examining characteristics of the industry in the past few years.
III. Conceptual Model
I selected airline operating margin as the dependent variable because it eliminates some of the
irregularities and non-operational factors that feed into net income, and it measures the airline's
performance in its core competency. The conceptual model used in this analysis is that airline
operating margins are a function of revenue seat miles per available seat miles on the per-airline level
(RSM per ASM, known in the industry as load factor), jet fuel prices, demand (calculated as revenue
seat miles times average fare price for the overall industry), and the top 4-firm revenue
concentration ratio. To account for seasonality, this analysis will be executed with 4-quarter moving
averages as well. The equation to be analyzed is as follows:
π‘‚π‘π‘’π‘Ÿπ‘Žπ‘‘π‘–π‘›π‘” π‘€π‘Žπ‘Ÿπ‘”π‘–π‘› = 𝛽1 × π‘…π‘†π‘€ π‘π‘’π‘Ÿ 𝐴𝑆𝑀 + 𝛽2 × π½π‘’π‘‘ 𝐹𝑒𝑒𝑙 π‘ƒπ‘Ÿπ‘–π‘π‘’ + 𝛽3 × π·π‘’π‘šπ‘Žπ‘›π‘‘ +
𝛽4 × πΆπ‘œπ‘›π‘π‘’π‘›π‘‘π‘Ÿπ‘Žπ‘‘π‘–π‘œπ‘› + πœ€π‘– + πœ‡π‘–π‘—
Table 1: Expected Results, Operating Margin

| Variable | Sign |
| --- | --- |
| RSM per ASM | + |
| Jet Fuel Price | - |
| Demand | + |
| 4-Firm Revenue Concentration | + |
In addition to modeling operating margins, I will also examine airline ticket prices as a
function of load factor (RSM per ASM), industry revenue concentration, and jet fuel prices. This
relationship will be modeled as follows:
π‘‡π‘–π‘π‘˜π‘’π‘‘ π‘π‘Ÿπ‘–π‘π‘’ = 𝛽1 × π‘…π‘†π‘€ π‘π‘’π‘Ÿ 𝐴𝑆𝑀 + 𝛽2 × π½π‘’π‘‘ 𝐹𝑒𝑒𝑙 π‘ƒπ‘Ÿπ‘–π‘π‘’ + 𝛽3 × πΆπ‘œπ‘›π‘π‘’π‘›π‘‘π‘Ÿπ‘Žπ‘‘π‘–π‘œπ‘› + πœ€π‘– + πœ‡π‘–π‘—
Table 2: Expected Results, Ticket Price

| Variable | Sign |
| --- | --- |
| RSM per ASM | + |
| Jet Fuel Price | + |
| 4-Firm Revenue Concentration | + |
IV. Ideal data
Conceptually, operating margin is a good indicator of firm performance, so it is largely an adequate
measurement. The issue with using profits comes when airlines do restructuring or merger activities.
This often results in huge losses, sometimes several orders of magnitude greater than average. This
issue is mostly resolved by using operating expenses and revenues. The problem with using
operating margin is that it excludes taxation from our analysis, which may be a factor that
contributes to profitability. The concentration ratio also has some issues. Ideally, the data would
measure precisely the dominance of firms in the industry. This ratio is only a rough approximation
of a factor that, in reality, includes minute details such as dominance on particular routes. Some
studies take this into account, but it was not possible for this analysis. Demand is also not an ideal
measurement, because it does not take the variance in price into account. Ideally, we would have a
dollar value per mile that customers are willing to pay for air travel. Load factor is a fairly adequate
measurement, but could be improved by incorporating the distance of flights into the analysis.
Lastly, jet fuel is a great indicator, but segmenting jet fuel prices by airline would be ideal since
airlines hedge fuel differently and thus pay different prices. The measure of fare price is adequate,
but the ideal measure would include fare price variance, as the mean does not tell the whole story of
fare competition over the past 12 years.
V. Actual Data
While these data are far from ideal, they do provide good insights into the concepts previously
discussed. I selected 15 of the largest US airlines over the past 12 years for analysis on a quarterly
basis. Data on airline seat miles, revenues, and expenses come from the Bureau of Transportation
Statistics (BTS), which keeps a wide range of data on individual airlines. The operating margin was
calculated from these data as follows:
π‘‚π‘π‘’π‘Ÿπ‘Žπ‘‘π‘–π‘›π‘” π‘€π‘Žπ‘Ÿπ‘”π‘–π‘› =
π‘‚π‘π‘’π‘Ÿπ‘Žπ‘‘π‘–π‘›π‘” 𝑅𝑒𝑣𝑒𝑛𝑒𝑒𝑠 − π‘‚π‘π‘’π‘Ÿπ‘Žπ‘‘π‘–π‘›π‘” 𝐸π‘₯𝑝𝑒𝑛𝑠𝑒𝑠
π‘‚π‘π‘’π‘Ÿπ‘Žπ‘‘π‘–π‘›π‘” 𝑅𝑒𝑣𝑒𝑛𝑒𝑒𝑠
The 4-firm revenue concentration ratio was calculated as the portion of airline revenues made up by
the top four airlines, by sales, in each quarter. Load factor was calculated as revenue seat miles per
available seat mile. Data on average ticket prices also came from the BTS website. Jet fuel prices
came from the U.S. Energy Information Administration website. The air carriers selected for study
were selected with help from the MIT Airline Data project, and the ideas for the explanatory
variables came from the website in conjunction with the previous literature. One important feature
of the data is which of the variables are industry averages, and which variables are measured at the
individual carrier level. Table 3 details the specificity levels of each variable. To account for
seasonality, I calculated a 4-quarter moving average for both jet fuel prices and operating margin.
For the fare price model, I calculated an average load factor across airlines in each quarter because I
did not have my dependent variable on the carrier level.
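The variable construction described above can be sketched as follows; all figures are invented for illustration, and real inputs would come from the BTS and EIA series.

```python
# Toy illustration of the variable construction: operating margin, load
# factor, the 4-firm revenue concentration ratio, and a trailing 4-quarter
# moving average used to smooth seasonality. All numbers are made up.

def operating_margin(op_rev: float, op_exp: float) -> float:
    """(Operating Revenues - Operating Expenses) / Operating Revenues."""
    return (op_rev - op_exp) / op_rev

def load_factor(rsm: float, asm: float) -> float:
    """Revenue seat miles per available seat mile."""
    return rsm / asm

def moving_average_4q(series):
    """Trailing 4-quarter moving average (defined from the 4th quarter on)."""
    return [sum(series[i - 3:i + 1]) / 4 for i in range(3, len(series))]

def concentration_4firm(revenues_by_airline):
    """Share of total quarterly revenue earned by the four largest carriers."""
    total = sum(revenues_by_airline)
    top4 = sum(sorted(revenues_by_airline, reverse=True)[:4])
    return top4 / total

# One airline's toy quarterly figures (millions of dollars):
rev = [100.0, 110.0, 120.0, 115.0]
exp = [95.0, 100.0, 105.0, 104.0]
margins = [operating_margin(r, e) for r, e in zip(rev, exp)]
smoothed = moving_average_4q(margins)   # one value, for the fourth quarter
# Toy industry with six carriers:
share = concentration_4firm([40.0, 25.0, 15.0, 10.0, 6.0, 4.0])
```

Note that the moving average sacrifices the first three quarters of each series, which is the cost of smoothing out seasonality this way.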
Table 3: Variable Specificity

| Variable | Specificity Level |
| --- | --- |
| Operating margin | Airline |
| Jet fuel price | Commodity market |
| Revenue concentration | Industry |
| Load factor (RSM per ASM) | Airline |
| Average fare | Industry |
VI. A. Results: Operating Margin
The results of the analysis yielded the expected coefficients, according to the theoretical predictions.
Because panel data are at hand, I checked the fixed effects and random effects models with moving
averages and normal data with a Hausman test. For both the contemporaneous and moving average
models, the Hausman test produced a non-significant result, indicating that a random effects model
is more appropriate. I used a moving average model in addition to the standard model because the
moving averages should account for the seasonality of airline profits.
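To illustrate the fixed-effects side of this comparison, here is a minimal "within" estimator on a simulated balanced panel; the data and the true slope of 2.0 are invented, and this is a sketch of the estimation idea rather than the paper's actual model.

```python
import numpy as np

# Simulated balanced panel: 15 airlines, 48 quarters, airline-specific
# intercepts (fixed effects) that a pooled OLS would confound with x.
rng = np.random.default_rng(0)
n_airlines, n_quarters = 15, 48
alpha = rng.normal(scale=3.0, size=n_airlines)       # airline fixed effects
x = rng.normal(size=(n_airlines, n_quarters))        # a single regressor
y = alpha[:, None] + 2.0 * x + rng.normal(scale=0.1, size=x.shape)

# Demeaning each airline's series removes the fixed effect; OLS on the
# demeaned ("within") data then recovers the slope.
x_w = x - x.mean(axis=1, keepdims=True)
y_w = y - y.mean(axis=1, keepdims=True)
beta_within = float((x_w * y_w).sum() / (x_w ** 2).sum())
```

The Hausman test then asks whether this within estimate differs systematically from the random-effects estimate; when it does not, as in the models above, random effects is the more efficient choice.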
The random effects model reveals a negative coefficient on jet fuel, indicating that for every dollar
increase in the price of jet fuel, airline operating margins decrease by 7%, according to the
contemporaneous model. This is consistent with what I expect from the literature (Carter, Rogers,
and Simpkins, 2003).
The positive coefficient on load factor (RSM per ASM) indicates that for every percentage point
increase in load factor, airlines can expect to see a 1.5% increase in operating margin, according to
the moving averages model. This makes sense; logically, we would expect that as revenues grow,
operating margin would grow by a similar amount. The fact that this coefficient is greater than 1%
deserves attention, as it indicates that airlines are efficient to the point that each additional passenger
costs less than the previous one. Anecdotally, this can be rationalized if we imagine a plane with a
capacity of 100 persons that only has 80 people on board. Adding five people to the plane will add
very marginally to the amount of fuel and in-flight snacks consumed, and it will not require any
additional flight attendants or pilots on the plane, so the revenue increases from the additional
tickets far outweigh the cost increases.
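A back-of-the-envelope version of that example, with invented fare and per-passenger cost figures:

```python
# A 100-seat plane at 80% load factor takes on five more passengers.
# The fare and per-passenger marginal cost below are invented.
fare = 200.0               # revenue per additional ticket
marginal_cost = 15.0       # extra fuel and snacks per additional passenger
added = 5

added_revenue = fare * added
added_cost = marginal_cost * added
# Crew and aircraft costs do not change, so nearly all of the added
# revenue flows straight into operating margin:
margin_on_added_seats = (added_revenue - added_cost) / added_revenue
```

With these numbers the marginal seats earn a margin above 90%, far higher than the airline's overall margin, which is consistent with a load-factor coefficient greater than one.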
Four-firm revenue concentration also has a positive coefficient, indicating that the effect of a
10% increase in industry revenue concentration is a .04% increase in operating margin in the moving
averages model. This result provides evidence in support of the idea that the industry does not
represent a contestable market, and also supports Bain’s structure-conduct-performance theory. As
the structure of the market changes, the firms in the market are able to adjust their operations and
pricing and thus perform better financially. One possible reason that the value of this coefficient is
so small is that while some firms may benefit from increased concentration, some firms may suffer
and see their operating margins drop.
The discussion of demand is more complex, in part because of interpretability issues and in part because the moving average and contemporaneous models disagree over the sign of this coefficient. The moving average model indicates that increases in demand are associated with decreases in operating margin, while the standard random effects regression shows a positive coefficient on demand. Interpreting this coefficient is challenging, but theory tells us to expect an increase in demand to be associated with an increase in operating margin (Borenstein, 2011). In the contemporaneous random effects model, the coefficient tells us that every 10,000-unit increase in demand (whether the change comes from ticket prices, revenue miles, or both) is associated with a 3 percentage point increase in operating margin.
The contemporaneous model elasticities (computed with Stata's dyex method) are helpful in understanding the regression results. Load factor has a semi-elasticity of 1.18, indicating a 1.18 percentage point increase in operating margin for every 1% change in load factor. A 1% change in revenue concentration yields a .17 percentage point change in operating margin, according to the model. A 1% increase in jet fuel prices decreases operating margin by .14 percentage points, and a 1% increase in demand increases operating margin by .14 percentage points. These findings provide evidence in support of Bain's (1950) conjecture that increased concentration leads to increased profits.
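Stata's dyex option reports dy/d(ln x) = β·x̄, so these semi-elasticities can be reproduced directly from the coefficients and sample means shown in the Appendix C output. A quick check in Python (values copied from the Stata output):

```python
# Coefficients and sample means from the contemporaneous random effects
# output in Appendix C (xtreg ..., re followed by margins, dyex atmeans).
coefs = {"rsmperasm": 1.474853, "profitconcentration": 0.413676,
         "jetfuel": -0.0660432, "demandna": 3.08e-06}
means = {"rsmperasm": 0.8032384, "profitconcentration": 0.4135105,
         "jetfuel": 2.093877, "demandna": 46771.4}

# dyex = beta * xbar: change in operating margin per 100% proportional
# change in x, evaluated at the mean of x (divide by 100 for a 1% change).
for var in coefs:
    print(f"{var}: {coefs[var] * means[var]:.4f}")
# rsmperasm: 1.1847, profitconcentration: 0.1711,
# jetfuel: -0.1383, demandna: 0.1441
```

These reproduce the dy/ex column of the margins output (1.184658, .1710594, -.1382863, .1440674) up to rounding of the inputs.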
The significance of the variables discussed previously is also of interest. In both models, load factor and the revenue concentration ratio are statistically significant (p-values less than .01), while the jet fuel coefficient is significant below the .05 level only in the contemporaneous random effects model. Demand does not appear statistically significant in either model; this is not surprising, since the demand covariate takes opposing signs in the two models.
The contemporaneous and moving average models are similar in that almost all of their coefficients match the theory, the exception being the demand coefficient in the moving average model. Both models have similar R2 values: the moving average model's within-R2 is 0.14, while the standard model's is 0.18. Both models have significant Wald chi-squared statistics of 92 and 128, respectively. Because the jet fuel coefficient is more significant in the contemporaneous model, I would select the contemporaneous model over the moving average model.
This model is robust. We would not expect our results to change very much if we removed one airline, because some of the independent variables are the same across the industry, and most of the airlines experienced similar fluctuations in operating margin over the twelve-year period this model examines (see Figure 1). We can also confirm the robustness of this estimation by comparing it with the moving averages model and conclude that even when we control for cyclicality, we see similar results. A discussion of the model residuals is contained in Appendix D and also helps justify the robustness of the model.
VI. B. Results: Fare Prices
The analysis of fare prices yields similar conclusions about the industry structure. To analyze fare price, I used an OLS regression with industry-average variables (Appendix E). In the OLS model, jet fuel, load factor, and concentration ratio all held positive coefficients, in accordance with economic theory and lending support to the idea that industry structure and costs influence pricing. The indicator for industry structure, four-firm revenue concentration, indicates that a 1 percentage point increase in revenue concentration leads to a $3.07 increase in average fare price. This is consistent with the current industry narrative surrounding increased demand in conjunction with mergers and higher ticket prices (Dastin, 2014; Carey and Nicas, 2014). Jet fuel price increases are also associated with higher fares, with the model predicting a $25.60 fare increase for every $1 increase in fuel price. The load factor variable indicates a $1.03 average fare increase for every 1 percentage point increase in load factor, which follows from the idea that airlines can charge more as seats become scarcer, and may also reflect the fact that prices have gone up as airlines have reduced their available seat miles.
Elasticities for the OLS fare price model are all less than 1. Jet fuel price has an elasticity of .16, indicating that a 1% increase in jet fuel price raises the fare price by .16%. Load factor has an elasticity of .25, meaning that a 1% increase in load factor increases fare price by .25%, and industry revenue concentration has an elasticity of .38, meaning that a 1% increase in industry revenue concentration increases fare price by .38%. These elasticities are interesting given that airlines can probably respond to these factors on a daily or even minute-by-minute basis, matching fares to the number of seats left on a plane and to updated fuel prices, which makes the low values somewhat surprising. However, the fact that industry concentration has the largest magnitude of any elasticity comes as no surprise given our prior conjectures about industry structure. When airlines have more market power, they can increase ticket prices because there is less competition in the market. Consistent with Borenstein's (1992) findings (approached from the opposite direction), decreasing competition increases prices.
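Stata's eyex elasticity is β·x̄/ȳ, and since an OLS prediction evaluated at the sample means equals the sample mean of the dependent variable, the Appendix E output contains everything needed to reproduce these numbers. A sketch using the reported coefficients and means:

```python
# Coefficients and sample means from the fare-price OLS output in Appendix E.
coefs = {"jetfuel": 25.60118, "profitconcentration": 307.5478,
         "avgload": 103.0576}
means = {"jetfuel": 2.04928, "profitconcentration": 0.4201327,
         "avgload": 0.8002188}
intercept = 72.00405

# For OLS, the fitted value at the means equals the mean fare (about $336).
fare_mean = intercept + sum(coefs[v] * means[v] for v in coefs)

# eyex = beta * xbar / ybar: percent change in fare per 1% change in x.
for v in coefs:
    print(f"{v}: {coefs[v] * means[v] / fare_mean:.3f}")
# jetfuel: 0.156, profitconcentration: 0.384, avgload: 0.245
```

These match the ey/ex column of the margins output (.1560743, .3843873, .2453347) up to rounding, and as a by-product recover the implied average fare of roughly $336.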
This model is also fairly robust. It, too, would likely not suffer from the removal of one airline, and the high R2 value in conjunction with the theoretically expected signs also indicates the strength of the model (Table 5). Additionally, the F-test p-value of 0.0000 is an indicator of the model's validity. All of the variables are statistically significant, with p-values less than .05. Both jet fuel price and concentration ratio have p-values near zero, and load factor has a p-value of .047. The residual discussion (Appendix F) also speaks to the usefulness of the model.
VII. Conclusion
This paper examined the factors that influence airline profitability and average airfares using panel data on 15 U.S. airlines from 2002 to 2014. I found evidence in support of the hypothesis that increases in the 4-firm revenue concentration ratio, demand, and load factor all improve airline operating margins, while jet fuel price increases decrease operating margins. I also found evidence in support of the hypothesis that the 4-firm revenue concentration ratio, jet fuel prices, and load factor all increase the average airline ticket price.
These conclusions provide evidence contrary to the idea of the airline industry as a contestable market, and support Bain's structure-conduct-performance theory by showing that airline operating margins increase when the industry becomes more consolidated. They also support the anecdotal evidence appearing in the media recently regarding jet fuel prices and demand driving strong airline earnings (Dastin, 2014; Carey and Nicas, 2014).
The contemporaneous operating margin model has a few limitations, the first of which concerns its predictive power. Because some of these metrics are only measured across the industry, rather than for individual airlines, it is difficult to make predictions for individual airlines. In addition, this model does not include many smaller airlines, which are still an important part of the industry, although their operating structures may differ. Perhaps the most important limitation stems from data availability – because the model only covers 2002 through 2014, it cannot compare profitability factors across decades to see whether these factors have changed. It would be useful to compare this model with the same regressions on pre-2002 industry data.
There are many opportunities for future research on this topic. In particular, a more granular
analysis using the same variables on particular routes or at specific airports would be of value to the
industry. One of the largest questions that this paper opens up is how much mergers play a role in
recent airline profitability, and how these mergers influenced the independent variables in this paper.
Several of the 15 airlines in this data set were in operation at the beginning of the 12-year period, but
not the end. Additionally, a measurement for airline technology (both IT infrastructure and aircraft)
and a measurement of airline employee efficiency could be incorporated into this analysis to measure
how airlines are improving operationally.
References
Bain, Joe S. (May 1950). The American Economic Review, Vol. 40, No. 2, Papers and Proceedings of the
Sixty-second Annual Meeting of the American Economic Association, pp. 35-47.
Bamberger, Gustavo E., Dennis W. Carlton, and Lynette R. Neumann. (April 2004). An Empirical
Investigation of the Competitive Effects of Domestic Airline Alliances. Journal of Law and
Economics, Vol. 47, No. 1, pp. 195-222.
Basic Measurements in the Airline Business. American Airlines website. Accessed 12/9/14.
http://www.aa.com/i18n/amrcorp/corporateInformation/facts/measurements.jsp
Baumol, William J., John C. Panzar, and Robert Willig. (1982). Contestable Markets and the Theory of
Industry Structure. Harcourt College Publishing.
Behn, Bruce K., and Richard A. Riley. (1999). "Using nonfinancial information to predict financial
performance: The case of the US airline industry." Journal of Accounting, Auditing & Finance
14.1: 29-56.
Blalock, Garrick, Vrinda Kadiyali and Daniel H. Simon. forthcoming. “The Impact of Post-9/11
Airport Security Measures on the Demand for Air Travel.” Journal of Law and Economics.
http://dyson.cornell.edu/faculty_sites/gb78/wp/airport_security_022305.pdf
Borenstein, Severin and Nancy L. Rose. (September 2007). “How Airline Markets Work...Or Do
They? Regulatory Reform in the Airline Industry.” NBER Working Paper No. 13452. Online.
Borenstein, Severin. “On the Persistent Financial Losses of U.S. Airlines: A Preliminary
Exploration.” NBER Working Paper No. 16744 (January 2011). JEL No. L1,L93. Online.
Borenstein, Severin. (1992). “The Evolution of U.S. Airline Competition.” Journal of Economic Perspectives.
http://faculty.haas.berkeley.edu/borenste/download/JEP92AirEvol.pdf
Bureau of Transportation Statistics. (2014). Fares, Financial, Revenue Passenger-Miles, Available Seat-Miles.
http://www.rita.dot.gov/bts/
Carey, Susan, and Jack Nicas. (2014, Oct. 23). “Airlines post strong profits, offer bullish outlooks.”
Wall Street Journal.
Carter, D., Rogers, D. A., & Simkins, B. J. (2003). Does fuel hedging make economic sense? The
case of the US airline industry. Journal of Finance website. Accessed at http://www.gresicetai.hec.ca/cref/sem/documents/030923.pdf
Dastin, Jeffrey, (2014, Oct. 23). “U.S. airlines see third-quarter profits rise, upbeat outlook.” Reuters.
Forbes, Silke Januszewski. 2004. “The Effect of Air Travel Delays on Airline Prices.” University of
California, San Diego. Mimeo. Available at:
http://weber.ucsd.edu/˜sjanusze/www/airtrafficdelays.pdf
Isidore, Chris. (2013, Feb. 14). “US Airways-American Airlines to merge.” CNN Money.
http://money.cnn.com/2013/02/14/news/companies/us-airways-american-airlines-merger/index.html
Jansen, Bart. "Several Airlines Announce Record Profits." USA Today. Gannett, 24 July 2014. Web.
19 Sept. 2014.
Johnsson, Julie, Mary Schlangenstein, and Michael Sasso. "United Tumbles as It Stands Alone With
Airline Profit Loss." Bloomberg.com. Bloomberg, 4 Apr. 2014. Web. 19 Sept. 2014.
Liang, Jiajun. (2013) "What are the Effects of Mergers in the U.S. Airline Industry? An Econometric
Analysis on Delta-Northwest Merger," The Macalester Review: Vol. 3: Iss. 1, Article 2.
Available at: http://digitalcommons.macalester.edu/macreview/vol3/iss1/2
Mayerowitz, Scott. (2014, Nov. 17). “Jet fuel prices plummet, yet air fares keep rising.” Boston Globe.
http://www.bostonglobe.com/business/2014/11/17/fuel-prices-are-lower-why-arent-airline-tickets-cheaper/6IO8Hkcy4Norm71MzAvfaJ/story.html
MIT Airline Data Project. Website defining key airline industry metrics and maintaining data on
those metrics. http://web.mit.edu/airlinedata/www/default.html
Morrison, Steven A., and Clifford Winston. (1989) “Enhancing the Performance of the Deregulated
Air Transportation System.” Brookings Papers on Economic Activity Microeconomics, Vol.
1989, pp. 61-123
US Energy Information Administration. (2014). US Gulf Coast Kerosene-Type Jet Fuel Spot Price FOB.
http://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=pet&s=eer_epjk_pf4_rgc_dpg&f
=m
Whinston, Michael D., and Scott C. Collins. (1992). “Entry and Competitive Structure in Deregulated
Airline Markets: An Event Study Analysis of People Express.” RAND Journal of Economics, 23(4):
445-462.
Appendix
Appendix A: Operating Margins
Figure 1: Operating Margins by Air Carrier
Figure 2: Industry Average Operating Margin
Appendix B: Carriers Studied

AirTran Airways, Alaska Airlines, Allegiant Air, American Airlines, America West Airlines,
Continental Air Lines, Delta Air Lines, Frontier Airlines, Hawaiian Airlines, JetBlue Airways,
Northwest Airlines, Southwest Airlines, United Air Lines, US Airways, Virgin America
Appendix C: Operating Margin Stata Output – Hausman and Regressions

Moving average model

. xtreg opmarginma4 rsmperasm profitconcentration jetfuelma4 demandna, re

Random-effects GLS regression                   Number of obs      =       592
Group variable: carriername1                    Number of groups   =        15

R-sq:  within  = 0.1378                         Obs per group: min =        19
       between = 0.0486                                        avg =      39.5
       overall = 0.0927                                        max =        45

                                                Wald chi2(4)       =     91.91
corr(u_i, X) = 0 (assumed)                      Prob > chi2        =    0.0000

-------------------------------------------------------------------------------------
        opmarginma4 |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
--------------------+----------------------------------------------------------------
          rsmperasm |   1.327397   .1578222     8.41   0.000     1.018071    1.636723
profitconcentration |   .5963654   .1357281     4.39   0.000     .3303433    .8623875
         jetfuelma4 |  -.0143931   .0124105    -1.16   0.246    -.0387171     .009931
           demandna |  -2.73e-06   1.78e-06    -1.53   0.126    -6.21e-06    7.62e-07
              _cons |  -1.194667   .1271141    -9.40   0.000    -1.443806   -.9455277
--------------------+----------------------------------------------------------------
            sigma_u |  .10574337
            sigma_e |  .14162803
                rho |  .35792586   (fraction of variance due to u_i)
Appendix D: Models

Explaining Operating Margin: Random Effects Models

Variable                       Contemporaneous   Moving averages –    Margins (dyex),
                                                 Jet Fuel and         Contemporaneous
                                                 Operating Margin
Intercept                          -1.39             -1.19
                                 (-10.82)           (-9.40)
Load Factor                         1.47**            1.33**              1.18
                                   (9.24)            (8.41)
4-Firm Revenue Concentration        0.41**            0.60**              0.17
                                   (2.94)            (4.39)
Jet Fuel Price                     -0.07**           -0.01               -0.14
                                  (-4.63)           (-1.16)
Demand                           3.08E-06         -2.73E-06               0.14
                                   (1.41)           (-1.53)
σμ                                  0.09              0.11
σε                                  0.15              0.14
ρ                                   0.26              0.36
R-squared: Within                   0.18              0.14
R-squared: Between                  0.04              0.05
R-squared: Overall                  0.13              0.09
Observations (N)                     592               592
Groups                                15                15
Wald chi-squared                     128                92
P-value, 4 DF                          0                 0

*significance at .05 level
**significance at .01 level
T-statistics in parentheses under coefficients
Explaining Average Airfares: OLS Model

Variable                       Coefficient   Elasticity (eyex)
Intercept                          72
                                 (1.75)
Load Factor                       103.1*          0.245
                                 (2.05)
4-Firm Revenue Concentration      307.5**         0.384
                                 (9.68)
Jet Fuel Price                     25.6**         0.156
                                (10.07)
R-squared                          0.89
Adjusted R-squared                 0.88
SSR                                4395
Observations (N)                     50
F-statistic                         124
P-value, 3 DF                         0

*significance at .05 level
**significance at .01 level
T-statistics in parentheses under coefficients
Contemporaneous model

. xtreg opmargin rsmperasm profitconcentration jetfuel demandna, re

Random-effects GLS regression                   Number of obs      =       592
Group variable: carriername1                    Number of groups   =        15

R-sq:  within  = 0.1832                         Obs per group: min =        19
       between = 0.0403                                        avg =      39.5
       overall = 0.1257                                        max =        45

                                                Wald chi2(4)       =    127.83
corr(u_i, X) = 0 (assumed)                      Prob > chi2        =    0.0000

-------------------------------------------------------------------------------------
           opmargin |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
--------------------+----------------------------------------------------------------
          rsmperasm |   1.474853   .1596745     9.24   0.000     1.161897    1.787809
profitconcentration |    .413676   .1409062     2.94   0.003      .137505     .689847
            jetfuel |  -.0660432    .014265    -4.63   0.000    -.0940022   -.0380842
           demandna |   3.08e-06   2.18e-06     1.41   0.157    -1.19e-06    7.35e-06
              _cons |  -1.388466   .1283094   -10.82   0.000    -1.639948   -1.136984
--------------------+----------------------------------------------------------------
            sigma_u |   .0868341
            sigma_e |  .14527761
                rho |  .26322097   (fraction of variance due to u_i)
-------------------------------------------------------------------------------------

Margins

. margins, dyex (*) atmeans

Conditional marginal effects                    Number of obs      =       592
Model VCE    : Conventional

Expression   : Linear prediction, predict()
dy/ex w.r.t. : rsmperasm profitconcentration jetfuel demandna
at           : rsmperasm           =    .8032384 (mean)
               profitconcentration =    .4135105 (mean)
               jetfuel             =    2.093877 (mean)
               demandna            =     46771.4 (mean)

-------------------------------------------------------------------------------
              |            Delta-method
              |      dy/ex   Std. Err.      z    P>|z|     [95% Conf. Interval]
--------------+----------------------------------------------------------------
    rsmperasm |   1.184658   .1282567     9.24   0.000     .9332799    1.436037
profitconce~n |   .1710594   .0582662     2.94   0.003     .0568598     .285259
      jetfuel |  -.1382863   .0298693    -4.63   0.000     -.196829   -.0797437
     demandna |   .1440674   .1019084     1.41   0.157    -.0556694    .3438042
-------------------------------------------------------------------------------
Hausman – contemporaneous

. hausman fe re

Note: the rank of the differenced variance matrix (3) does not equal the number
of coefficients being tested (4); be sure this is what you expect, or there may
be problems computing the test. Examine the output of your estimators for
anything unexpected and possibly consider scaling your variables so that the
coefficients are on a similar scale.

                    ---- Coefficients ----
             |      (b)          (B)            (b-B)     sqrt(diag(V_b-V_B))
             |       fe           re         Difference          S.E.
-------------+----------------------------------------------------------------
   rsmperasm |    1.512895     1.474853        .0380426        .0320381
profitconc~n |    .4198087      .413676        .0061327        .0075831
     jetfuel |   -.0652263    -.0660432        .0008169        .0008572
    demandna |    2.82e-06     3.08e-06       -2.58e-07        1.93e-07
------------------------------------------------------------------------------
        b = consistent under Ho and Ha; obtained from xtreg
        B = inconsistent under Ha, efficient under Ho; obtained from xtreg

Test:  Ho:  difference in coefficients not systematic

            chi2(3) = (b-B)'[(V_b-V_B)^(-1)](b-B)
                    =        2.78
          Prob>chi2 =        0.4274
Appendix D: Residuals – Operating Margin Random Effects Contemporaneous Model
Figure 3 plots the model residuals chronologically. While there are a few outliers, overall, the
residual plot looks good. We do not see increasing or decreasing residual variance, so there is most
likely not a problem with heteroskedasticity. There is no pattern in the residuals, so there is also not
a serial correlation problem with the model.
Figure 3
Appendix E: Fare Price Stata Output – Regressions

. reg fareprice jetfuel profitconcentration avgload

      Source |       SS       df       MS              Number of obs =      50
-------------+------------------------------           F(  3,    46) =  123.53
       Model |  35407.5178     3  11802.5059           Prob > F      =  0.0000
    Residual |  4394.98912    46  95.5432417           R-squared     =  0.8896
-------------+------------------------------           Adj R-squared =  0.8824
       Total |   39802.507    49   812.29606           Root MSE      =  9.7746

------------------------------------------------------------------------------
    fareprice |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
--------------+----------------------------------------------------------------
      jetfuel |   25.60118   2.543072    10.07   0.000     20.48225    30.72012
profitconce~n |   307.5478   31.76375     9.68   0.000     243.6107    371.4849
      avgload |   103.0576   50.38555     2.05   0.047     1.636794    204.4785
        _cons |   72.00405   41.09154     1.75   0.086    -10.70892     154.717
------------------------------------------------------------------------------

. margins, eyex (*) atmeans

Conditional marginal effects                    Number of obs      =        50
Model VCE    : OLS

Expression   : Linear prediction, predict()
ey/ex w.r.t. : jetfuel profitconcentration avgload
at           : jetfuel             =     2.04928 (mean)
               profitconcentration =    .4201327 (mean)
               avgload             =    .8002188 (mean)

------------------------------------------------------------------------------
              |            Delta-method
              |      ey/ex   Std. Err.      t    P>|t|     [95% Conf. Interval]
--------------+----------------------------------------------------------------
      jetfuel |   .1560743   .0155168    10.06   0.000     .1248406     .187308
profitconce~n |   .3843873   .0397312     9.67   0.000     .3044125    .4643622
      avgload |   .2453347     .11995     2.05   0.047     .0038879    .4867814
------------------------------------------------------------------------------

Appendix F: Residuals – Fare Price OLS Model
Figure 4 plots the model residuals against the actual values. There are not many outliers and
there is no pattern in the residuals, so there is not a serial correlation issue with the model. We do
not see increasing or decreasing residual variance, so there is most likely not a problem with
heteroskedasticity. The plot of fitted values against real values (Figure 5) also does not prompt any
worries about variance patterns, and variance looks relatively constant.
Figure 4
Figure 5
Back to School: Drivers of Educational Attainment Across States
(1990-2010)
Tyler J. Skluzacek ‘16
Introduction to Econometrics
Bachelor’s degree attainment rates vary greatly between states in the United States, from 20% in
New Mexico to 48% in Massachusetts. This paper examines the hypothesis that a state’s geography,
governmental college subsidies, and college wage premium affect a state’s level of educational
attainment. Little literature to date examines the factors behind variation in educational attainment
across states; most work instead compares nations. This paper uses the model presented by Katz &
Goldin (2008) that examines the variation of educational attainment between nations in different
hemispheres. Interestingly, all empirical findings agree with economic theory – increases in
governmental college spending and the college wage premium encourage more citizens to attend
school, while a state's geographical location in the South hampers its ability to increase its
attainment. To achieve this result, this paper examines both panel data and cross-sectional
regressions between 1990 and 2010, with both regressions showing significant results. I conclude,
using elasticities, that higher educational spending by the government has the largest effect on
educational attainment of all factors presented. One implication this study has for public policy is
that governments wishing to increase the educational attainment of their populations could simply
increase their subsidies of college educations.
I. Introduction
In 2010, 31.1% of United States citizens held a bachelor’s degree. Individual states, however, do not
have homogeneous bachelor’s attainment rates, and range from 20% in New Mexico up to 48% in
Massachusetts (NCHEMS). Katz and Goldin (2008), authors of the book The Race Between Education
and Technology, argue that variations in educational attainment rates between areas can be explained
through the college wage premium – the gap in earnings between high school- and college-educated
workers. States with higher college wage premia should experience greater educational attainment
rates, because the returns to personal investment in higher education are relatively higher. This is
illustrated in Figure 1. Other notable economists and geographers argue that a state’s geographical
location, urban-to-rural population ratio, and governmental college subsidies drive college
attainment in that state. No current economic literature analyzes the relative effects of geography,
government education spending, and the college wage premium on educational attainment over time
through 2010. This paper examines and confirms the hypothesis that all of these factors increase
educational attainment over time.
This paper is organized as follows: Section II reviews relevant literature on educational
attainment across states. This section presents theories and empirical research by Katz & Goldin
(2007, 2008) on the college wage premium, Barr (2004) on governmental higher educational
spending by states, and Sander (2008) and Kelly (2010) on geographical location and characteristics.
Section III suggests a conceptual model for predicting educational attainment in a state, and
discusses ideal data for the experiment. Section IV shows the actual model used in predicting
educational attainment in a state, given data constraints. Sections V and VI provide the results of
the main regression and discuss robustness checks used. Finally, Section VII reports the conclusion
that a higher urban-to-rural population ratio, higher education spending, and college wage premium
increase educational attainment in a state.
II. Literature Review
The empirical literature presents three main factors driving college attainment across states: (1) the
college wage premium, (2) overall geography, and (3) college subsidies from the government. The
college wage premium is defined as the gap in earnings between high school and college educated
workers. Katz and Goldin (2007) argue that the college wage premium has increased since 1980
because of skill-biased technological advancement - changes in capital stock that benefits those with
greater human capital (i.e. a college education). Katz and Goldin examine how the relative supply
and demand for skills explain variations in attainment. The data they use are from the 1950 IPUMS
and 2005 CPS MORG for the workforce aged 18 to 65 years. They find that increases in
educational attainment between 1950 and 2005 are due to the increasing college wage premia.
College graduates earned 31% higher incomes than high school graduates in 1950 and 62% higher
incomes in 2005. Simultaneously, college attainment increased from 8% to 32% over that same span.
They conclude that technological change causes an increase in college educations demanded (Katz &
Goldin, 2007).
The second key factor influencing educational attainment across states is governmental
college spending. State governments have varying levels of subsidization for higher education, and it
is theorized that states with higher spending on education will see higher educational attainment. In
2004, Nicholas Barr conducted a study of OECD nations’ higher educational spending, and found
that countries spending less on higher education experienced lower educational participation rates.[1]
For example, higher spending in nations such as New Zealand and Australia led to higher
educational participation rates. Lower spending in Slovenia and the Czech Republic led to lower
participation rates (2004). Economic theory supports this finding, because decreased prices lead to
increased quantity demanded. In terms of college educations, the tuition funded directly by the
student decreases, so the quantity of education demanded increases. Figure 2 reveals that this
relationship holds across states in a 2010 cross-section: states with greater college subsidies
experience greater educational attainment rates.
The final key factor affecting educational attainment across states is geography. Geography
has two dimensions: (1) where a state is located relative to other states, and (2) where most of the
people within the state reside. William Sander (2008) claims that one’s residential location at age 16
impacts whether that person decides to attend college. Sander’s study examines a large number of
United States metropolitan and rural areas, with a focus on predicting Chicago’s outcomes. He
looks at household level data through a mixed-method study utilizing both interviews and OLS
regression. Specifically, he claims that those living in large metropolitan areas are more likely to
both pursue and obtain college degrees. One factor contributing to this relationship is industrial
conglomeration within major cities and their metropolitan areas (Glaeser & Shapiro, 2001). The jobs
centered in urban areas are much more likely to require college degrees, partially because it proves
quite difficult to earn an income above the cost of living[2] with no education. For example, many
low-skill workers are farmers (often paid above the cost of living) and cashiers (often paid below it),
but there are not many farming jobs in metropolitan areas. Additionally, ethnic and cultural
minorities tend to conglomerate in urban areas, and these groups see the highest percentage
increases in higher educational attainment over time (Sander, 2008).
[1] Barr's (2004) article is used rather than other articles looking at state educational attainment. Most
state-based papers are pre-2000, or have a political twist, creating an opportunity for bias.
[2] Glaeser and Shapiro (2001) back this point by stating there could exist an equilibrium in which higher
cost-of-living places have higher wages. However, they admit that this does not always hold empirically.
Also important when discussing geography is the location of a given state in the United
States. The United States Census Bureau divides the United States into 4 main regions: the Midwest,
Northeast, South, and West. According to Kelly (2010) of the NCHEMS, states in the United States
South simply suffer from a misalignment of goals between national and southern policy. The
industrial culture of the South is less competitive overall than other industrial centers in the North
and West, and southern states aim to compete with other nearby low-performers in the South
(Kelly, 2010). Kelly (2010) cites one specific example in his analysis, stating that Mississippi aims to
reach the educational attainment levels of Kentucky and Tennessee (both with educational
attainment rates in the bottom 30%) rather than states in the North with much higher educational
attainment. Additionally, the South has intangible differences from the rest of the United States, such as
a legacy of slavery and the Confederacy. Though these factors are difficult to measure, there is the
chance that they drive educational attainment in the South.
These studies support the hypothesis that increasing the college wage premium,
governmental college subsidies, and the urban-to-population ratio should increase educational
attainment in a state. Additionally, states in the United States South (the region defined by the
United States Census Bureau) should have lower baseline educational attainment numbers than
states in the Midwest, Northeast, and West.
III. Ideal Data and Conceptual Model
Given our theories of the effects of the college wage premium, higher educational spending, and
geography on educational attainment, we can create the following conceptual model:
[1] Educational Attainment = f(College Wage Premium, Public Ed Spending, Geography)
This model theorizes that educational attainment in a state is a function of its college wage premium,
government higher education spending, and its geography. Most college wage premium numbers
include the population 18-64, though this does not take into account the percent of the labor force
currently enrolled in college or the percentage of the labor force who retire before age 64. Higher
educational spending is difficult to standardize, because some states reduce the tuition price of
public universities rather than providing direct subsidies to students[3] (Archibald & Feldman, 2011).
The medium by which a government subsidizes depends on the state. Some states give direct
subsidy to the student (i.e. state grants), some give indirect subsidies to the schools (i.e. land grants),
and some adjust the prices of state universities, counter-balancing the need for aid.
The structure of the analysis also makes it difficult to account for ‘brain drain’ – the
phenomenon in which students and workers are educated or employed in states they do not call home.
There is also some endogeneity between educational attainment and the college wage premium:
theoretically, as more people attend college, the premium should decrease, and after some
amount of time wages adjust such that educational attainment decreases in turn. This feedback loop is
difficult to account for with a single regression model.
IV. Actual Model
The data used in this paper are close to the ideal data, though there are some limitations in this
analysis. The data constitute a balanced panel with 150 observations over the census years 1990,
2000, and 2010. A fixed-effects specification is the most appropriate technique for the main regression:
there are many state-specific unobserved influences on educational attainment in the United States
that this approach controls for. This choice is justified by the results of a Hausman test12.
The actual data used are from the National Center for Higher Education Management Systems
(NCHEMS), the American Community Survey (ACS) and the IPUMS Microdata survey
downloaded from the ICPSR. Educational attainment data is measured as the percentage of people
in a population aged 25-64 who hold a bachelor’s degree.
The guiding regression equation will be examined extensively in this analysis, and each item
in the equation has roots in the economic theories of educational attainment provided before:
[2] Attainment_i,t = β0_i + β1 Premium_i,t + β2 EdSpending_i,t + β3 UrbanPop_i,t + β4 Region_i + e_i,t
This model uses a fixed-effects regression: it accounts for the fixed effect (the state)
over time, while allowing different regions to have different baseline educational attainments
(intercepts). Attainment refers to the overall educational attainment as the percentage of the total
population aged 25-64 in a given state (IPUMS). Premium is the additional percentage of a high
school graduate’s income earned by a college educated worker (NCHEMS). EdSpending is the
spending per student by the state and local governments on higher education, including both the
indirect and direct funding sources mentioned earlier (IPUMS and NCHEMS). Finally, the UrbanPop
and Region variables are the percentage of a state’s population living in an urban area and a dummy
variable for region (Midwest, Northeast, South, and West), respectively (United States Census
12 Hausman χ2 statistic of 12.44 with a p-value of 0.00.
Bureau). Based on the theory from the literature, the hypothesis will be confirmed if the signs on β1,
β2, and β3 are all positive13.
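A minimal sketch of how equation [2] can be estimated as a least-squares dummy-variable (LSDV) fixed-effects regression. The data here are synthetic (the paper's compiled data set is not reproduced), and all variable names and magnitudes are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic balanced panel: 50 states x 3 census years, like the paper's data.
rng = np.random.default_rng(0)
states = [f"S{i:02d}" for i in range(50)]
state_effect = {s: rng.normal(0.0, 3.0) for s in states}  # unobserved state baselines

rows = []
for s in states:
    for year in (1990, 2000, 2010):
        premium = rng.uniform(0.4, 0.8)    # college wage premium (fraction)
        spending = rng.uniform(3.0, 9.0)   # higher-ed spending per student ($000s)
        urban = rng.uniform(30.0, 95.0)    # % of population in urban areas
        # Assumed data-generating process: positive effects plus the state effect.
        attain = (5.0 + 15.0 * premium + 0.8 * spending + 0.05 * urban
                  + state_effect[s] + rng.normal(0.0, 0.3))
        rows.append((s, year, attain, premium, spending, urban))

df = pd.DataFrame(rows, columns=["state", "year", "attain", "premium", "spending", "urban"])

# LSDV form of equation [2]: C(state) absorbs each state's baseline intercept
# (and with it any time-invariant regional effect, which is why the regional
# dummies require the separate OLS variant noted under Table 1).
fit = smf.ols("attain ~ premium + spending + urban + C(state)", data=df).fit()
print(fit.params[["premium", "spending", "urban"]])
```

The estimated slopes should recover the positive effects built into the synthetic data, mirroring the hypothesized signs on β1, β2, and β3.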
V. Results
Priming the Model
The model needs virtually no adjustment from the raw regression results. Without transforms or
variable conversions, all regressions are homoscedastic14 and show no serial correlation15. The
random, normal distribution of the residuals is shown in the residual plot in Appendix D. A quick
correlation matrix shows a modest correlation of 34.8% between the urban population share and the
college wage premium, but both variables remain in the analysis on theoretical grounds.
Running the Model
The main regression for this analysis is performed on equation [2]. As presented in Table 1, the
number of observations is 150 – accounting for three snapshots of state data in 1990, 2000, and
2010. Model 2 in the table shows the effects of the college wage premium, the urban-rural ratio, and
higher education spending on educational attainment. As the value of each of
these variables increases, the total percentage of the population aged 25-64 that attains a college
education increases. Model 3 adds regional dummy variables for the Midwest, Northeast, South,
and West – with the Northeast as the reference group. As hypothesized, states in the South
(depicted as such by the United States Census Bureau) have the lowest baseline educational
attainment rates compared to all other regions.
What is more interesting, however, is the extent to which each factor impacts educational
attainment. Table 2 presents the elasticity of educational attainment with respect to each
independent variable. A 1% increase in the college wage premium yields a .315% increase in a state’s
educational attainment rate. The same increase in the urban-rural population ratio provides only a
.158% increase in the educational attainment rate. Higher education spending by the state is the
most elastic: a 1% increase in higher education spending leads to a .386% increase in the educational
attainment rate.
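The elasticities in Table 2 are point elasticities evaluated at sample means: for a linear coefficient β, the elasticity is β · x̄ / ȳ. A sketch of the calculation, using made-up numbers rather than the paper's actual coefficients and sample means:

```python
# Point elasticity at the means for a linear regression coefficient.
def elasticity_at_means(beta: float, x_mean: float, y_mean: float) -> float:
    """Percent change in y for a 1% change in x, evaluated at the sample means."""
    return beta * x_mean / y_mean

# Hypothetical: a coefficient of 0.5 with mean x = 10 and mean y = 25
print(elasticity_at_means(0.5, 10.0, 25.0))  # → 0.2
```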
13 All links to data sources are provided in Appendix E.
14 Wald-statistic of 81.57 with a p-value of 0.00.
15 A Dickey-Fuller test concludes that all variables are stationary, and no action must be taken.
This result is interesting in light of what Goldin and Katz conclude in The Race Between
Education and Technology (2008), a book often cited in contemporary labor economics. They assert that
the college wage premium, caused by skill-biased technological change, is the single greatest driver of
educational attainment in the United States. According to this study, however, higher educational
spending is actually the greatest driver, by a small margin. The relationship between higher
educational spending and educational attainment is illustrated in Figure 3. Goldin and Katz,
however, examined a number of different nations rather than states within the United
States. Differences in geographic mobility could explain the difference between national and state
analyses. For example, if the college wage premium rises in California, people can move to
California from other states, weakening the incentive for natives in California to invest in higher
education. On the other hand, if the college wage premium increases in Australia, it is difficult for
labor market participants in Europe and North America to move to Australia for an education.
Therefore, this study finds that the relative impact of the college wage premium and higher
educational spending are switched when one looks at states in place of nations16.
Robustness Check
The initial regression revealed that government spending on higher education, the college wage
premium, and geography all drive educational attainment. Fixed-effects regressions tend to diminish
cross-sectional variation in both the independent and dependent variables, and they tend to exacerbate
measurement-error problems. To alleviate this, I run three cross-sectional OLS regressions, for 1990,
2000, and 2010, to test the hypothesis that a state’s educational attainment correlates positively with
the college wage premium, governmental college spending, and the urban-rural population ratio.
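The per-year cross-sectional OLS check can be sketched as follows, again on synthetic data with illustrative variable names and magnitudes:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic cross-sections: 50 states in each census year.
rng = np.random.default_rng(1)
rows = []
for year in (1990, 2000, 2010):
    for _ in range(50):
        premium = rng.uniform(0.4, 0.8)
        spending = rng.uniform(3.0, 9.0)
        urban = rng.uniform(30.0, 95.0)
        # Assumed data-generating process with positive effects.
        attain = (5.0 + 15.0 * premium + 0.8 * spending + 0.05 * urban
                  + rng.normal(0.0, 0.5))
        rows.append((year, attain, premium, spending, urban))
df = pd.DataFrame(rows, columns=["year", "attain", "premium", "spending", "urban"])

# One cross-sectional OLS per census year, mirroring the layout of Table 3.
results = {
    year: smf.ols("attain ~ premium + spending + urban", data=grp).fit()
    for year, grp in df.groupby("year")
}
for year in sorted(results):
    print(year, round(results[year].params["spending"], 3))
```

With only 50 observations per regression, the per-year estimates are noisier than the panel estimates, which is consistent with the less significant results reported below.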
The robustness regressions attempt to mimic the results of Model 3 from Table 1 for each
year: 1990, 2000, and 2010. Table 3 shows the cross-sectional results. The OLS provides less
significant results, but most of the variable effects are similar. Most coefficient directions are in
accord with theory, except the 1990 college wage premium and the 2000 difference between the
South and the West. In 1990, states with a higher college wage premium actually experience lower
educational attainment, with all else held constant. In 2000, the West had a lower baseline
attainment level than the South. Because each of these regressions examines only a cross-section,
unusual years can have a larger effect on the model. Because the variables, for the most part, carry
theory-accurate signs on their coefficients, I conclude that the hypothesis is supported – the
premium, urban-rural ratio, and higher educational spending positively impact educational
attainment.
16 Possible reasons for this switching phenomenon are explained in Section VII.
VI. Conclusion
In this study I seek to determine the factors driving educational attainment levels across each of the
50 states in the United States. My hypothesis is that higher college wage premia, educational
spending, and urban-to-rural population ratios increase a state’s educational attainment
relative to the average state. Additionally, I hypothesize that a state located in the South
should have a lower bachelor’s attainment rate than other states. To measure the effects of these
factors on educational attainment, I run a fixed-effects model using panel data on states between
1990 and 2010. From the analysis, I find that a state’s higher education spending has the largest
impact on educational attainment. The college wage premium, surprisingly, has a slightly smaller
effect, but all three variables align with the theory – an increase in each leads to greater
educational attainment. All relationships between these variables and educational attainment are
statistically significant. Looking at region, I find that the ‘educational effects’ hierarchy is
as follows: (1) Northeast, (2) Midwest, (3) West, (4) South, where the Northeast has the largest gains
in levels of educational attainment. This confirms my hypothesis that a state’s location in the South
decreases its educational attainment relative to other regions.
To improve upon this study, future research could take several directions. I focused
on South versus ‘Not South’ for the geographical analysis, but when I broke ‘Not South’ into the West,
Midwest, and Northeast, I noticed that the Northeast has a significantly higher baseline educational
attainment than the other regions. An extension of this research could examine the factors behind the
Northeast’s relative educational boom, whether explicit industry variables or the demographics of an
area. How do states differ with cotton, computers, corn, lumber, automobiles, or steel as their main
industry? How do ethnic, age, or family-background demographics drive educational attainment?
Additionally, the study could examine the attainment of specific college majors, to see whether
attainment rises in fields with higher premia. This would provide a robustness check of the hypothesis
that an increased college wage premium increases educational attainment.
A number of implications could arise from this study. Because it is clear that higher
educational spending by states and localities increases educational attainment, states could consider
reallocating money to education to fill high-skill job openings, should they exist. The geographical
effects on educational attainment should push researchers and policy-makers in those states to
determine courses of action to better their educational outcomes. In each of the 50 states, college
educated workers earn more than their high school educated counterparts. This means that if a
state can increase its educational attainment numbers, more people in the state can be better off.
References
Archibald, R. B., & Feldman, D. H. (2011). Why does college cost so much? New York: Oxford
University Press.
Barr, N. (2004). Higher education funding. Oxford Review of Economic Policy, 20(2).
Glaeser, E. L., & Shapiro, J. M. (2003). Urban growth in the 1990s: Is city living back?
Journal of Regional Science, 43(1), 139-165.
Goldin, C., & Katz, L. F. (2007). Long-run changes in the wage structure: Narrowing,
widening, polarizing. Brookings Papers on Economic Activity, 2007(2), 135-165.
Goldin, C., & Katz, L. F. (2008). The race between education and technology. Cambridge, MA:
Belknap Press of Harvard University Press.
Kelly, P. J. (2010). Closing the college attainment gap between the U.S. and most educated
countries, and the contributions to be made by the states. National Center for Higher
Education Management Systems.
National Center for Higher Education Management Systems, Information Center. (2014).
Income: Earnings premium by education: 2007-2010 data [Data set]. Retrieved from
http://www.higheredinfo.org/
Sander, W. (2006). Educational attainment and residential location. Education and Urban
Society, 38(3), 307-326.
United States Census Bureau American Community Survey (ACS) Report. (2011). U.S.
neighborhood income inequality in the 2005-2009 period. United States Census Bureau.
ACS-16.
Appendix
A. Figures
Figure 1. Data Source: Premium data from the National Center for Higher Education Management
Systems and College Attainment data from the American Community Survey.
Figure 2. Data Source: Spending data from the National Center for Higher Education Management
Systems and College Attainment data from the American Community Survey.
B. Results
Table 1: Main Results for Educational Attainment

Independent Variable    Model 1       Model 2       Model 3 (17)
College Premium         25.00         13.26         16.288
                        (9.64)***     (4.41)***     (5.28)***
Urban-Rural Ratio       ---           0.414         0.0609
                                      (3.51)***     (1.86)**
Higher Ed Spending      ---           0.0277        0.0149
                                      (3.42)***     (2.67)***
Midwest                 ---           ---           -4.36
                                                    (-0.57)
South                   ---           ---           -8.66
                                                    (-7.39)***
West                    ---           ---           -5.77
                                                    (-4.80)***
Cons                    3.549         14.80         ---
                        (1.17)        (6.14)***
Observations            150           150           150
Number of States        50            50            50
R-Squared Within        0.4841       0.6201         ---
R-Squared Between       0.0034       0.1130         0.612
R-Squared Overall       0.1848       0.2208         ---
F-Statistic             5.19***      5.79***        20.69***

t-statistic in parentheses
*** p<0.01, ** p<.05, * p<.10

(17) This was another variation of the fixed-effects regression, estimated with OLS due to linear
algebraic complications. There is only a within-R2 produced.
Table 2: Elasticities of Non-Dummy Independent Variables (At Means)

Independent Variable    Elasticity
College Premium         0.315
Urban-Rural Ratio       0.158
Higher-Ed Spending      0.386
Figure 3. Data Source: Spending data from the National Center for Higher Education Management Systems
and College Attainment data from the American Community Survey.
C. Robustness
Table 3: Robustness Results – OLS Regressions

Independent Variable    1990          2000          2010
College Premium         -5.82         7.84          5.722
                        (-1.55)*      (1.71)*       (1.31)
Urban-Rural Ratio       0.0776        0.094         0.1129
                        (3.07)**      (1.75)*       (2.40)**
Higher Ed Spending      0.00711       3.33          -5.912
                        (0.61)        (-2.38)**     (-4.93)***
Midwest                 -0.835        0.00433       -0.0044
                        (-0.53)       (0.68)        (-0.96)
South                   -1.43         -2.00         -6.15
                        (-0.82)       (-0.82)       (-0.80)***
West                    ---           -4.77         -5.75
                                      (-3.28)***    (-4.75)***
Cons                    6.25          3.21          23.2
                        (0.82)        (5.61)***     (8.22)***
Observations            50            50            50
R-Squared               0.4829        0.3965        0.5947
F-Statistic             6.69***       10.18***      10.15***

t-statistic in parentheses
*** p<0.01, ** p<.05, * p<.10
D. Residuals for Fixed Effects Model – Model 3 from Table 1
E. Links to data:
National Center for Higher Education Management Systems: http://www.higheredinfo.org/
American Community Survey:
https://www.census.gov/hhes/www/income/data/earnings/
IPUMS (ICPSR): http://www.icpsr.umich.edu/icpsrweb/ICPSR/studies/116
United States Census Bureau: http://www.census.gov/2010census/data/
Data were collected from all of these sites and compiled into one data set, which is available by
emailing tskluzac@macalester.edu. For the American Community Survey and the United States
Census Bureau, I manually entered the data from 1990 and 2000, because they exist only
in .pdf format. All IPUMS data are aggregated from individuals into state-level figures.